| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,628,411,882 | vscode | Intellisense doesn't work after I installed the Snippet feature and created a snippet | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version:
- OS Version:
Steps to Reproduce:
1. Install the Snippet feature in VS Code
2. Create a new snippet
3. Intellisense does not work when entering words
| info-needed | low | Critical |
2,628,446,590 | transformers | Compile Grounding DINO | ### Feature request
I found that the Grounding DINO model `IDEA-Research/grounding-dino-base` can't be compiled.
When I use `torch.compile(<the model>)`, it raises many errors, such as `TypeError: unhashable type: 'dict'`.
Can this be implemented?
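For context, `TypeError: unhashable type: 'dict'` is an ordinary Python error: dicts cannot be hashed, so passing one where `torch.compile`'s tracing machinery expects a hashable value fails. A minimal stdlib reproduction of where the message comes from (this is only a hypothesis about the cause; it does not involve torch, and the config dict below is invented for illustration):

```python
config = {"text_threshold": 0.25}  # hypothetical nested config dict

try:
    # Anything that uses the dict as a set member or dict key does this:
    hash(config)
except TypeError as e:
    print(e)  # unhashable type: 'dict'
```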
### Motivation
It's very slow for Grounding DINO to perform batch inference, so I want some way to speed it up.
### Your contribution
Not sure. | Feature request,Vision,Compilation,Multimodal | low | Critical |
2,628,451,603 | pytorch | [ONNX] 2.0 regression: dynamic shapes lost for an operator | ### ๐ Describe the bug
Code in: https://gist.github.com/PhilCuriosity/a19ab78dfa770c3fe495069365c5a638
The full version is on Google Drive: https://drive.google.com/file/d/1TuRH3c1p2GTnNAeDjq3kZV_DuFro7rMo/view?usp=drive_link (code in parseq/trt)
Project source code: https://github.com/baudm/parseq
**Executing export2onnx.py in torch1.13 gives the correct result.**
> (torch1.13) root@d0811d03cfb1:/workspace/trt# python3 export_onnx_nn_part_test.py
/root/miniconda3/envs/torch1.13/lib/python3.8/site-packages/timm/models/helpers.py:7: FutureWarning: Importing from timm.models.helpers is deprecated, please import via timm.models
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.models", FutureWarning)
Lightning automatically upgraded your loaded checkpoint from v1.9.5 to v2.0.0. To apply the upgrade to your files permanently, run `python -m pytorch_lightning.utilities.upgrade_checkpoint --file ../outputs/parseq/2024-05-27_08-56-44/checkpoints/epoch=14-step=220998-val_accuracy=91.6555-val_NED=98.4783.ckpt`
{'charset_train': './dict/hsDict802.txt', 'charset_test': './dict/hsDict802.txt', 'max_label_length': 80, 'batch_size': 72, 'lr': 0.0007, 'warmup_pct': 0.075, 'weight_decay': 0.0001, 'img_size': [64, 640], 'patch_size': [8, 16], 'embed_dim': 384, 'enc_num_heads': 6, 'enc_mlp_ratio': 4, 'enc_depth': 12, 'dec_num_heads': 12, 'dec_mlp_ratio': 4, 'dec_depth': 1, 'perm_num': 6, 'perm_forward': True, 'perm_mirrored': True, 'decode_ar': True, 'refine_iters': 0, 'dropout': 0.1, 'name': 'parseq'}
834
/root/miniconda3/envs/torch1.13/lib/python3.8/site-packages/torch/__init__.py:853: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert condition, message
export ./weights/parseq_encoder_ar_iter0_bs6.onnx done!!!!!!!!!!
onnx check done
finished Simplified onnx
out-shape: torch.Size([6, 10, 832])
export ./weights/parseq_decoder_ar_iter0_bs6.onnx done!!!!!!!!!!
onnx check done
finished Simplified onnx
Exported model has been tested with ONNXRuntime, and the result looks good!
In torch 2.1/2.5 the correct result is not obtained; it appears that an operator that originally had a dynamic shape has been frozen to a static one.
> (torch2.4) root@d0811d03cfb1:/workspace/trt# python3 export_onnx_nn_part.py
/root/miniconda3/envs/torch2.4/lib/python3.10/site-packages/timm/models/helpers.py:7: FutureWarning: Importing from timm.models.helpers is deprecated, please import via timm.models
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.models", FutureWarning)
Lightning automatically upgraded your loaded checkpoint from v1.9.5 to v2.4.0. To apply the upgrade to your files permanently, run `python -m pytorch_lightning.utilities.upgrade_checkpoint ../outputs/parseq/2024-05-27_08-56-44/checkpoints/epoch=14-step=220998-val_accuracy=91.6555-val_NED=98.4783.ckpt`
/root/miniconda3/envs/torch2.4/lib/python3.10/site-packages/torch/__init__.py:2041: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert condition, message
export ./weights/parseq_encoder_ar_iter0_bs6.onnx done!!!!!!!!!!
onnx check done
finished Simplified onnx
out-shape: torch.Size([6, 10, 832])
export ./weights/parseq_decoder_ar_iter0_bs6.onnx done!!!!!!!!!!
onnx check done
finished Simplified onnx
2024-11-01 11:26:15.207400041 [E:onnxruntime:, sequential_executor.cc:516 ExecuteKernel] Non-zero status code returned while running Reshape node. Name:'/decoder/layers.0/self_attn/Reshape_4' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:45 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, onnxruntime::TensorShapeVector&, bool) input_shape_size == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{2,6,384}, requested shape:{10,72,32}
Traceback (most recent call last):
File "/workspace/trt/export_onnx_nn_part.py", line 384, in <module>
ort_outs = ort_session.run(None, ort_inputs)
File "/root/miniconda3/envs/torch2.4/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 220, in run
return self._sess.run(output_names, input_feed, run_options)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'/decoder/layers.0/self_attn/Reshape_4' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:45 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, onnxruntime::TensorShapeVector&, bool) input_shape_size == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{2,6,384}, requested shape:{10,72,32}
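For reference, the ONNX Runtime failure above is a plain element-count mismatch: the exported graph froze the reshape target to the tracing-time sizes, so it no longer multiplies out for the runtime input. A quick stdlib check using the shapes taken from the error message:

```python
from math import prod

input_shape = (2, 6, 384)       # the tensor actually fed at runtime
requested_shape = (10, 72, 32)  # the shape frozen into the exported graph

print(prod(input_shape), prod(requested_shape))  # 4608 23040
# Reshape is only legal when the element counts match, hence the runtime error:
assert prod(input_shape) != prod(requested_shape)
```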
### Versions
torch1.13
> aiohappyeyeballs==2.4.3
aiohttp==3.10.10
aiosignal==1.3.1
async-timeout==4.0.3
attrs==24.2.0
Brotli @ file:///croot/brotli-split_1714483155106/work
certifi @ file:///croot/certifi_1725551672989/work/certifi
charset-normalizer @ file:///croot/charset-normalizer_1721748349566/work
click==8.1.7
coloredlogs==15.0.1
cuda-python==12.3.0
filelock==3.16.1
flatbuffers==24.3.25
frozenlist==1.5.0
fsspec==2024.10.0
huggingface-hub==0.26.1
humanfriendly==10.0
idna @ file:///croot/idna_1714398848350/work
joblib==1.4.2
lightning-utilities==0.11.8
markdown-it-py==3.0.0
mdurl==0.1.2
mkl-fft @ file:///croot/mkl_fft_1695058164594/work
mkl-random @ file:///croot/mkl_random_1695059800811/work
mkl-service==2.4.0
mpmath==1.3.0
multidict==6.1.0
nltk==3.9.1
numpy @ file:///work/mkl/numpy_and_numpy_base_1682953417311/work
onnx==1.17.0
onnx-simplifier==0.4.36
onnxruntime==1.19.2
opencv-python==4.10.0.84
packaging==24.1
pillow @ file:///croot/pillow_1721059439630/work
polygraphy==0.49.9
propcache==0.2.0
protobuf==5.28.3
Pygments==2.18.0
PySocks @ file:///tmp/build/80754af9/pysocks_1605305779399/work
pytorch-lightning==2.0.0
PyYAML==6.0.2
regex==2024.9.11
requests @ file:///croot/requests_1721410876868/work
rich==13.9.3
safetensors==0.4.5
sympy==1.13.3
tensorrt @ file:///root/TensorRT-10.5.0.18/python/tensorrt-10.5.0-cp38-none-linux_x86_64.whl#sha256=038d9bd6997533a8d59a203354a9f31a852eed028c68e29342ceae21bfb92011
timm==1.0.11
torch==1.13.1
torchaudio==0.13.1
torchmetrics==1.5.1
torchvision==0.14.1
tqdm==4.66.5
typing_extensions @ file:///croot/typing_extensions_1715268824938/work
urllib3 @ file:///croot/urllib3_1727769808118/work
yarl==1.15.2
torch2.5
> aiohappyeyeballs==2.4.3
aiohttp==3.10.10
aiosignal==1.3.1
async-timeout==4.0.3
attrs==24.2.0
certifi==2024.8.30
charset-normalizer==3.4.0
click==8.1.7
coloredlogs==15.0.1
filelock==3.16.1
flatbuffers==24.3.25
frozenlist==1.5.0
fsspec==2024.10.0
huggingface-hub==0.26.1
humanfriendly==10.0
idna==3.10
Jinja2==3.1.4
joblib==1.4.2
lightning-utilities==0.11.8
markdown-it-py==3.0.0
MarkupSafe==3.0.2
mdurl==0.1.2
mpmath==1.3.0
multidict==6.1.0
networkx==3.4.2
nltk==3.9.1
numpy==1.24.4
nvidia-cublas-cu12==12.4.5.8
nvidia-cuda-cupti-cu12==12.4.127
nvidia-cuda-nvrtc-cu12==12.4.127
nvidia-cuda-runtime-cu12==12.4.127
nvidia-cudnn-cu12==9.1.0.70
nvidia-cufft-cu12==11.2.1.3
nvidia-curand-cu12==10.3.5.147
nvidia-cusolver-cu12==11.6.1.9
nvidia-cusparse-cu12==12.3.1.170
nvidia-nccl-cu12==2.21.5
nvidia-nvjitlink-cu12==12.4.127
nvidia-nvtx-cu12==12.4.127
onnx==1.17.0
onnx-simplifier==0.4.36
onnxruntime==1.19.2
opencv-python==4.10.0.84
packaging==24.1
pillow==11.0.0
propcache==0.2.0
protobuf==5.28.3
Pygments==2.18.0
pytorch-lightning==2.4.0
PyYAML==6.0.2
regex==2024.9.11
requests==2.32.3
rich==13.9.3
safetensors==0.4.5
sympy==1.13.1
timm==1.0.11
torch==2.5.0
torchmetrics==1.5.1
torchvision==0.20.0
tqdm==4.66.5
triton==3.1.0
typing_extensions==4.12.2
urllib3==2.2.3
yarl==1.16.0
| module: onnx,triaged | low | Critical |
2,628,454,882 | vscode | Webview: throttling of `setTimeout` and `setInterval` | Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: Version: 1.95.1 (Universal), Commit: 65edc4939843c90c34d61f4ce11704f09d3e5cb6
- OS Version: Darwin arm64 24.1.0
We make heavy use of Webviews via the VS Code API. We have observed that `setTimeout` and `setInterval` calls are throttled: when the requested delay is less than 1000 ms, it is clamped to a minimum of 1000 ms.
We can reproduce the problem on both macOS and Windows.
We found that the problem is much harder (and sometimes impossible) to reproduce when the display refresh rate is higher. With a refresh rate of 60 Hz, the problem is fairly easy to reproduce.
We have created a sample repo that shows the problem.
Steps to Reproduce:
1. Download the sample repo to reproduce from https://github.com/wallabyjs/webview-issue
2. Open this example in VS Code 1.47+
3. Run `npm install`
4. Run `npm run watch` or `npm run compile`
5. `F5` to start debugging
6. Run the `Webview Issue: Reproduce Webview Issue` command to create the webview.
7. Note the time displayed in the webview:
```
Expected time to update: 500ms
Actual time to update: 500ms
```
The two times should be similar.
8. If you do not have the issue, close and re-open the webview multiple times. Eventually, the actual time will update at ~1000ms instead of 500ms.
_Note: it can sometimes take a few times before the throttling starts to occur._
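The sample essentially compares the requested timer period against the wall-clock period actually observed. The same measurement technique, sketched in stdlib Python rather than the webview's JavaScript (illustration only; plain `time.sleep` is not throttled, so the numbers here stay near the requested period):

```python
import time

def measure_period_ms(expected_ms, ticks=4):
    """Return the average observed period of a repeating wait, in ms."""
    samples = []
    last = time.monotonic()
    for _ in range(ticks):
        time.sleep(expected_ms / 1000)
        now = time.monotonic()
        samples.append((now - last) * 1000)
        last = now
    return sum(samples) / len(samples)

actual = measure_period_ms(500)
print("Expected time to update: 500ms")
print(f"Actual time to update: {actual:.0f}ms")  # ~1000ms in a throttled webview
```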
| upstream,webview,chromium | low | Critical |
2,628,481,238 | electron | Requesting a capture device (`MediaDevices.getUserMedia`) without constraints always returns the first device | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Problem Description
The first audio device is always selected when no constraints are passed to the API by the application developer.
It would be desirable to
1. try to sniff out system-specific default selections (such as the device default status flags provided by system APIs like PulseAudio) and only use the first element as a fallback,
2. have some interface for a sufficiently knowledgeable end user to override this behaviour anyway, such as an environment variable.
The focus is on requirement 2, as 1 would require implementing many tests and lookups that would have to look for various combinations of not only OS, but also available backend audio services, like the many audio services available in Linux.
**Note A:** This issue is almost impossible to notice for an application developer or end user on a setup with only one media device, and might be hard to notice on multi-device setups. However, a good example of a setup where the first device in the list may not be the preferred device is one where the system registers a headphone microphone being plugged in and unplugged as, logically, a second device, whereas a hardwired, lower-quality (and thus less preferable) built-in microphone stays in first place.
### Proposed Solution
Ideally requirement 2 would be achieved by a centralized object that keeps track of such user preferences, checking environment variables and maybe config locations (perhaps even Electron-wide, not just app-specific, though that might entail security considerations), as well as a utility method which sources information from that object to select between available media devices, which would be used by `MediaCaptureDevicesDispatcher::GetPreferredAudioDeviceForBrowserContext`.
### Alternatives Considered
In the meantime, a simple override, by checking an environment variable and acting depending on its presence or absence, using a probably already existing system-agnostic environment variable API, would be just fine.
Finally, if it is undesired to have any such logic be placed inside the browser shell, give the application developer a warning if they do not pass constraints, asking them to enumerate devices and implement their own selection logic (or user selection prompt), due to note A.
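The selection logic proposed above is language-agnostic; a minimal sketch of it in Python follows. The environment-variable name `ELECTRON_PREFERRED_AUDIO_DEVICE`, the device fields, and the device IDs are all hypothetical, chosen only for illustration; this is not Electron's actual API:

```python
import os

def pick_audio_device(devices, env_var="ELECTRON_PREFERRED_AUDIO_DEVICE"):
    """Pick a capture device: env-var override first, then a system-default
    flag if the backend exposes one, then the first device as a last resort."""
    if not devices:
        return None
    preferred = os.environ.get(env_var)
    if preferred:
        for d in devices:
            if d["id"] == preferred:
                return d
    for d in devices:
        if d.get("is_system_default"):
            return d
    return devices[0]  # the current behaviour, demoted to a true fallback

devices = [
    {"id": "builtin-mic", "is_system_default": False},
    {"id": "headset-mic", "is_system_default": True},
]
print(pick_audio_device(devices)["id"])  # headset-mic
```

With no override set, the system-default headset microphone wins; setting the environment variable forces the built-in one instead.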
### Additional Information
When no constraints are found, `front()` is called. This is easily visible in the following snippet from the native browser shell:
https://github.com/electron/electron/blob/15151c68533f5d8d1c9b57dbd7953e805f7719c9/shell/browser/media/media_capture_devices_dispatcher.cc#L33-L41
This should be a very, very last resort, not a fallback for every time a developer fails to pass constraints or enumerate devices on their end. A notable example of this, and in fact the catalyst of this particular inquiry in this particular instance, is [Signal for Desktop](https://github.com/signalapp/Signal-Desktop/blob/92d22d0827b4686c0e4a5bd14c4692c3ad92cd31/ts/services/audioRecorder.ts#L73-L77). ([I just bugged them about it, too.](https://github.com/signalapp/Signal-Desktop/issues/6606#issuecomment-2451423643))
https://github.com/signalapp/Signal-Desktop/blob/92d22d0827b4686c0e4a5bd14c4692c3ad92cd31/ts/services/audioRecorder.ts#L73-L77
But it is reasonable to believe there are many more such cases, since it is not entirely obvious (see note A).
I understand that relying on the developers to be thorough with giving users a friendly interface to choice might help reduce the thickness of the middle layer in Electron's browser shell _and_ encourage developers to be more proactive in implementing their own interfaces. However, when all such tasks are off-loaded to the application developer, this can not only lead to end user frustration, but compromise on the functionality and versatility of use of the final products.
In either case, I am greatly thankful for the team's consideration of what may seem like a niche issue, but one which I can imagine is bugging more people than myself. I would love to try and contribute something myself, but I lack the understanding of the guts of Electron that the developers have. But hey, maybe there's something else I can help with! | enhancement :sparkles: | low | Critical |
2,628,494,895 | vscode | Toggling views/panels has inconsistent focus behaviour | While investigating https://github.com/microsoft/vscode/issues/198293 I noticed that our action to toggle visibility of primary or secondary side bar or panel has inconsistent behaviour when it comes to passing focus to the view that becomes visible or not.
Notice when the explorer is active how focus remains in the editor:

And now with the SCM view:

First of all, these actions seem to call into methods to toggle visibility of the container via layout service which eventually calls into `openPaneComposite`, e.g. for the primary sidebar:
https://github.com/microsoft/vscode/blob/4520d915c98954dc96dd0bc00b8bb68181cbf2b6/src/vs/workbench/browser/layout.ts#L1750
The last parameter is a `true` to indicate that focus should move to that pane.
However, at the point where we want to focus the pane, it is not yet visible because the grid only updates a few lines below:
https://github.com/microsoft/vscode/blob/4520d915c98954dc96dd0bc00b8bb68181cbf2b6/src/vs/workbench/browser/layout.ts#L1756
Any view that implements `focus` by e.g. focussing the list will be a no-op because the DOM nodes are not yet visible. The SCM view is probably a bit async and that is why it works by chance.
I am actually not sure how to address this: people might have gotten used to the fact that these commands typically preserve focus for most of our views. I still think that the implementation is currently buggy around focussing the pane when the container becomes visible, so maybe a fix would need to be:
* handle focus properly after the container in the grid is visible
* make focus more explicit so that the toggling actions can explicitly pass in `focus: false` to preserve todays behaviour
//cc @sbatten | bug,ux,layout | low | Critical |
2,628,552,766 | pytorch | Maybe there is a precision issue with torch.quantize_per_channel? | ### ๐ Describe the bug
```python
import torch
x = torch.tensor([[757.5]])
y = torch.quantize_per_channel(x, torch.tensor([15.0]), torch.tensor([0]), 0, torch.qint8).int_repr()
print(y)  # 51
```
Expected: `torch.round(757.5 / 15)` = `torch.round(50.5)` = 50.
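Two stdlib observations make the 51 plausible (a hypothesis only; I have not checked this against the quantization kernel source). First, `torch.round` and Python's `round` use round-half-to-even, so an exact 50.5 rounds to 50. Second, if the kernel divides by the scale via a precomputed float32 reciprocal, the intermediate lands slightly above 50.5 and rounds up to 51:

```python
import struct

def f32(x):
    """Round a Python float to the nearest IEEE-754 float32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Round-half-to-even: the "expected" answer.
assert round(50.5) == 50

# Hypothetical kernel path: multiply by a float32 reciprocal of the scale.
inv_scale = f32(1.0 / 15.0)            # 1/15 is not exactly representable
product = f32(f32(757.5) * inv_scale)  # float32 multiply
print(product)         # 50.500003814697266 -> no longer an exact half
print(round(product))  # 51
```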
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (conda-forge gcc 14.1.0-1) 14.1.0
Clang version: 12.0.1 (git@code.streamcomputing.com:toolchain/llvm-12.git b30fdc04ec9219d7987ffd2eaff36b95054ab356)
CMake version: version 3.29.4
Libc version: glibc-2.31
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.4.0-198-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7713 64-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 1496.212
CPU max MHz: 2000.0000
CPU min MHz: 1500.0000
BogoMIPS: 3992.66
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 256 MiB
NUMA node0 CPU(s): 0-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.5
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] onnx==1.15.0
[pip3] onnxruntime==1.16.3
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] numpy 1.23.5 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim | oncall: quantization | low | Critical |
2,628,568,212 | next.js | Turbopack: failed to load chunk error | ### Link to the code that reproduces this issue
https://github.com/moonlitgrace/next-turbopack-issue-repro
### To Reproduce
1. run dev server with `--turbopack`
2. open console (make sure you've checked 'Persists logs')
3. refresh browser multiple times
### Current vs. Expected behavior
When refreshing many times, the error boundary shows and hides; checking the browser console gives this error:
```console
13:48:32.648 Uncaught (in promise) Error: Failed to load chunk static/chunks/[turbopack]_browser_dev_hmr-client_d6d8d4._.js from module [turbopack]/browser/dev/hmr-client/hmr-client.ts [app-client] (ecmascript, async loader)
NextJS 51
undefined:472:15
```
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Tue, 22 Oct 2024 18:31:38 +0000
Available memory (MB): 7849
Available CPU cores: 4
Binaries:
Node: 23.1.0
npm: 10.9.0
Yarn: N/A
pnpm: 9.12.3
Relevant Packages:
next: 15.0.2 // Latest available version is detected (15.0.2).
eslint-config-next: 15.0.2
react: 19.0.0-rc-02c0e824-20241028
react-dom: 19.0.0-rc-02c0e824-20241028
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Lazy Loading, Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
Running the dev server without the `--turbopack` flag works fine. | bug,Lazy Loading,Turbopack | low | Critical |
2,628,571,982 | stable-diffusion-webui | [Bug]: Error training embedding | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
```
Calculating sha256 for D:\stable-diffusion-webui\embeddings\n0n1pp1e5.pt: 5ab059d44f700da25700191f6762d483468c57739982625e860a7546d2c83663
Training at rate of 0.005 until step 100000
Preparing dataset...
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 209/209 [00:08<00:00, 23.73it/s]
0%| | 0/100000 [00:00<?, ?it/s]*** Error training embedding
Traceback (most recent call last):
File "D:\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 551, in train_embedding
loss = shared.sd_model.forward(x, cond)[0] / gradient_step
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 846, in forward
return self.p_losses(x, c, t, *args, **kwargs)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 886, in p_losses
model_output = self.apply_model(x_noisy, t, cond)
File "D:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 22, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "D:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 34, in __call__
return self.__sub_func(self.__orig_func, *args, **kwargs)
File "D:\stable-diffusion-webui\modules\sd_hijack_unet.py", line 50, in apply_model
result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
File "D:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 22, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "D:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 36, in __call__
return self.__orig_func(*args, **kwargs)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
return original_forward(self, x, timesteps, context, *args, **kwargs)
File "D:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 789, in forward
emb = self.time_embed(t_emb)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\container.py", line 215, in forward
input = module(input)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "D:\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 582, in network_Linear_forward
network_apply_weights(self)
File "D:\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 454, in network_apply_weights
network_restore_weights_from_backup(self)
File "D:\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 403, in network_restore_weights_from_backup
restore_weights_backup(self, 'weight', weights_backup)
File "D:\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 388, in restore_weights_backup
getattr(obj, field).copy_(weight)
RuntimeError: a leaf Variable that requires grad is being used in an in-place operation.
---
Applying attention optimization: xformers... done.
```
### Steps to reproduce the problem


### What should have happened?
.
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
[sysinfo-2024-11-01-08-05.json](https://github.com/user-attachments/files/17597521/sysinfo-2024-11-01-08-05.json)
### Console logs
```Shell
.
```
### Additional information
[n0n1pp1e5.zip](https://github.com/user-attachments/files/17597574/n0n1pp1e5.zip)
[n0n1pp1e5.z01.zip](https://github.com/user-attachments/files/17597579/n0n1pp1e5.z01.zip) <== rename as `n0n1pp1e5.z01`
| bug-report | low | Critical |
2,628,586,667 | neovim | LSP: `:tag` behaving differently from `C-]`, `C-w ]`, `C-w }` etc with `vim.lsp.tagfunc` | ### Problem
I think this is a problem with `vim.lsp.tagfunc`, so I put it in this category.
## Problem
Consider this minimal typescript file:
```ts
interface Iface {
}
const iface = 1;
const thing = null;
const Thing = 2;
const Other = 3;
function _() {
const other = null;
}
function main() {
const x = Thing;
const y: Iface = {};
const z = Other;
}
const x = Thing;
const y: Iface = {};
const z = Other;
```
The problem is basically that if you do `:tag` with one of the uppercase symbols, it goes to the lowercase one no matter what. I have put multiple cases above to show that it happens in all kinds of situations, regardless of order of appearance, kind of symbol, etc. The most peculiar is `:tag Other`, for which it goes to an out-of-scope variable `other` inside that other function!
This problem does not happen for these actions: `:lua vim.lsp.buf.definition()`, `C-]`, `C-w }` and `C-w ]`. So I don't think it's a language server problem. Weirdly, `:h C-]` (or any of the others) says it calls `:tag` internally.
I haven't changed any of the `*case` options at all, but here are their defaults anyway: `noignorecase nosmartcase tagcase=followic`. Just in case, I checked that this happens with `tagcase=match` as well.
As an aside, another issue I noticed (though I'm not sure it warrants a separate report) is that `vim.lsp.tagfunc` doesn't seem to support `i_CTRL-X_CTRL-]`
## Reproduction
- Check out the latest stable (8b98642002d0506d20628683958cb5c97a0dad80) or the latest master (b34e137e43d359c8db4fb76028dea3b410842aff) (it happens on both)
- It also happens on 80e37aa533573ef1ad96bcccc006b8d45dc963b9 fwiw
- Save the above typescript file as `one.ts` and the below lua file as `repro.lua` in the neovim directory itself
- `make distclean && make clean && make -j$(nproc) CMAKE_BUILD_TYPE=Release`
- Run `VIMRUNTIME=runtime build/bin/nvim -u repro.lua one.ts`
- Try doing `:tag Other` (or any of the other uppercase ones). It will be wrong, try doing `<C-]>` with cursor on them and see that it is correct.
Below is the repro.lua for minimal reproduction.
```lua
local pattern = 'typescript'
local cmd = {'typescript-language-server', '--stdio'}
local root_markers = { 'package.json' }
local settings = vim.empty_dict()
vim.api.nvim_create_autocmd('FileType', {
pattern = pattern,
callback = function(args)
local match = vim.fs.find(root_markers, { path = args.file, upward = true })[1]
local root_dir = match and vim.fn.fnamemodify(match, ':p:h') or nil
vim.lsp.start({
name = 'bugged-ls',
cmd = cmd,
root_dir = root_dir,
settings = settings
})
end
})
-- remove effects of any user plugins in standard directories
vim.opt.runtimepath = '/etc/xdg/nvim,/usr/local/share/nvim/site,/usr/share/nvim/site,runtime,/usr/local/lib/nvim,/usr/share/nvim/site/after,/usr/local/share/nvim/site/after,/etc/xdg/nvim/after'
```
### Steps to reproduce using "nvim -u minimal_init.lua"
all given above
### Expected behavior
all given above
### Nvim version (nvim -v)
master or stable
### Language server name/version
typescript-language-server 4.3.3
### Operating system/version
Linux 6.11.3-arch1-1
### Log file
_No response_ | bug,lsp | low | Critical |
2,628,601,123 | electron | desktopCapturer.getSources returns empty thumbnail in some window | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.0.2
### What operating system(s) are you using?
macOS
### Operating System Version
macOS
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
26.6.10
### Expected Behavior
every window returns its own thumbnail
### Actual Behavior
some windows return their own thumbnail; others only return the base64 prefix `data:image/png;base64,`
### Testcase Gist URL
https://gist.github.com/8eb6ea5620a0b2a271ff456f8933ac12
### Additional Information
_No response_ | platform/macOS,bug :beetle:,has-repro-gist,33-x-y,34-x-y | low | Critical |
2,628,692,182 | vscode | Layout Controls: move them to respective corners | An idea from @sbatten to split up our layout controls into respective controls per corner:

I.e. have them appear at the corners where the view or panel is.
Probably drop the layout picker button. | feature-request,layout,workbench-auxsidebar | low | Major |
2,628,692,644 | vscode | Auto-Fill of Highlighted Text in Go-to-File (CTRL + P) Search | <!-- โ ๏ธโ ๏ธ Do Not Delete This! feature_request_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Currently, when text is highlighted and CTRL + F is pressed, the search menu opens with the highlighted text automatically filled in as the search term. I would like this functionality extended to the Go to File menu, so that when I highlight text in a file and press CTRL + P, it automatically searches for files with the selected text as the filename. | help wanted,feature-request,search | low | Major |
2,628,718,195 | ui | [bug]: Charts not working in next-15 | ### Describe the bug
I've tried to use the shadcn-provided charts in the same way as in version 14, but it doesn't work. The chart is not visible in the browser; the same code works in next-14 but not in next-15. The console says:
Hydration failed because the server rendered HTML didn't match the client. As a result this tree will be regenerated on the client. This can happen if a SSR-ed Client Component used
Here is an example of what the chart renders like:

### Affected component/components
component/chart
### How to reproduce
1. go to sandbox with the following url
2. run `npm run dev` or start dev server
3. then check the dev environment in the browser
### Codesandbox/StackBlitz link
https://codesandbox.io/p/devbox/ntv547
### Logs
```bash
Hydration failed because the server rendered HTML didn't match the client. As a result this tree will be regenerated on the client. This can happen if a SSR-ed Client Component used
See more info here: https://nextjs.org/docs/messages/react-hydration-error
- A server/client branch `if (typeof window !== 'undefined')`.
- Variable input such as `Date.now()` or `Math.random()` which changes each time it's called.
- Date formatting in a user's locale which doesn't match the server.
- External changing data without sending a snapshot of it along with the HTML.
- Invalid HTML tag nesting.
It can also happen if the client has a browser extension installed which messes with the HTML before React loaded.
```
### System Info
```bash
All browsers produce this error
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,628,766,059 | vscode | Drag and drop from operating system doesn't work (Ubuntu 24) | <!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- ๐ฎ Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- ๐ Search existing issues to avoid creating duplicates. -->
<!-- ๐งช Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- ๐ก Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- ๐ง Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.95.1
- OS Version: Ubuntu 24.04
Steps to Reproduce:
I am using stock VS Code, freshly installed from the Debian package on the official website. When I drag and drop a file from my file manager (I'm using Nemo), it doesn't open in VS Code. If a file is open already, it pastes the new file in as a link.
https://github.com/user-attachments/assets/32fb4b09-3681-4056-b9e4-4c5c0f63637a
As you can see in this video, the behavior is actually a bit random - the functionality works as expected maybe 10% of the time.
I couldn't reproduce it in the video, but I've noticed if I drag a file into the tab area above the open editor panes in VS Code, often the file will open but with a bad filename, with a bunch of numbers after it:

| electron,workbench-dnd | low | Critical |
2,628,774,104 | neovim | Build failed on Windows | ### Problem
The newest Makefile gives this:
E:\neovim_sources>make -v
GNU Make 4.4.1
Built for x86_64-pc-msys
Copyright (C) 1988-2023 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
E:\neovim_sources>cmake --version
cmake version 3.30.5
CMake suite maintained and supported by Kitware (kitware.com/cmake).
E:\neovim_sources>make distclean
if (Test-Path ".deps") { remove-item -recurse ".deps" }
if (Test-Path build) { remove-item -recurse build }
make clean
make[1]: Entering directory '/e/neovim_sources'
/usr/bin/make -C test/old/testdir clean
make[2]: Entering directory '/e/neovim_sources/test/old/testdir'
rm -f -rf *.out *.failed *.res *.rej *.orig *.tlog opt_test.vim test_result.log test.log messages starttime test.out X* viminfo test.ok valgrind.* .*.swp .*.swo .gdbinit /e/neovim_sources/test/old/testdir/X-test-tmpdir del ../../../runtime/doc/.*.swp
make[2]: Leaving directory '/e/neovim_sources/test/old/testdir'
/usr/bin/make -C runtime/indent clean
make[2]: Entering directory '/e/neovim_sources/runtime/indent'
rm -f testdir/*.fail testdir/*.out
make[2]: Leaving directory '/e/neovim_sources/runtime/indent'
make[1]: Leaving directory '/e/neovim_sources'
E:\neovim_sources>make CMAKE_BUILD_TYPE=Release CMAKE_INSTALL_PREFIX=D:/nvim install
At line:1 char:3
+ if [ -f build/.ran-cmake ]; then \
+ ~
Missing '(' after 'if' in if statement.
At line:1 char:5
+ if [ -f build/.ran-cmake ]; then \
+ ~
Missing type name after '['.
At line:2 char:57
+ cached_prefix=At line:1 char:21 + cmake -L -N build | 2>/dev/null g ...
+ ~
Expressions are only allowed as the first element of a pipeline.
At line:2 char:69
+ ... efix=At line:1 char:21 + cmake -L -N build | 2>/dev/null grep 'CMAKE_ ...
+ ~~~~
Unexpected token 'grep' in expression or statement.
At line:2 char:240
+ ... ment of a pipeline. At line:1 char:33 + cmake -L -N build | 2>/dev/nu ...
+ ~
Expressions are only allowed as the first element of a pipeline.
At line:2 char:252
+ ... ine. At line:1 char:33 + cmake -L -N build | 2>/dev/null grep 'CMAKE_ ...
+ ~~~~
Unexpected token 'grep' in expression or statement.
At line:3 char:5
+ if ! [ "D:/nvim" = "$cached_prefix" ]; then \
+ ~
Missing '(' after 'if' in if statement.
At line:3 char:9
+ if ! [ "D:/nvim" = "$cached_prefix" ]; then \
+ ~
Missing type name after '['.
At line:3 char:7
+ if ! [ "D:/nvim" = "$cached_prefix" ]; then \
+ ~
Missing expression after unary operator '!'.
At line:3 char:8
+ if ! [ "D:/nvim" = "$cached_prefix" ]; then \
+ ~
Unexpected token '[' in expression or statement.
+ CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : MissingOpenParenthesisInIfStatement
make: *** [Makefile:50: checkprefix] Error 1
### Steps to reproduce
make CMAKE_BUILD_TYPE=Release CMAKE_INSTALL_PREFIX=D:/nvim install
### Expected behavior
No error message, and make succeeds.
### Nvim version (nvim -v)
nvim v0.11.0-dev-649+ge48179f31-dirty
### Vim (not Nvim) behaves the same?
no
### Operating system/version
windows 10 22H2 (19045. 5073)
### Terminal name/version
cmd/powershell 7
### $TERM environment variable
-
### Installation
- | bug,build,platform:windows | low | Critical |
2,628,816,962 | neovim | snippet can't stop at $0 section | ### Problem
After accepting an LSP snippet like `namespace` in C++, pressing `<Tab>` jumps to the `$0` section, where there is a newline. Pressing `<Tab>` again jumps back to the `$1` section instead of inserting an indent.
### Steps to reproduce
`nvim --clean -u test.lua test.cc`
```lua
vim.g.loaded_matchparen = 1
vim.api.nvim_create_autocmd("FileType", {
pattern = 'cpp',
callback = function()
local id = vim.lsp.start({
cmd = { "clangd" },
root_dir = vim.uv.cwd(),
})
vim.lsp.completion.enable(true, id, 0, { autotrigger = false })
vim.keymap.set("i", "<C-j>", function()
vim.lsp.completion.trigger()
end)
end,
})
```
1. type `namespace`, then press `<C-j>`
2. select the namespace snippet and accept it with `<C-y>`
3. type something for the namespace name, then press `<Tab> <Tab>`; the cursor jumps back to the `$1` name section
Commenting out the line `vim.g.loaded_matchparen = 1` makes it work fine.

### Expected behavior
`<Tab>` should insert an indent instead of jumping to the next snippet section.
### Nvim version (nvim -v)
v0.11.0-dev-1075+gb34e137e4
### Vim (not Nvim) behaves the same?
no
### Operating system/version
macos
### Terminal name/version
alacritty
### $TERM environment variable
alacritty
### Installation
build from source | bug,snippet | low | Minor |
2,628,825,358 | node | [v18] `source` value is ignored from the loader's `load` function with `format: 'commonjs'` | > [!NOTE]
> This issue is limited to v18. Works fine on v20.
### Version
Confirmed: v18.19.0, v18.20.3
### Platform
```text
Darwin pro-m3-2023-36gb.local 24.0.0 Darwin Kernel Version 24.0.0: Tue Sep 24 23:37:25 PDT 2024; root:xnu-11215.1.12~1/RELEASE_ARM64_T6030 arm64
```
### Subsystem
_No response_
### What steps will reproduce the bug?
Repro: https://github.com/devjiwonchoi/repro-nodejs-loader-cjs-source
The Node.js loader hook function `load` can return properties: `format` and `source`. When running on Node.js v18 with `format` set to `commonjs`, the `source` property is ignored and does not affect the loaded module.
```js
// index.js
console.log('hello from index.js');
// loader.mjs
export async function load() {
return {
format: 'commonjs',
shortCircuit: true,
source: `console.log("hello from loader.mjs");`,
}
}
// optional: register.js
const { register } = require('node:module')
const { pathToFileURL } = require('node:url')
register('./loader.mjs', pathToFileURL(__filename))
```
#### Run with `--import ./register.js`
```
node --import ./register.js ./index.js
```
#### Run with `--loader ./loader.mjs`
```
node --loader ./loader.mjs ./index.js
```
### How often does it reproduce? Is there a required condition?
This issue is present on Node.js v18 and works fine on v20. (v19 doesn't support the `module.register` API.)
### What is the expected behavior? Why is that the expected behavior?
The stdout must be:
```
{ format: 'module' }
hello from loader.mjs
```
### What do you see instead?
The source is from the `index.js`, not modified from the loader.
```
{ format: 'commonjs' }
hello from index.js
```
### Additional information
_No response_ | loaders,v18.x | low | Critical |
2,628,834,941 | opencv | Drivers for USB3 Vision and GigE Vision | ### Describe the feature and motivation
OpenCV needs generic standards-compliant drivers for "GigE Vision" and "USB3 Vision" compliant cameras.
GigE Vision and USB3 Vision are industry standards. OpenCV does not appear to have drivers for these, which excludes the use of a lot of very capable industrial cameras.
### Additional context
Occasionally people would like to use their expensive industrial cameras with OpenCV. They always run into the problem that these cameras aren't just simple USB Video Class (UVC) devices, so they aren't exposed to system media APIs (V4L2, DSHOW, MSMF, ...), and OpenCV doesn't know how to talk to them.
OpenCV has, or has had, drivers for "Ximea" and "genicam/GenTL". I think it'd be a good idea to investigate whether those `videoio` backends could be extended, or whether they're just binary libraries or license-encumbered. | feature,category: videoio(camera) | low | Major |
2,628,888,959 | pytorch | serialization of PT2E model impacts torch.fx.passes.utils.source_matcher_utils.get_source_partitions | ### ๐ Describe the bug
**My expectation is that get_source_partitions should behave the same regardless of whether the PT2E model is saved/loaded, and I have some other questions inline, thanks.**
### **case 1:**
call get_source_partitions on the exported model without save/load.
test code:
```
import torch
from torch.fx.passes.utils.source_matcher_utils import (
SourcePartition,
get_source_partitions,
)
class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.linear1 = torch.nn.Linear(3, 3)
self.relu = torch.nn.ReLU()
self.linear2 = torch.nn.Linear(3, 5)
def forward(self, x):
x = self.linear1(x)
x = self.linear1(x)
x = self.relu(x)
x = self.linear2(x)
return x
inputs = (torch.randn(3, 3),)
model = M()
exported_prog = torch.export.export(model, inputs)
#torch.export.save(exported_prog, "pt2e.pt2")
#saved_ep=torch.export.load("pt2e.pt2")
saved_ep=exported_prog
print("-------------------------------")
saved_ep.graph_module.print_readable()
print("-------------------------------")
print(saved_ep.graph_module.graph)
module_partitions = get_source_partitions(saved_ep.graph_module.graph, [torch.nn.Linear, torch.nn.ReLU])
print("==============================")
for module_or_fn_type, partitions in module_partitions.items():
print("type")
print(module_or_fn_type)
for p in partitions:
print("partition")
print(p)
for node in p.params:
print("node")
print(node)
```
test result:
```
-------------------------------
class GraphModule(torch.nn.Module):
def forward(self, p_linear1_weight: "f32[3, 3]", p_linear1_bias: "f32[3]", p_linear2_weight: "f32[5, 3]", p_linear2_bias: "f32[5]", x: "f32[3, 3]"):
# File: /home/yguo18/tmp/tmp/yjguo.testkit/fp8/source_partitions.py:15 in forward, code: x = self.linear1(x)
linear: "f32[3, 3]" = torch.ops.aten.linear.default(x, p_linear1_weight, p_linear1_bias); x = None
# File: /home/yguo18/tmp/tmp/yjguo.testkit/fp8/source_partitions.py:16 in forward, code: x = self.linear1(x)
linear_1: "f32[3, 3]" = torch.ops.aten.linear.default(linear, p_linear1_weight, p_linear1_bias); linear = p_linear1_weight = p_linear1_bias = None
# File: /home/yguo18/tmp/tmp/yjguo.testkit/fp8/source_partitions.py:17 in forward, code: x = self.relu(x)
relu: "f32[3, 3]" = torch.ops.aten.relu.default(linear_1); linear_1 = None
# File: /home/yguo18/tmp/tmp/yjguo.testkit/fp8/source_partitions.py:18 in forward, code: x = self.linear2(x)
linear_2: "f32[3, 5]" = torch.ops.aten.linear.default(relu, p_linear2_weight, p_linear2_bias); relu = p_linear2_weight = p_linear2_bias = None
return (linear_2,)
-------------------------------
graph():
%p_linear1_weight : [num_users=2] = placeholder[target=p_linear1_weight]
%p_linear1_bias : [num_users=2] = placeholder[target=p_linear1_bias]
%p_linear2_weight : [num_users=1] = placeholder[target=p_linear2_weight]
%p_linear2_bias : [num_users=1] = placeholder[target=p_linear2_bias]
%x : [num_users=1] = placeholder[target=x]
%linear : [num_users=1] = call_function[target=torch.ops.aten.linear.default](args = (%x, %p_linear1_weight, %p_linear1_bias), kwargs = {})
%linear_1 : [num_users=1] = call_function[target=torch.ops.aten.linear.default](args = (%linear, %p_linear1_weight, %p_linear1_bias), kwargs = {})
%relu : [num_users=1] = call_function[target=torch.ops.aten.relu.default](args = (%linear_1,), kwargs = {})
%linear_2 : [num_users=1] = call_function[target=torch.ops.aten.linear.default](args = (%relu, %p_linear2_weight, %p_linear2_bias), kwargs = {})
return (linear_2,)
==============================
type
<class 'torch.nn.modules.linear.Linear'>
partition
SourcePartition(nodes=[p_linear1_weight, p_linear1_bias, linear_1], source=<class 'torch.nn.modules.linear.Linear'>, input_nodes=[linear], output_nodes=[p_linear1_weight, linear_1, p_linear1_bias], params=[])
partition
SourcePartition(nodes=[p_linear2_weight, p_linear2_bias, linear_2], source=<class 'torch.nn.modules.linear.Linear'>, input_nodes=[relu], output_nodes=[linear_2], params=[])
partition
SourcePartition(nodes=[linear], source=<class 'torch.nn.modules.linear.Linear'>, input_nodes=[x, p_linear1_bias, p_linear1_weight], output_nodes=[linear], params=[])
type
<class 'torch.nn.modules.activation.ReLU'>
partition
SourcePartition(nodes=[relu], source=<class 'torch.nn.modules.activation.ReLU'>, input_nodes=[linear_1], output_nodes=[relu], params=[])
```
**Q1)** a tiny concern (https://github.com/pytorch/pytorch/pull/98628/files#r1825656920) about the output_nodes.
**Q2)** For "SourcePartition(nodes=[linear], source=<class 'torch.nn.modules.linear.Linear'>, input_nodes=[x, p_linear1_bias, p_linear1_weight], output_nodes=[linear], params=[])", why are p_linear1_bias and p_linear1_weight not in nodes, but in input_nodes? This does not align with linear1 and linear2.
### **case 2:**
save the PT2E model to disk, load it, and then call get_source_partitions.
test code:
```
import torch
from torch.fx.passes.utils.source_matcher_utils import (
SourcePartition,
get_source_partitions,
)
class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.linear1 = torch.nn.Linear(3, 3)
self.relu = torch.nn.ReLU()
self.linear2 = torch.nn.Linear(3, 5)
def forward(self, x):
x = self.linear1(x)
x = self.linear1(x)
x = self.relu(x)
x = self.linear2(x)
return x
inputs = (torch.randn(3, 3),)
model = M()
exported_prog = torch.export.export(model, inputs)
torch.export.save(exported_prog, "pt2e.pt2")
saved_ep=torch.export.load("pt2e.pt2")
#saved_ep=exported_prog
print("-------------------------------")
saved_ep.graph_module.print_readable()
print("-------------------------------")
print(saved_ep.graph_module.graph)
module_partitions = get_source_partitions(saved_ep.graph_module.graph, [torch.nn.Linear, torch.nn.ReLU])
print("==============================")
for module_or_fn_type, partitions in module_partitions.items():
print("type")
print(module_or_fn_type)
for p in partitions:
print("partition")
print(p)
for node in p.params:
print("node")
print(node)
```
run result:
```
-------------------------------
class GraphModule(torch.nn.Module):
def forward(self, p_linear1_weight: "f32[3, 3]", p_linear1_bias: "f32[3]", p_linear2_weight: "f32[5, 3]", p_linear2_bias: "f32[5]", x: "f32[3, 3]"):
# File: /home/yguo18/tmp/tmp/yjguo.testkit/fp8/source_partitions.py:15 in forward, code: x = self.linear1(x)
linear: "f32[3, 3]" = torch.ops.aten.linear.default(x, p_linear1_weight, bias = p_linear1_bias); x = None
# File: /home/yguo18/tmp/tmp/yjguo.testkit/fp8/source_partitions.py:16 in forward, code: x = self.linear1(x)
linear_1: "f32[3, 3]" = torch.ops.aten.linear.default(linear, p_linear1_weight, bias = p_linear1_bias); linear = p_linear1_weight = p_linear1_bias = None
# File: /home/yguo18/tmp/tmp/yjguo.testkit/fp8/source_partitions.py:17 in forward, code: x = self.relu(x)
relu: "f32[3, 3]" = torch.ops.aten.relu.default(linear_1); linear_1 = None
# File: /home/yguo18/tmp/tmp/yjguo.testkit/fp8/source_partitions.py:18 in forward, code: x = self.linear2(x)
linear_2: "f32[3, 5]" = torch.ops.aten.linear.default(relu, p_linear2_weight, bias = p_linear2_bias); relu = p_linear2_weight = p_linear2_bias = None
return (linear_2,)
-------------------------------
graph():
%p_linear1_weight : [num_users=2] = placeholder[target=p_linear1_weight]
%p_linear1_bias : [num_users=2] = placeholder[target=p_linear1_bias]
%p_linear2_weight : [num_users=1] = placeholder[target=p_linear2_weight]
%p_linear2_bias : [num_users=1] = placeholder[target=p_linear2_bias]
%x : [num_users=1] = placeholder[target=x]
%linear : [num_users=1] = call_function[target=torch.ops.aten.linear.default](args = (%x, %p_linear1_weight), kwargs = {bias: %p_linear1_bias})
%linear_1 : [num_users=1] = call_function[target=torch.ops.aten.linear.default](args = (%linear, %p_linear1_weight), kwargs = {bias: %p_linear1_bias})
%relu : [num_users=1] = call_function[target=torch.ops.aten.relu.default](args = (%linear_1,), kwargs = {})
%linear_2 : [num_users=1] = call_function[target=torch.ops.aten.linear.default](args = (%relu, %p_linear2_weight), kwargs = {bias: %p_linear2_bias})
return (linear_2,)
==============================
type
<class 'torch.nn.modules.linear.Linear'>
partition
SourcePartition(nodes=[linear], source=<class 'torch.nn.modules.linear.Linear'>, input_nodes=[p_linear1_weight, x], output_nodes=[linear], params=[])
partition
SourcePartition(nodes=[linear_1], source=<class 'torch.nn.modules.linear.Linear'>, input_nodes=[p_linear1_weight, linear], output_nodes=[linear_1], params=[])
partition
SourcePartition(nodes=[linear_2], source=<class 'torch.nn.modules.linear.Linear'>, input_nodes=[relu, p_linear2_weight], output_nodes=[linear_2], params=[])
type
<class 'torch.nn.modules.activation.ReLU'>
partition
SourcePartition(nodes=[relu], source=<class 'torch.nn.modules.activation.ReLU'>, input_nodes=[linear_1], output_nodes=[relu], params=[])
```
**Q3)** why is the source partition result different from case 1?
**Q4)** why is p_linear*_bias not there?
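A plausible explanation for Q3 and Q4 (this is an assumption on my part, not something confirmed above): the serialization round trip re-emits `bias` as a keyword argument (visible in the case-2 graph, `kwargs = {bias: %p_linear1_bias}`, versus the purely positional call in case 1), so any partition-matching logic that only walks a node's positional `args` no longer sees the bias placeholder. The sketch below simulates that effect with stand-in node objects; `FakeNode` and `inputs_seen` are hypothetical illustrations, not torch.fx APIs:

```python
# Minimal stand-in for an FX node: just a name, positional args, and kwargs.
class FakeNode:
    def __init__(self, name, args=(), kwargs=None):
        self.name = name
        self.args = args
        self.kwargs = kwargs or {}

def inputs_seen(node, positional_only=True):
    """Collect the inputs a naive matcher would see for this node."""
    seen = list(node.args)
    if not positional_only:
        seen += list(node.kwargs.values())
    return seen

# Case 1 (no save/load): bias is passed positionally.
before = FakeNode("linear", args=("x", "w", "b"))
# Case 2 (after save/load): bias has moved into kwargs.
after = FakeNode("linear", args=("x", "w"), kwargs={"bias": "b"})

print(inputs_seen(before))                        # ['x', 'w', 'b']
print(inputs_seen(after))                         # ['x', 'w']  <- bias dropped
print(inputs_seen(after, positional_only=False))  # ['x', 'w', 'b']
```

If this is indeed the cause, normalizing kwargs back into positional args (or matching on both `args` and `kwargs`) before calling get_source_partitions should make the two cases agree.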
### Versions
pip list | grep torch
torch 2.5.1
thanks
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @penguinwu | export-triage-review,oncall: export | low | Critical |
2,628,893,063 | deno | Slow types warning for a non-public API when publishing | Version: Deno 2.0.3
I am using [zod](https://www.npmjs.com/package/zod) for validations. Zod relies heavily on TypeScript's type inference, which is why I can do something like this:
```ts
import { z } from "zod";
export const roles = ["admin", "manager", "user"] as const;
export const Role = z.enum(roles);
export type Role = z.infer<typeof Role>;
```
Now, deno gets upset when I try to publish the above to jsr because of slow-types.
```
|
5 | export const Role = z.enum(roles);
| ^^^^ this symbol is missing an explicit type
|
= hint: add an explicit type annotation to the symbol
```
To fix this issue, I replaced all references to the Zod instance from public API with my custom types that don't rely on type inference.
```diff
import { z } from "zod";
export const roles = ["admin", "manager", "user"] as const;
- export const Role = z.enum(roles);
+ const Role = z.enum(roles);
- export type Role = z.infer<typeof Role>;
+ export type Role = (typeof roles)[number];
+ export function isValidRole(role: unknown): [true] | [false, string[]] {
+ const res = Role.safeParse(role);
+ if (res.success) {
+ return [true];
+ }
+ return [false, res.error.errors.map((e) => e.message)];
+ }
```
However, when I try to publish the above code, I still see the following error:
```
|
5 | const Role = z.enum(roles);
| ^^^^ this symbol is missing an explicit type
|
= hint: add an explicit type annotation to the symbol
```
I suspected that this could be happening because of the name collision between the `Role` object and the `Role` type. So, I renamed the `Role` object to `ZodRole`, and now I am able to publish.
```diff
import { z } from "zod";
export const roles = ["admin", "manager", "user"] as const;
- const Role = z.enum(roles);
+ const ZodRole = z.enum(roles);
export type Role = (typeof roles)[number];
export function isValidRole(role: unknown): [true] | [false, string[]] {
- const res = Role.safeParse(role);
+ const res = ZodRole.safeParse(role);
if (res.success) {
return [true];
}
return [false, res.error.errors.map((e) => e.message)];
}
```
Would it be possible for Deno to differentiate between a type name and an object name while publishing, to avoid the misleading public API slow-types warning? | needs investigation,publish | low | Critical |
2,628,895,484 | vscode | word wrap is different for left and right sides |
Type: <b>Bug</b>
1. Create a file:
```
left
ignoreCase:
description: |-
IgnoreCase specifies that string matching should be case insensitive.
```
2. Create a file:
```
right
ignoreCase:
description: |-
IgnoreCase specifies that string matching should be case-insensitive.
```
3. Select the left tab and then shift click to select the right tab
4. Right click a tab
5. Choose `Compare Selected`
6. Turn on word-wrap
### Actual Results

(Note that the picture has the files detected as Markdown, but changing them to YAML results in the same behavior as long as word-wrap is turned back on...)
### Expected Results
The wrap column should match for both sides (offhand, I prefer the right side)
### ...
VS Code version: Code 1.94.2 (Universal) (384ff7382de624fb94dbaf6da11977bba1ecd427, 2024-10-09T16:08:44.566Z)
OS version: Darwin arm64 24.0.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M1 Max (10 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|5, 5, 5|
|Memory (System)|64.00GB (0.13GB free)|
|Process Argv|--crash-reporter-id 1fc67ee2-0174-4598-9f98-4537df0dd32c|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (19)</summary>
Extension|Author (truncated)|Version
---|---|---
quitcontrol-vscode|art|4.0.0
asciidoctor-vscode|asc|3.4.2
yamlfmt|blu|0.1.4
vscode-intelephense-client|bme|1.12.6
open-in-macdown|Cod|1.0.0
intelli-php-vscode|DEV|0.12.15062
dhall-lang|dha|0.0.4
vscode-dhall-lsp-server|dha|0.0.4
EditorConfig|Edi|0.16.4
html-preview-vscode|geo|0.2.5
vscode-github-actions|git|0.27.0
vscode-pull-request-github|Git|0.99.2024101604
go|gol|0.42.1
file-downloader|min|1.0.13
sarif-viewer|MS-|3.4.4
vscode-dhall-lsp-server|pan|0.0.4
vscode-xml|red|0.27.1
rst-vscode|tht|3.0.1
simple-rst|tro|1.5.4
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492:30256859
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
vscrp:30673768
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
accentitlementst:30995554
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
945dj816:31013170
dvdeprecation:31068756
dwnewjupytercf:31046870
newcmakeconfigv2:31071590
impr_priority:31102340
nativerepl2:31139839
refactort:31108082
pythonrstrctxt:31112756
wkspc-onlycs-t:31132770
wkspc-ranged-t:31151552
cf971741:31144450
autoexpandse:31146404
iacca1:31156133
notype1:31157159
5fd0e150:31155592
dwcopilotcf:31162479
icondisabled:31158250
```
</details>
<!-- generated by issue reporter --> | bug,diff-editor,editor-wrapping | low | Critical |
2,628,896,457 | deno | Error in deno while debugging code using vscode | **Describe the bug**
Error in deno while debugging code using vscode
**Steps to Reproduce**
The debug code looks like this:
```ts
class Test {
run() {
const resources: any[] = [1, 2, 3];
for (const adapter of resources) {
Promise.resolve().then(() => {
console.log(resources.at(adapter));
});
}
console.log("end");
}
}
new Test().run();
```
Set a breakpoint at line 5

launch.json is configured as follows:
```json
{
"version": "0.2.0",
"configurations": [
{
"name": "Run TS",
"request": "launch",
"type": "node",
"program": "${file}",
"cwd": "${workspaceFolder}",
"sourceMaps": true,
"runtimeArgs": ["run"],
"runtimeExecutable": "A:/repo/study/deno/deno.exe"
}
]
}
```
Now start debugging through the VS Code UI.
The debug console outputs the following:
```
A:/repo/study/deno/deno.exe run --inspect-brk=127.0.0.1:53687 --allow-all .\a.ts
Debugger listening on ws://127.0.0.1:53687/ws/c5dc8fd1-a48e-48ca-8514-7452e087c0f7
Visit chrome://inspect to connect to the debugger.
Deno is waiting for debugger to connect.
Debugger session started.
#
# Fatal error in , line 0
# Check failed: needs_context && current_scope_ == closure_scope_ && current_scope_->is_function_scope() && !function_.is_null() implies function_->context() != *context_.
#
#
#
#FailureMessage Object: 000000403A7F4660
==== C stack trace ===============================
CrashForExceptionInNonABICompliantCodeRange [0x00007FF770EDD34B+1316811]
onig_get_string_end_by_callout_args [0x00007FF770BCA827+10150415]
onig_get_string_end_by_callout_args [0x00007FF770C26597+10526591]
CrashForExceptionInNonABICompliantCodeRange [0x00007FF77146F983+7159299]
CrashForExceptionInNonABICompliantCodeRange [0x00007FF77146EB31+7155633]
CrashForExceptionInNonABICompliantCodeRange [0x00007FF77129934B+5232587]
CrashForExceptionInNonABICompliantCodeRange [0x00007FF770F5A570+1829360]
CrashForExceptionInNonABICompliantCodeRange [0x00007FF770F4E5C5+1780293]
CrashForExceptionInNonABICompliantCodeRange [0x00007FF770EE8999+1363481]
onig_get_string_end_by_callout_args [0x00007FF770BCD804+10162668]
CrashForExceptionInNonABICompliantCodeRange [0x00007FF770EE4536+1345974]
CrashForExceptionInNonABICompliantCodeRange [0x00007FF770EE4914+1346964]
onig_get_string_end_by_callout_args [0x00007FF770D8B93A+11989794]
onig_get_string_end_by_callout_args [0x00007FF770D8AB6B+11986259]
CrashForExceptionInNonABICompliantCodeRange [0x00007FF77117C5EA+4065898]
CrashForExceptionInNonABICompliantCodeRange [0x00007FF771FF7C7F+19251455]
CrashForExceptionInNonABICompliantCodeRange [0x00007FF7720DBD7E+20185598]
CrashForExceptionInNonABICompliantCodeRange [0x00007FF771F566DE+18590558]
CrashForExceptionInNonABICompliantCodeRange [0x00007FF771F566DE+18590558]
CrashForExceptionInNonABICompliantCodeRange [0x00007FF771F9A6CD+18869069]
CrashForExceptionInNonABICompliantCodeRange [0x00007FF771F5425C+18581212]
CrashForExceptionInNonABICompliantCodeRange [0x00007FF771F53DAF+18580015]
onig_get_string_end_by_callout_args [0x00007FF770D22578+11558752]
onig_get_string_end_by_callout_args [0x00007FF770D23155+11561789]
onig_get_string_end_by_callout_args [0x00007FF770D232B3+11562139]
onig_get_string_end_by_callout_args [0x00007FF770D2A8D1+11592377]
onig_get_string_end_by_callout_args [0x00007FF770D2A02A+11590162]
onig_get_string_end_by_callout_args [0x00007FF770D29A9F+11588743]
onig_get_string_end_by_callout_args [0x00007FF770D23CDC+11564740]
onig_get_string_end_by_callout_args [0x00007FF770BAB95D+10023749]
onig_get_string_end_by_callout_args [0x00007FF770B97A40+9942056]
onig_get_start_by_callout_args [0x00007FF76F16216D+3445385]
onig_get_capture_tree [0x00007FF76ED2D187+4610007]
onig_get_capture_tree [0x00007FF76ED2A4B2+4598530]
onig_get_capture_tree [0x00007FF76ED28CBE+4592398]
onig_get_capture_tree [0x00007FF76ECC6459+4188841]
onig_get_capture_tree [0x00007FF76ED3A3A2+4663794]
onig_get_regex_by_callout_args [0x00007FF76E700BFF+265199]
onig_get_capture_tree [0x00007FF76EE096E5+5512501]
onig_get_regex_by_callout_args [0x00007FF76E85A844+1681460]
onig_get_capture_tree [0x00007FF76ED5BDDB+4801579]
onig_get_regex_by_callout_args [0x00007FF76E6CE9C6+59830]
onig_get_capture_tree [0x00007FF76EE097C7+5512727]
onig_unicode_define_user_property [0x00007FF7722775DC+1088992]
BaseThreadInitThunk [0x00007FF92195257D+29]
RtlUserThreadStart [0x00007FF922B2AF08+40]
```
If I set the breakpoint to line 7 or line 10, deno will be able to stop at the breakpoint
**Expected behavior**
The program stops at the breakpoint on line five
**Environment**
- OS: Windows 11 23H2
- deno version: 2.0.4
| bug,upstream,debugger,needs investigation | low | Critical |
2,628,909,457 | godot | "Toggle Animation Skeleton Visibility" enabled/disabled icons appear to be inverted | ### Tested versions
Godot v4.4.dev (c6c464cf9)
### System information
Godot v4.4.dev (c6c464cf9) - Windows 10.0.22631 - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4070 Laptop GPU (NVIDIA; 31.0.15.4683) - AMD Ryzen 7 7840HS w/ Radeon 780M Graphics (16 threads)
### Issue description
"Toggle Animation Skeleton Visibility" enabled/disabled icons appear to be inverted. It is colored when disabled and grey when enabled. When you compare it with the light icons, they are grey when disabled and white when enabled.
https://github.com/user-attachments/assets/68b1921f-9093-42fc-b490-4c82e2afc3e6
### Steps to reproduce
Open the import view for a gltf file with a skeleton animation included.
### Minimal reproduction project (MRP)
[toggle_skeleton_button_issue.zip](https://github.com/user-attachments/files/17599341/toggle_skeleton_button_issue.zip)
| bug,topic:editor,topic:import,topic:3d | low | Minor |
2,628,939,094 | vscode | VS Code continuously crashes on Windows 11 for more than month | <!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- ๐ฎ Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- ๐ Search existing issues to avoid creating duplicates. -->
<!-- ๐งช Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- ๐ก Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- ๐ง Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
<!-- ๐ช If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- ๐ฃ Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.94.2 and above (previous versions are also affected, but not quite certain which exactly)
- OS Version: Windows 11 Enterprise v. 24H2, build 26100.2033
Steps to Reproduce:
1. Run VS Code, work with it
2. Put your machine to the sleep mode or hibernation, then wake it up
3. [Optionally] Try to search something across the folders:
- You will see ENOENT error when trying to spawn search process
4. Close VS Code and try to open it again
- Observe error message:

- Observe unstopped VS Code processes:

5. Computer restart doesn't help
6. The only workaround is to uninstall VS Code and reinstall it again
- Executing `unins000.exe` doesn't uninstall the VS Code folder completely after the incident occurs; you need to double-check for running processes in Task Manager, kill them, and remove the rest of the files manually
7. **Repro is not stable**: it might appear once or twice per week
   - Update of VS Code doesn't seem to be related
   - Update of OS doesn't seem to be related | under-discussion | medium | Critical |
2,628,952,573 | ui | [bug]: MenuBar inside ContextMenu close issue | ### Describe the bug
When I use a context menu inside the menu bar, closing the context menu also closes the menubar content; and when I open the context menu again, the menubar content closes again.
The interaction itself works, but when I click outside, only the context menu should close.
### Affected component/components
ContextMenu
### How to reproduce
When I use a context menu inside the menu bar, closing the context menu also closes the menubar content; and when I open the context menu again, the menubar content closes again.
```jsx
<Menubar>
  <MenubarMenu>
    <MenubarTrigger>Menu</MenubarTrigger>
    <MenubarContent>
      <ContextMenu>
        <ContextMenuTrigger asChild>
          <MenubarItem>Menu Item</MenubarItem>
        </ContextMenuTrigger>
        <ContextMenuContent>
        </ContextMenuContent>
      </ContextMenu>
    </MenubarContent>
  </MenubarMenu>
</Menubar>
```
The interaction itself works, but when I click outside, only the context menu should close.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
shadcn@2.1.3
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,628,965,772 | vscode | Workbench OOM on windows | Got a dump file from @sbatten with the following stack
```
Operating system: Windows NT
10.0.26100 2161
CPU: amd64
family 23 model 113 stepping 0
24 CPUs
GPU: UNKNOWN
Crash reason: Out of Memory
Crash address: 0x7ffe58f2831a
Process uptime: 47705 seconds
Thread 0 (crashed)
0 KERNELBASE.dll!RaiseException + 0x8a
rax = 0x00007ff61780183a rdx = 0x0000000000000066
rcx = 0x0000000000000001 rbx = 0x00000054511fbb80
rsi = 0x0000000000000001 rdi = 0x00000000e0000008
rbp = 0x0000000000000001 rsp = 0x00000054511fba60
r8 = 0x0000f9169104cd24 r9 = 0x000001300e1e5c00
r10 = 0x000001300699c0b8 r11 = 0x0000013014ab1d60
r12 = 0x0000013015801060 r13 = 0x0000013015804000
r14 = 0x0000000000000000 r15 = 0x00000000009ca000
rip = 0x00007ffe58f2831a
Found by: given as instruction pointer in context
1 Code - Insiders.exe!static void partition_alloc::internal::OnNoMemoryInternal(unsigned __int64) [oom.cc : 37 + 0x16]
rbx = 0x00007ff620b8dc38 rsi = 0x00000000009ca000
rdi = 0x00000054511fbc70 rbp = 0x0000013015801020
rsp = 0x00000054511fbb60 r12 = 0x0000013015801060
r13 = 0x0000013015804000 r14 = 0x0000000000000000
r15 = 0x00000000009ca000 rip = 0x00007ff6182be6db
Found by: call frame info
2 Code - Insiders.exe!partition_alloc::TerminateBecauseOutOfMemory(unsigned __int64) [oom.cc : 64 + 0x5]
rbx = 0x00007ff620b8dc38 rbp = 0x0000013015801020
rsp = 0x00000054511fbb90 r12 = 0x0000013015801060
r13 = 0x0000013015804000 r14 = 0x0000000000000000
r15 = 0x00000000009ca000 rip = 0x00007ff6182be6f9
Found by: call frame info
3 Code - Insiders.exe!partition_alloc::internal::OnNoMemory(unsigned __int64) [oom.cc : 74 + 0x8]
rbx = 0x00007ff620b8dc38 rbp = 0x0000013015801020
rsp = 0x00000054511fbbc0 r12 = 0x0000013015801060
r13 = 0x0000013015804000 r14 = 0x0000000000000000
r15 = 0x00000000009ca000 rip = 0x00007ff6182be715
Found by: call frame info
4 Code - Insiders.exe!static void WTF::PartitionsOutOfMemoryUsing16M(unsigned __int64) [partitions.cc : 342 + 0x8]
rbx = 0x00007ff620b8dc38 rbp = 0x0000013015801020
rsp = 0x00000054511fbbf0 r12 = 0x0000013015801060
r13 = 0x0000013015804000 r14 = 0x0000000000000000
r15 = 0x00000000009ca000 rip = 0x00007ff61ce28631
Found by: call frame info
5 Code - Insiders.exe!static void WTF::Partitions::HandleOutOfMemory(unsigned __int64) [partitions.cc : 452 + 0x5]
rbx = 0x00007ff620b8dc38 rbp = 0x0000013015801020
rsp = 0x00000054511fbc30 r12 = 0x0000013015801060
r13 = 0x0000013015804000 r14 = 0x0000000000000000
r15 = 0x00000000009ca000 rip = 0x00007ff61ce283c4
Found by: call frame info
6 Code - Insiders.exe!partition_alloc::PartitionRoot::OutOfMemory(unsigned __int64) [partition_root.cc : 903 + 0x9]
rbx = 0x00007ff620b8dc38 rbp = 0x0000013015801020
rsp = 0x00000054511fbce0 r12 = 0x0000013015801060
r13 = 0x0000013015804000 r14 = 0x0000000000000000
r15 = 0x00000000009ca000 rip = 0x00007ff6182b8f64
Found by: call frame info
7 Code - Insiders.exe!static void partition_alloc::internal::`anonymous namespace'::PartitionOutOfMemoryCommitFailure(struct partition_alloc::PartitionRoot *, unsigned __int64) [partition_bucket.cc : 55 + 0xb]
rbx = 0x00007ff620b8dc38 rbp = 0x0000013015801020
rsp = 0x00000054511fbd60 r12 = 0x0000013015801060
r13 = 0x0000013015804000 r14 = 0x0000000000000000
r15 = 0x00000000009ca000 rip = 0x00007ff6182be389
Found by: call frame info
8 Code - Insiders.exe!static struct partition_alloc::internal::SlotSpanMetadata * partition_alloc::internal::`anonymous namespace'::PartitionDirectMap(struct partition_alloc::PartitionRoot *, partition_alloc::internal::AllocFlags, unsigned __int64, unsigned __int64) [partition_bucket.cc : 411 + 0xb]
rbx = 0x00007ff620b8dc38 rbp = 0x0000013015801020
rsp = 0x00000054511fbda0 r12 = 0x0000013015801060
r13 = 0x0000013015804000 r14 = 0x0000000000000000
r15 = 0x00000000009ca000 rip = 0x00007ff6182bcf19
Found by: call frame info
9 Code - Insiders.exe!partition_alloc::internal::PartitionBucket::SlowPathAlloc(partition_alloc::PartitionRoot *,partition_alloc::internal::AllocFlags,unsigned __int64,unsigned __int64,partition_alloc::internal::SlotSpanMetadata * *,bool *) [partition_bucket.cc : 1343 + 0x10]
rbx = 0x00007ff620b8dc38 rbp = 0x0000013015801020
rsp = 0x00000054511fbe20 r12 = 0x0000013015801060
r13 = 0x0000013015804000 r14 = 0x0000000000000000
r15 = 0x00000000009ca000 rip = 0x00007ff6182bc5b3
Found by: call frame info
10 Code - Insiders.exe!partition_alloc::PartitionRoot::Alloc<0>(unsigned __int64,char const *) [partition_root.h : 515 + 0x2ee]
rbx = 0x00007ff620b8dc38 rbp = 0x0000013015801020
rsp = 0x00000054511fbf00 r12 = 0x0000013015801060
r13 = 0x0000013015804000 r14 = 0x0000000000000000
r15 = 0x00000000009ca000 rip = 0x00007ff6182b827b
Found by: call frame info
11 Code - Insiders.exe!WTF::StringImpl::CreateUninitialized(unsigned int,unsigned char * &) [string_impl.cc : 156 + 0xc]
rbx = 0x00007ff620b8dc38 rbp = 0x0000013015801020
rsp = 0x00000054511fbfc0 r12 = 0x0000013015801060
r13 = 0x0000013015804000 r14 = 0x0000000000000000
r15 = 0x00000000009ca000 rip = 0x00007ff61a98eaa5
Found by: call frame info
12 Code - Insiders.exe!static class WTF::String blink::ParkableStringImpl::UnparkInternal() [parkable_string.cc : 651 + 0x10]
rbx = 0x00007ff620b8dc38 rbp = 0x0000013015801020
rsp = 0x00000054511fc000 r12 = 0x0000013015801060
r13 = 0x0000013015804000 r14 = 0x0000000000000000
r15 = 0x00000000009ca000 rip = 0x00007ff61cf66f14
Found by: call frame info
13 Code - Insiders.exe!blink::ParkableString::ToString() [parkable_string.cc : 1037 + 0x4c]
rbx = 0x00007ff620b8dc38 rbp = 0x0000013015801020
rsp = 0x00000054511fc0d0 r12 = 0x0000013015801060
r13 = 0x0000013015804000 r14 = 0x0000000000000000
r15 = 0x00000000009ca000 rip = 0x00007ff61a9ac893
Found by: call frame info
14 Code - Insiders.exe!blink::ParkableStringResource8::data() [string_resource.h : 236 + 0x9]
rbx = 0x00007ff620b8dc38 rbp = 0x0000013015801020
rsp = 0x00000054511fc130 r12 = 0x0000013015801060
r13 = 0x0000013015804000 r14 = 0x0000000000000000
r15 = 0x00000000009ca000 rip = 0x00007ff61aad6aad
Found by: call frame info
15 Code - Insiders.exe!v8::internal::ScannerStream::For(v8::internal::Isolate *,v8::internal::Handle<v8::internal::String>,int,int) [scanner-character-streams.cc : 878 + 0x143]
rbx = 0x00007ff620b8dc38 rbp = 0x0000013015801020
rsp = 0x00000054511fc160 r12 = 0x0000013015801060
r13 = 0x0000013015804000 r14 = 0x0000000000000000
r15 = 0x00000000009ca000 rip = 0x00007ff61a2dbc4d
Found by: call frame info
16 Code - Insiders.exe!v8::internal::parsing::ParseAny(v8::internal::ParseInfo *,v8::internal::Handle<v8::internal::SharedFunctionInfo>,v8::internal::Isolate *,v8::internal::parsing::ReportStatisticsMode) [parsing.cc : 105 + 0xac]
rbx = 0x00007ff620b8dc38 rbp = 0x0000013015801020
rsp = 0x00000054511fc1e0 r12 = 0x0000013015801060
r13 = 0x0000013015804000 r14 = 0x0000000000000000
r15 = 0x00000000009ca000 rip = 0x00007ff61a28f10e
Found by: call frame info
17 Code - Insiders.exe!v8::internal::Compiler::Compile(v8::internal::Isolate *,v8::internal::Handle<v8::internal::SharedFunctionInfo>,v8::internal::Compiler::ClearExceptionFlag,v8::internal::IsCompiledScope *,v8::internal::CreateSourcePositions) [compiler.cc : 2630 + 0x16]
rbx = 0x00007ff620b8dc38 rbp = 0x0000013015801020
rsp = 0x00000054511fc890 r12 = 0x0000013015801060
r13 = 0x0000013015804000 r14 = 0x0000000000000000
r15 = 0x00000000009ca000 rip = 0x00007ff61a064916
Found by: call frame info
18 Code - Insiders.exe!v8::internal::Compiler::Compile(v8::internal::Isolate *,v8::internal::Handle<v8::internal::JSFunction>,v8::internal::Compiler::ClearExceptionFlag,v8::internal::IsCompiledScope *) [compiler.cc : 2693 + 0x1e]
rbx = 0x00007ff620b8dc38 rbp = 0x0000013015801020
rsp = 0x00000054511fcc00 r12 = 0x0000013015801060
r13 = 0x0000013015804000 r14 = 0x0000000000000000
r15 = 0x00000000009ca000 rip = 0x00007ff61a066b24
Found by: call frame info
19 Code - Insiders.exe!v8::internal::Runtime_CompileLazy(int,unsigned __int64 *,v8::internal::Isolate *) [runtime-compiler.cc : 45 + 0xa2]
rbx = 0x00007ff620b8dc38 rbp = 0x0000013015801020
rsp = 0x00000054511fccd0 r12 = 0x0000013015801060
r13 = 0x0000013015804000 r14 = 0x0000000000000000
r15 = 0x00000000009ca000 rip = 0x00007ff61a31aa12
Found by: call frame info
20 Code - Insiders.exe!Builtins_CEntry_Return1_ArgvOnStack_NoBuiltinExit + 0x3a
rbx = 0x00007ff620b8dc38 rbp = 0x0000013015801020
rsp = 0x00000054511fcd50 r12 = 0x0000013015801060
r13 = 0x0000013015804000 r14 = 0x0000000000000000
r15 = 0x00000000009ca000 rip = 0x00007ff61a77fefa
Found by: call frame info
21 Code - Insiders.exe!Builtins_MapPrototypeHas + 0x3a
rbx = 0x00007ff620b8dc38 rbp = 0x0000013015801020
rsp = 0x00000054511fcd60 r12 = 0x0000013015801060
r13 = 0x0000013015804000 r14 = 0x0000000000000000
r15 = 0x00000000009ca000 rip = 0x00007ff61a74643a
Found by: call frame info
``` | bug,freeze-slow-crash-leak,windows | low | Critical |
2,628,971,527 | godot | `.app` files can't be deleted from the browse files window | ### Tested versions
4.3.stable
### System information
Godot v4.3.stable - macOS 15.0.1 - Vulkan (Mobile) - integrated Apple M3 Max - Apple M3 Max (14 Threads)
### Issue description
Whether you consider this bug report an issue at all depends on your answer to this question: is a file with the `.app` extension a file or a folder?
One could argue: both. It's a package that Finder usually considers to be a single file, unless you right click on the `.app` file and choose Show Package Contents, after which you open the file as a folder.
This is perhaps a funny edge case, but I am of the opinion that the browser should be able to delete the `.app` file, hence why I'm issuing a bug report on it. I specifically right-mouse-clicked on it to delete it, so why shouldn't it work?
https://github.com/user-attachments/assets/bc65681d-95a3-43f9-af3e-22bc2757f441
### Steps to reproduce
Try to remove an `.app` file from the browse file window in Godot. It won't work.
### Minimal reproduction project (MRP)
n/a | discussion,topic:editor | low | Critical |
2,628,975,354 | react | [Compiler Bug]: | ### What kind of issue is this?
- [X] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [ ] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhASwLYAcIwC4AEwBUYCAsghhADQlkBKCAZnaQgMICGANjwEZc4AawIBfAsxgQMBADogYCIXgUBuOQDtMOfEQJ4AFtLx4eCcZOmyFPCABMuYQ+q1aEAD12F7LLlB5CZihNODw0CE0CcgBPAEEsLAAKAEoiLQICOEiwQiVmAgBeegQmZlSNKIysnMIjEzMEewAxCDhSIpLKaiTqzNSigD4+zINjCFNzXqjR2YHCwYJ8gDp2mCVNPAB+ZeY20lSaEdmAVgAGM6OZ2czgY5vzLns0TQBzZEleMiubm7wYLhoHgvd4GGBQBA-X7iKGzFKwggAbT27TAAF0oSlKtV2ABRZjMBBhJLzRbAeoTRotfZgVJiOiIimTJqtVFolJuGZKPCwKIAHmeADclixCsB8mJBgAJBB8CAEADquB49j5AHohYNKmItCAxEA
### Repro steps
The compiler assumes that `ref.current` is read during render when it appears inside `useMemo()`, even when `useMemo` returns a callback that is only called in a `useEffect` and never during render. See the playground for the code.
The most common use case for specifying a callback with `useMemo` instead of `useCallback` is when the callback is wrapped in a debouncing technique, like `lodash/throttle`:
```tsx
const throttledFocus = useMemo(
() =>
throttle(
() => ref.current?.focus(),
500,
{
leading: false,
trailing: true,
},
),
[focus],
);
```
It's as if the compiler isn't differentiating between the closure that computes the memoized value (which does run during render) and a memoized value that is itself a callback ๐ค
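The distinction can be illustrated without React at all (a plain-JS sketch; `ref` here is just a hypothetical mutable container, not a real React ref):

```javascript
const ref = { current: null };

// "Eager" factory: reads ref.current while producing the memoized value,
// i.e. the read happens at creation ("render") time.
const eager = (() => ref.current)();

// "Lazy" factory: returns a callback; ref.current is only read when the
// callback later fires (e.g. from an effect), never at creation time.
const lazy = (() => () => ref.current)();

ref.current = "focused";
console.log(eager);  // null: captured at creation time
console.log(lazy()); // "focused": read deferred to call time
```

Only the first pattern actually reads the ref during render; the second defers the read, which is the case the compiler appears to flag anyway.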
### How often does this bug happen?
Every time
### What version of React are you using?
^18.3.1
### What version of React Compiler are you using?
19.0.0-beta-6fc168f-20241025 | Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler | low | Critical |
2,628,975,943 | angular | tutorial offers solutions but not answers | ### Which @angular/* package(s) are relevant/related to the feature request?
_No response_
### Description

I am keen to learn something practical, so I am working through the angular.dev tutorials early on. The answers are included in the tutorials, which is helpful for checking syntax. This time I looked at the answer as well, despite having solved it myself: the answer is correct in its output, but that output is not possible from the code of the supposed solution. I now wonder whether I am looking at an issue, or whether the learning material is simply not maintained.
### Proposed solution
The lessons are helpful, so making them all up to date and accurate is a good fix, i.e. just repair them.
### Alternatives considered
It cannot be left inaccurate; it shows that the code is not consistent with the output. | area: docs | low | Minor |
2,628,999,787 | godot | [macOS] Export to Windows doesn't start unless I click on the Wine app | ### Tested versions
4.3.stable
### System information
Godot v4.3.stable - macOS 15.0.1 - Vulkan (Mobile) - integrated Apple M3 Max - Apple M3 Max (14 Threads)
### Issue description
https://github.com/user-attachments/assets/f0b4b566-b330-4581-8987-950dfab62217
### Steps to reproduce
Export to Windows on macOS.
### Minimal reproduction project (MRP)
n/a | bug,platform:windows,platform:macos,topic:export | low | Minor |
2,629,014,462 | deno | deno.land & jsr.io "403 Forbidden" and "os error 104" | Version: Deno 2.0.4
Recently I've been trying to deploy a Fresh project on my own infrastructure, which I set up in Hetzner Cloud. Currently I am facing an issue when installing dependencies from `deno.land` and `jsr.io`.
The initial issue that I faced was identical to the one described in this issue: https://github.com/denoland/deno/issues/23530. That being an issue with IPV6. However after disabling IPV6 on both of my VPS, the same issues persist:
### deno.land
```
error: client error (Connect): Connection reset by peer (os error 104)
```
and
```
error: Import 'https://deno.land/x/fresh@1.7.3/dev.ts' failed: error sending request for url (https://deno.land/x/fresh@1.7.3/dev.ts): client error (Connect): tls handshake eof
```
### jsr.io
```
JSR package manifest for '@luca/esbuild-deno-loader' failed to load. Import 'https://jsr.io/@luca/esbuild-deno-loader/meta.json' failed: 403 Forbidden
```
Interestingly if I try to manually `curl` these packages from the VPS there is no issue (for example: `curl https://deno.land/x/fresh@1.7.3/dev.ts`).
I've emailed support@deno.com with the IP addresses of the machines that are failing, but haven't heard back in a few days.
For now I will say that I have worked around the issue by changing the imports to deno.land dependencies to use the `raw.githubusercontent.com` urls:
```json
"imports": {
"$fresh/": "https://raw.githubusercontent.com/denoland/fresh/1.7.3/"
}
```
But this feels less than ideal. Any help with this would be greatly appreciated. Thank you! | bug,needs investigation,jsr | low | Critical |
2,629,030,117 | deno | Deno task cannot run a script when it contains "~" | Version: Deno 2.0.4
I have the following script in `package.json`. When I run `deno task sync`, `deno` returns an error (see below). Am I doing something wrong?
**Script:**
```
"scripts": {
"sync": "watch rsync -av --delete ./build/ server-dns:~/dir/build/"
},
```
**Error:**
```
error: Error parsing script 'sync:wildnis'.
Caused by:
Unexpected character.
~/dir/build/
~
``` | bug,task runner | low | Critical |
2,629,036,575 | terminal | Enhance pane context menu | ### Description of the new feature
Add split pane up/down/left/right context menus as submenu.
Add split pane with profile up/down/left/right context menus as submenu.
Add swap panes up/down/left/right context menus as submenu.
Add toggle pane zoom context menu.
Add close other panes context menu.
The motivation is that Windows users are more accustomed to working with GUI Menus using a mouse, unlike Linux users.
- Relevant PR: (#18126)
### Proposed technical implementation details
Implemented it - PR (#18126) | Issue-Feature,Area-UserInterface,Product-Terminal | low | Minor |
2,629,037,167 | react | Bug: Incorrect Checkbox Toggle on Mobile Devices | ### Issue: Incorrect Checkbox Toggle on Mobile Devices
---
**Summary**
I noticed this bug in lists containing two or more checkboxes, tapping on a checkbox sometimes toggles a different one. This issue appears to be isolated to mobile and touch-based devices.
**Observed Behavior**
When interacting with checkboxes on mobile, there is an inconsistency: clicking on one checkbox may inadvertently toggle another. For instance, tapping in sequence may result in the prior checkbox activating or deactivating instead of the one currently being clicked.
**Technical Details**
- This problem occurs specifically in React version 18.3.1.
- The error does not appear to exist in React 16.14.0, where toggling functions as expected.
- The bug might relate to how touch event listeners are handled in React 18.3.1, given its absence in earlier versions.
**Steps to Reproduce**
1. Access the app on a mobile device (e.g., iPhone 15 Pro) using Safari, Orion, or Firefox.
2. Select multiple checkboxes in sequence.
3. Notice that the expected toggle behavior is disrupted, with the wrong checkbox sometimes activating.
**Expected Result**
Each checkbox should respond only to its respective tap or click, toggling exclusively as the user interacts with it.
**Live Demonstration**
- **Correct Behavior (React 16.14.0)**:
[React 16 Example Sandbox](https://codesandbox.io/s/react-16-checkboxes-xzzl96)
[Live Preview](https://xzzl96.csb.app/)
- **Incorrect Behavior (React 18.2.0)**:
[React 18 Example Sandbox](https://codesandbox.io/s/react-18-checkboxes-zrhpfp)
[Live Preview](https://zrhpfp.csb.app/) | Status: Unconfirmed | low | Critical |
2,629,063,705 | godot | Can't Quick Open multiple files at once anymore in 4.4 | ### Tested versions
- v4.4.dev.custom_build [c6c464cf9]
### System information
Windows 10 - Vulkan
### Issue description
After the Quick Open panel change at some point in 4.4, I can no longer multi-select files to open. For example, with 'Quick Open Script', I used to be able to hold `Shift` or `Control` and select several files, and they'd all open.
With the new panel this no longer works; it just opens the one I click on:

### Steps to reproduce
Scene -> Quick Open Script: try to select several files to open.
### Minimal reproduction project (MRP)
Any | enhancement,discussion,topic:editor,confirmed,usability | low | Minor |
2,629,082,095 | kubernetes | DRA: test flake in DRA [Feature:DynamicResourceAllocation] cluster DaemonSet with admin access [Feature:DRAAdminAccess] | ### Which jobs are flaking?
pull-kubernetes-kind-dra-all
https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/127511/pull-kubernetes-kind-dra-all/1852326009861312512
### Which tests are flaking?
DRA [Feature:DynamicResourceAllocation] cluster DaemonSet with admin access
### Since when has it been flaking?
Only once so far on 2024-11-01.
### Testgrid link
https://testgrid.k8s.io/sig-node-dynamic-resource-allocation#ci-kind-dra-all
### Reason for failure (if possible)
"support validating admission policy for admin access" must have run in parallel to "DaemonSet with admin access". The former deploys a VAP which prevents admin access, the latter doesn't.
### Anything else we need to know?
This patch *should* fix it, but somehow the namespace selector didn't match the test namespace, and thus the VAP no longer triggered:
```patch
diff --git a/test/e2e/dra/dra.go b/test/e2e/dra/dra.go
index cb2324e0a5e..1498b0cbea4 100644
--- a/test/e2e/dra/dra.go
+++ b/test/e2e/dra/dra.go
@@ -830,6 +830,10 @@ var _ = framework.SIGDescribe("node")("DRA", feature.DynamicResourceAllocation,
f.It("support validating admission policy for admin access", feature.DRAAdminAccess, func(ctx context.Context) {
// Create VAP, after making it unique to the current test.
adminAccessPolicyYAML := strings.ReplaceAll(adminAccessPolicyYAML, "dra.example.com", b.f.UniqueName)
+ adminAccessPolicyYAML = strings.ReplaceAll(adminAccessPolicyYAML,
+ "null # namespaceSelector",
+ fmt.Sprintf(`{matchExpressions: [{key: "metadata.name", operator: "In", values: [%q]}]}`, f.Namespace.Name),
+ )
driver.createFromYAML(ctx, []byte(adminAccessPolicyYAML), "")
diff --git a/test/e2e/dra/test-driver/deploy/example/admin-access-policy.yaml b/test/e2e/dra/test-driver/deploy/example/admin-access-policy.yaml
index 822b1c7d991..964d0f00145 100644
--- a/test/e2e/dra/test-driver/deploy/example/admin-access-policy.yaml
+++ b/test/e2e/dra/test-driver/deploy/example/admin-access-policy.yaml
@@ -22,6 +22,9 @@ spec:
apiVersions: ["v1alpha3", "v1beta1"]
operations: ["CREATE", "UPDATE"]
resources: ["resourceclaims"]
+
+ # This is for tests. Don't change the comment!
+ namespaceSelector: null # namespaceSelector
validations:
- expression: '! object.spec.devices.requests.exists(e, has(e.adminAccess) && e.adminAccess)'
reason: Forbidden
@@ -52,6 +55,9 @@ spec:
apiVersions: ["v1alpha3", "v1beta1"]
operations: ["CREATE", "UPDATE"]
resources: ["resourceclaimtemplates"]
+
+ # This is for tests. Don't change the comment!
+ namespaceSelector: null # namespaceSelector
validations:
- expression: '! object.spec.spec.devices.requests.exists(e, has(e.adminAccess) && e.adminAccess)'
reason: Forbidden
```
### Relevant SIG(s)
/sig node
/wg device-management | sig/node,kind/flake,needs-triage,wg/device-management | low | Critical |
2,629,104,430 | three.js | BatchedMesh.InstancedBufferGeometry instead BatchedMesh.BufferGeometry possible ? | ### Description
Is it possible for `BatchedMesh` to use an `InstancedBufferGeometry` instead of a `BufferGeometry`?
### Solution
Change the `BufferGeometry` to an `InstancedBufferGeometry` and recompute the draw range after frustum culling.
### Alternatives
Maybe without raycasting. It's needed for rendering grass in large amounts.
### Additional context
_No response_ | Suggestion | low | Minor |
2,629,109,327 | pytorch | _refs.div.floor_rounding returns NaN instead of +- inf when a divide by 0 occurs | ### ๐ Describe the bug
Currently, in `test/test_ops.py::TestCommonCPU::test_python_ref_torch_fallback__refs_div_floor_rounding_cpu_bfloat16` the floor div operator is tested using `torch` and `torch._refs`. In `PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=13` the divisor is very small. When this occurs, the `torch` result contains `+/- inf`, and the `torch._refs` result contains `NaN` which causes the test to fail.
This issue is dependent on https://github.com/pytorch/pytorch/pull/136308 landing which ensures the rounding mode will be passed through the kwargs and adds a skip for this test.
Reproducer:
Once the above PR is landed, disable the skip in `<base dir>/test/test_ops.py:535`
```
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=13 python test/test_ops.py TestCommonCPU.test_python_ref_torch_fallback__refs_div_floor_rounding_cpu_bfloat16
```
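For illustration, the divergence can be reproduced in plain Python floating point. This is a sketch of one plausible mechanism only (an assumption; the actual `_refs` decomposition may differ): an eager floor keeps an overflowed quotient as `inf`, while a decomposition-style floor of the form `q - fmod(q, 1)` turns it into `nan`.

```python
import math

a, b = 3.0, 5e-324          # tiny divisor: the true quotient overflows
q = a / b                   # IEEE-754 gives +inf
# Eager-style floor keeps the infinity (guarded, since math.floor(inf) raises).
eager = q if math.isinf(q) else math.floor(q)
# A decomposition-style floor, q - (q mod 1), yields nan here:
# inf % 1.0 is nan, and inf - nan is nan.
decomposed = q - (q % 1.0)
print(eager, decomposed)    # inf nan
```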
### Versions
PyTorch version: 2.6.0a0+gitfeb5547
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.29.2
Libc version: glibc-2.35
Python version: 3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:40:32) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-119-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 60
On-line CPU(s) list: 0-59
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7742 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 30
Socket(s): 1
Stepping: 0
BogoMIPS: 4491.56
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean pausefilter pfthreshold v_vmsave_vmload vgif umip rdpid arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.9 MiB (30 instances)
L1i cache: 1.9 MiB (30 instances)
L2 cache: 15 MiB (30 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 2
NUMA node0 CPU(s): 0-29
NUMA node1 CPU(s): 30-59
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] torch==2.6.0a0+gitfeb5547
[conda] Could not collect
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | module: cpu,triaged | low | Critical |
2,629,130,382 | material-ui | Support for disabling some shadows | ### Summary
Currently, MUI does not allow overriding the `Shadows` without augmenting the interface.
Twenty-four values are quite a lot to choose from; in our use case, three values are enough.
Since the current `Shadows` type does not allow partial overrides (you must replace the entire list of 24 values), it would be beneficial to have a way of turning off the values that are not used.
### Examples
For example, in the `Typography` component, we override the `variants` and disable them via the `TypographyPropsVariantOverrides` interface.
See the docs [Adding & disabling variants](https://mui.com/system/typography/#adding-amp-disabling-variants). This functionality could be extended to the `Shadows`.
### Motivation
The use case varies across different applications. However, the chances that a design system will have 24 shadow values are almost zero. In our design system, we agreed to have 2 to 3 shadow values.
Hence, being able to disable the ones that are not used is beneficial.
My suggestion is a bit related to this: https://github.com/mui/material-ui/issues/28820, but not the same thing.
**Search keywords**: shadow overrides | v6.x,customization: theme,enhancement | low | Minor |
2,629,154,718 | langchain | anthropic_api_key not used for ChatLiteLLM | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
```python
from langchain_community.chat_models import ChatLiteLLM
from langchain_core.messages import HumanMessage
chat = ChatLiteLLM(model="claude-3-haiku-20240307", anthropic_api_key="...")
messages = [
HumanMessage(
content="Translate this sentence from English to French. I love programming."
)
]
chat(messages)
```
will raise an error
```
AuthenticationError: litellm.AuthenticationError: Missing Anthropic API Key - A call is being made to anthropic but no key is set either in the environment variables or via params. Please set `ANTHROPIC_API_KEY` in your environment vars
```
However, setting the environment variable will work
```python
import os
from langchain_community.chat_models import ChatLiteLLM
from langchain_core.messages import HumanMessage
os.environ["ANTHROPIC_API_KEY"] = "xxx"
chat = ChatLiteLLM(model="claude-3-haiku-20240307")
messages = [
HumanMessage(
content="Translate this sentence from English to French. I love programming."
)
]
chat(messages)
```
### Error Message and Stack Trace (if applicable)
Full stack trace
```
/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py in warning_emitting_wrapper(*args, **kwargs)
180 warned = True
181 emit_warning()
--> 182 return wrapped(*args, **kwargs)
183
184 async def awarning_emitting_wrapper(*args: Any, **kwargs: Any) -> Any:
/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py in __call__(self, messages, stop, callbacks, **kwargs)
1015 **kwargs: Any,
1016 ) -> BaseMessage:
-> 1017 generation = self.generate(
1018 [messages], stop=stop, callbacks=callbacks, **kwargs
1019 ).generations[0][0]
/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py in generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
641 if run_managers:
642 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 643 raise e
644 flattened_outputs = [
645 LLMResult(generations=[res.generations], llm_output=res.llm_output) # type: ignore[list-item]
/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py in generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
631 try:
632 results.append(
--> 633 self._generate_with_cache(
634 m,
635 stop=stop,
/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py in _generate_with_cache(self, messages, stop, run_manager, **kwargs)
849 else:
850 if inspect.signature(self._generate).parameters.get("run_manager"):
--> 851 result = self._generate(
852 messages, stop=stop, run_manager=run_manager, **kwargs
853 )
/usr/local/lib/python3.10/dist-packages/langchain_community/chat_models/litellm.py in _generate(self, messages, stop, run_manager, stream, **kwargs)
357 message_dicts, params = self._create_message_dicts(messages, stop)
358 params = {**params, **kwargs}
--> 359 response = self.completion_with_retry(
360 messages=message_dicts, run_manager=run_manager, **params
361 )
/usr/local/lib/python3.10/dist-packages/langchain_community/chat_models/litellm.py in completion_with_retry(self, run_manager, **kwargs)
290 return self.client.completion(**kwargs)
291
--> 292 return _completion_with_retry(**kwargs)
293
294 @pre_init
/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in wrapped_f(*args, **kw)
334 copy = self.copy()
335 wrapped_f.statistics = copy.statistics # type: ignore[attr-defined]
--> 336 return copy(f, *args, **kw)
337
338 def retry_with(*args: t.Any, **kwargs: t.Any) -> WrappedFn:
/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in __call__(self, fn, *args, **kwargs)
473 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
474 while True:
--> 475 do = self.iter(retry_state=retry_state)
476 if isinstance(do, DoAttempt):
477 try:
/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in iter(self, retry_state)
374 result = None
375 for action in self.iter_state.actions:
--> 376 result = action(retry_state)
377 return result
378
/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in <lambda>(rs)
396 def _post_retry_check_actions(self, retry_state: "RetryCallState") -> None:
397 if not (self.iter_state.is_explicit_retry or self.iter_state.retry_run_result):
--> 398 self._add_action_func(lambda rs: rs.outcome.result())
399 return
400
/usr/lib/python3.10/concurrent/futures/_base.py in result(self, timeout)
449 raise CancelledError()
450 elif self._state == FINISHED:
--> 451 return self.__get_result()
452
453 self._condition.wait(timeout)
/usr/lib/python3.10/concurrent/futures/_base.py in __get_result(self)
401 if self._exception:
402 try:
--> 403 raise self._exception
404 finally:
405 # Break a reference cycle with the exception in self._exception
/usr/local/lib/python3.10/dist-packages/tenacity/__init__.py in __call__(self, fn, *args, **kwargs)
476 if isinstance(do, DoAttempt):
477 try:
--> 478 result = fn(*args, **kwargs)
479 except BaseException: # noqa: B902
480 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
/usr/local/lib/python3.10/dist-packages/langchain_community/chat_models/litellm.py in _completion_with_retry(**kwargs)
288 @retry_decorator
289 def _completion_with_retry(**kwargs: Any) -> Any:
--> 290 return self.client.completion(**kwargs)
291
292 return _completion_with_retry(**kwargs)
/usr/local/lib/python3.10/dist-packages/litellm/utils.py in wrapper(*args, **kwargs)
1011 e, traceback_exception, start_time, end_time
1012 ) # DO NOT MAKE THREADED - router retry fallback relies on this!
-> 1013 raise e
1014
1015 @wraps(original_function)
/usr/local/lib/python3.10/dist-packages/litellm/utils.py in wrapper(*args, **kwargs)
901 print_verbose(f"Error while checking max token limit: {str(e)}")
902 # MODEL CALL
--> 903 result = original_function(*args, **kwargs)
904 end_time = datetime.datetime.now()
905 if "stream" in kwargs and kwargs["stream"] is True:
/usr/local/lib/python3.10/dist-packages/litellm/main.py in completion(model, messages, timeout, temperature, top_p, n, stream, stream_options, stop, max_completion_tokens, max_tokens, modalities, audio, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, parallel_tool_calls, deployment_id, extra_headers, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
2997 except Exception as e:
2998 ## Map to OpenAI Exception
-> 2999 raise exception_type(
3000 model=model,
3001 custom_llm_provider=custom_llm_provider,
/usr/local/lib/python3.10/dist-packages/litellm/main.py in completion(model, messages, timeout, temperature, top_p, n, stream, stream_options, stop, max_completion_tokens, max_tokens, modalities, audio, presence_penalty, frequency_penalty, logit_bias, user, response_format, seed, tools, tool_choice, logprobs, top_logprobs, parallel_tool_calls, deployment_id, extra_headers, functions, function_call, base_url, api_version, api_key, model_list, **kwargs)
1753 api_base += "/v1/messages"
1754
-> 1755 response = anthropic_chat_completions.completion(
1756 model=model,
1757 messages=messages,
/usr/local/lib/python3.10/dist-packages/litellm/llms/anthropic/chat/handler.py in completion(self, model, messages, api_base, custom_prompt_dict, model_response, print_verbose, encoding, api_key, logging_obj, optional_params, timeout, acompletion, litellm_params, logger_fn, headers, client)
446 client=None,
447 ):
--> 448 headers = validate_environment(
449 api_key,
450 headers,
/usr/local/lib/python3.10/dist-packages/litellm/llms/anthropic/chat/handler.py in validate_environment(api_key, user_headers, model, messages, tools, anthropic_version)
64
65 if api_key is None:
---> 66 raise litellm.AuthenticationError(
67 message="Missing Anthropic API Key - A call is being made to anthropic but no key is set either in the environment variables or via params. Please set `ANTHROPIC_API_KEY` in your environment vars",
68 llm_provider="anthropic",
AuthenticationError: litellm.AuthenticationError: Missing Anthropic API Key - A call is being made to anthropic but no key is set either in the environment variables or via params. Please set `ANTHROPIC_API_KEY` in your environment vars
```
### Description
I am trying to call `ChatLiteLLM` with `anthropic_api_key` passed as a constructor argument instead of via the environment variable, but the call raises an error saying the Anthropic API key is not set.
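For context, the resolution order the constructor argument is expected to follow can be sketched in plain Python (`resolve_api_key` is a hypothetical helper for illustration, not part of langchain or litellm):

```python
import os

def resolve_api_key(explicit_key=None, env_var="ANTHROPIC_API_KEY"):
    """An explicitly passed key should win; the environment variable
    is only a fallback. Raise if neither is available."""
    key = explicit_key or os.environ.get(env_var)
    if key is None:
        raise ValueError(f"Missing API key: pass it explicitly or set {env_var}")
    return key

# An explicit argument wins even when the env var is unset.
print(resolve_api_key("sk-test"))  # -> sk-test

# Fallback to the environment.
os.environ["ANTHROPIC_API_KEY"] = "sk-from-env"
print(resolve_api_key())  # -> sk-from-env
```

The bug report amounts to the first branch of this precedence being dropped: the explicit `anthropic_api_key` argument never reaches litellm, so only the environment-variable fallback works.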
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024
> Python Version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.3.15
> langchain: 0.3.6
> langchain_community: 0.3.4
> langsmith: 0.1.137
> langchain_text_splitters: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> numpy: 1.26.4
> orjson: 3.10.10
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> typing-extensions: 4.12.2 | ๐ค:bug | low | Critical |
2,629,174,480 | pytorch | dynamo re-uses incorrect compiled frame when changing requires-gradness of model params | Repro:
```python
import torch
import torch.nn as nn
def adjust_model(model):
to_freeze = model.num_iter % 2 == 0
if to_freeze:
for param in model.layer2.parameters():
param.requires_grad = False
else:
for param in model.layer2.parameters():
param.requires_grad = True
class MyModule(nn.Module):
def __init__(self, input_size, hidden_size, output_size):
super().__init__()
self.layer1 = nn.Linear(hidden_size, hidden_size)
self.layer2 = nn.Linear(hidden_size, hidden_size)
self.num_iter = 0
def forward(self, x):
x = self.layer2(x + self.layer1.bias)
self.num_iter += 1
return x
# Set random seed for reproducibility
torch.manual_seed(0)
# Generate synthetic data
input_size = 1024
hidden_size = 1024
output_size = 1
num_samples = 2048
# Features are random floats, and labels are also random floats
features = torch.randn(num_samples, input_size, device='cuda')
model = MyModule(input_size, hidden_size, output_size)
model = model.cuda()
model = torch.compile(model)
from torch.profiler import profile, ProfilerActivity
activities = [ProfilerActivity.CPU, ProfilerActivity.CUDA, ProfilerActivity.XPU]
with profile(activities=activities) as prof:
for _ in range(10):
model.zero_grad(True)
adjust_model(model)
res = model(features)
res.sum().backward()
prof.export_chrome_trace("trace_grad_change_compile_bad2.json")
```
The expected behavior in this repro is that:
* every iteration of the fw/bw, we have frozen or unfrozen layer2's weights
* therefore, the number of matmuls in the backward should flipflop between 1 and 2 (when layer2.weight is frozen, we have a single matmul for the gradient of the activation).
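The matmul count follows from the chain rule for a linear layer `y = x @ W.T + b`: `grad_x = grad_y @ W` is always needed (to propagate the gradient back to `layer1.bias`), while `grad_W = grad_y.T @ x` is only needed when `W` requires grad. A minimal pure-Python counting sketch (`linear_backward_matmuls` is a hypothetical helper, not real autograd code):

```python
def linear_backward_matmuls(weight_requires_grad, input_requires_grad=True):
    """Count the matmuls autograd needs for the backward of
    y = x @ W.T + b, given which inputs require gradients."""
    matmuls = 0
    if input_requires_grad:   # grad_x = grad_y @ W
        matmuls += 1
    if weight_requires_grad:  # grad_W = grad_y.T @ x
        matmuls += 1
    return matmuls

print(linear_backward_matmuls(weight_requires_grad=False))  # 1 (frozen)
print(linear_backward_matmuls(weight_requires_grad=True))   # 2 (unfrozen)
```

In the repro the input to `layer2` always requires grad (it depends on `layer1.bias`), so the backward should alternate between 1 and 2 matmuls as `layer2.weight` is frozen and unfrozen.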
The trace that I get is this, though:
<img width="1408" alt="image" src="https://github.com/user-attachments/assets/96d2d840-4f16-4996-9b1e-d956fb09689a">
<img width="1104" alt="image" src="https://github.com/user-attachments/assets/58341643-0fe9-4755-8fdf-f6ee786d861f">
You can see that in each iteration, we end up with 2 matmuls in the backward, and we are also always re-using compiled frame `0/1` (we should be flip-flopping between `0/0` and `0/1`).
Here is the generated tlparse: https://interncache-all.fbcdn.net/manifold/tlparse_reports/tree/logs/hirsheybar/custom/index.html
And here is the logs with `TORCH_LOGS="+guards"`:
```
DEBUG: GUARDS:
DEBUG:
TREE_GUARD_MANAGER:
+- RootGuardManager
| +- DEFAULT_DEVICE: utils_device.CURRENT_DEVICE == None # _dynamo/output_graph.py:470 in init_ambient_guards
| +- GLOBAL_STATE: ___check_global_state()
| +- TORCH_FUNCTION_MODE_STACK: ___check_torch_function_mode_stack()
| +- GuardManager: source=L['x'], accessed_by=DictGetItemGuardAccessor(x)
| | +- TENSOR_MATCH: check_tensor(L['x'], Tensor, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.float32, device=0, requires_grad=False, size=[2048, 1024], stride=[1024, 1]) # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | +- NO_HASATTR: hasattr(L['x'], '_dynamo_dynamic_indices') == False # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | +- NO_TENSOR_ALIASING: check_no_aliasing(L['x'], L['self']._modules['layer1']._parameters['bias'], L['self']._modules['layer2']._parameters['bias'], L['self']._modules['layer2']._parameters['weight'])
| +- GuardManager: source=L['self'], accessed_by=DictGetItemGuardAccessor(self)
| | +- TYPE_MATCH: ___check_type_id(L['self'], 99617504) # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | +- GuardManager: source=L['self'].__dict__, accessed_by=GetGenericDictGuardAccessor
| | | +- GuardManager: source=L['self']._buffers, accessed_by=DictGetItemGuardAccessor(_buffers)
| | | | +- DICT_LENGTH: not L['self']._buffers # buffers = self.__dict__.get("_buffers") # nn/modules/module.py:1994 in __setattr__
| | | +- GuardManager: source=L['self']._modules, accessed_by=DictGetItemGuardAccessor(_modules)
| | | | +- DICT_LENGTH: len(L['self']._modules) == 2 # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | | | +- GuardManager: source=L['self']._modules['layer1'], accessed_by=DictGetItemGuardAccessor(layer1)
| | | | | +- TYPE_MATCH: ___check_type_id(L['self']._modules['layer1'], 81972256) # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | | | | +- GuardManager: source=L['self']._modules['layer1'].__dict__, accessed_by=GetGenericDictGuardAccessor
| | | | | | +- GuardManager: source=L['self']._modules['layer1']._parameters, accessed_by=DictGetItemGuardAccessor(_parameters)
| | | | | | | +- DICT_LENGTH: len(L['self']._modules['layer1']._parameters) == 2 # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | | | | | | +- GuardManager: source=L['self']._modules['layer1']._parameters['weight'], accessed_by=DictGetItemGuardAccessor(weight)
| | | | | | | +- GuardManager: source=L['self']._modules['layer1']._parameters['bias'], accessed_by=DictGetItemGuardAccessor(bias)
| | | | | | | | +- TENSOR_MATCH: check_tensor(L['self']._modules['layer1']._parameters['bias'], Parameter, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.float32, device=0, requires_grad=True, size=[1024], stride=[1]) # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | | | | | | | +- NO_TENSOR_ALIASING
| | | | +- GuardManager: source=L['self']._modules['layer2'], accessed_by=DictGetItemGuardAccessor(layer2)
| | | | | +- TYPE_MATCH: ___check_type_id(L['self']._modules['layer2'], 81972256) # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | | | | +- GuardManager: source=L['self']._modules['layer2'].__dict__, accessed_by=GetGenericDictGuardAccessor
| | | | | | +- DICT_CONTAINS: not ___dict_contains('forward', L['self']._modules['layer2'].__dict__) # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | | | | | +- GuardManager: source=L['self']._modules['layer2']._parameters, accessed_by=DictGetItemGuardAccessor(_parameters)
| | | | | | | +- DICT_LENGTH: len(L['self']._modules['layer2']._parameters) == 2 # return F.linear(input, self.weight, self.bias) # nn/modules/linear.py:125 in forward
| | | | | | | +- GuardManager: source=L['self']._modules['layer2']._parameters['weight'], accessed_by=DictGetItemGuardAccessor(weight)
| | | | | | | | +- TENSOR_MATCH: check_tensor(L['self']._modules['layer2']._parameters['weight'], Parameter, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.float32, device=0, requires_grad=False, size=[1024, 1024], stride=[1024, 1]) # return F.linear(input, self.weight, self.bias) # nn/modules/linear.py:125 in forward
| | | | | | | | +- NO_TENSOR_ALIASING
| | | | | | | +- GuardManager: source=L['self']._modules['layer2']._parameters['bias'], accessed_by=DictGetItemGuardAccessor(bias)
| | | | | | | | +- TENSOR_MATCH: check_tensor(L['self']._modules['layer2']._parameters['bias'], Parameter, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.float32, device=0, requires_grad=False, size=[1024], stride=[1]) # return F.linear(input, self.weight, self.bias) # nn/modules/linear.py:125 in forward
| | | | | | | | +- NO_TENSOR_ALIASING
| | | +- GuardManager: source=L['self'].num_iter, accessed_by=DictGetItemGuardAccessor(num_iter)
| | | | +- EQUALS_MATCH: L['self'].num_iter == 0 # self.num_iter += 1 # tmp4.py:33 in forward
| | | +- GuardManager: source=L['self']._parameters, accessed_by=DictGetItemGuardAccessor(_parameters)
| | | | +- DICT_LENGTH: not L['self']._parameters # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| +- GuardManager: source=G, accessed_by=GlobalsGuardAccessor
| | +- GuardManager: source=G['__builtins_dict___0'], accessed_by=DictGetItemGuardAccessor(__builtins_dict___0)
| | | +- GuardManager: source=G['__builtins_dict___0']['super'], accessed_by=DictGetItemGuardAccessor(super)
| | | | +- ID_MATCH: ___check_obj_id(G['__builtins_dict___0']['super'], 7614144) # super().__setattr__(name, value) # nn/modules/module.py:2032 in __setattr__
| | | +- GuardManager: source=G['__builtins_dict___0']['isinstance'], accessed_by=DictGetItemGuardAccessor(isinstance)
| | | | +- ID_MATCH: ___check_obj_id(G['__builtins_dict___0']['isinstance'], 140428831199440) # if isinstance(value, Parameter): # nn/modules/module.py:1945 in __setattr__
| | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_linear'], accessed_by=DictGetItemGuardAccessor(__import_torch_dot_nn_dot_modules_dot_linear)
| | | +- ID_MATCH: ___check_obj_id(G['__import_torch_dot_nn_dot_modules_dot_linear'], 140423965322720) # return F.linear(input, self.weight, self.bias) # nn/modules/linear.py:125 in forward
| | | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_linear'].F, accessed_by=GetAttrGuardAccessor(F)
| | | | +- ID_MATCH: ___check_obj_id(G['__import_torch_dot_nn_dot_modules_dot_linear'].F, 140423965324880) # return F.linear(input, self.weight, self.bias) # nn/modules/linear.py:125 in forward
| | | | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_linear'].F.linear, accessed_by=GetAttrGuardAccessor(linear)
| | | | | +- ID_MATCH: ___check_obj_id(G['__import_torch_dot_nn_dot_modules_dot_linear'].F.linear, 140426763478864) # return F.linear(input, self.weight, self.bias) # nn/modules/linear.py:125 in forward
| | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_module'], accessed_by=DictGetItemGuardAccessor(__import_torch_dot_nn_dot_modules_dot_module)
| | | +- ID_MATCH: ___check_obj_id(G['__import_torch_dot_nn_dot_modules_dot_module'], 140426663046384) # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_module'].Buffer, accessed_by=GetAttrGuardAccessor(Buffer)
| | | | +- ID_MATCH: ___check_obj_id(G['__import_torch_dot_nn_dot_modules_dot_module'].Buffer, 79489936) # if isinstance(value, Buffer) or buffers is not None and name in buffers: # nn/modules/module.py:1995 in __setattr__
| | | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_module'].Module, accessed_by=GetAttrGuardAccessor(Module)
| | | | +- ID_MATCH: ___check_obj_id(G['__import_torch_dot_nn_dot_modules_dot_module'].Module, 80429408) # if isinstance(value, Module): # nn/modules/module.py:1966 in __setattr__
| | | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_module'].Parameter, accessed_by=GetAttrGuardAccessor(Parameter)
| | | | +- ID_MATCH: ___check_obj_id(G['__import_torch_dot_nn_dot_modules_dot_module'].Parameter, 79484096) # if isinstance(value, Parameter): # nn/modules/module.py:1945 in __setattr__
| | | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_module']._global_forward_hooks, accessed_by=GetAttrGuardAccessor(_global_forward_hooks)
| | | | +- DICT_LENGTH: not G['__import_torch_dot_nn_dot_modules_dot_module']._global_forward_hooks # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_module']._global_backward_hooks, accessed_by=GetAttrGuardAccessor(_global_backward_hooks)
| | | | +- DICT_LENGTH: not G['__import_torch_dot_nn_dot_modules_dot_module']._global_backward_hooks # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_module']._global_forward_pre_hooks, accessed_by=GetAttrGuardAccessor(_global_forward_pre_hooks)
| | | | +- DICT_LENGTH: not G['__import_torch_dot_nn_dot_modules_dot_module']._global_forward_pre_hooks # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_module']._global_backward_pre_hooks, accessed_by=GetAttrGuardAccessor(_global_backward_pre_hooks)
| | | | +- DICT_LENGTH: not G['__import_torch_dot_nn_dot_modules_dot_module']._global_backward_pre_hooks # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
DEBUG: GUARDS:
DEBUG:
TREE_GUARD_MANAGER:
+- RootGuardManager
| +- DEFAULT_DEVICE: utils_device.CURRENT_DEVICE == None # _dynamo/output_graph.py:470 in init_ambient_guards
| +- GLOBAL_STATE: ___check_global_state()
| +- TORCH_FUNCTION_MODE_STACK: ___check_torch_function_mode_stack()
| +- GuardManager: source=L['x'], accessed_by=DictGetItemGuardAccessor(x)
| | +- TENSOR_MATCH: check_tensor(L['x'], Tensor, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.float32, device=0, requires_grad=False, size=[2048, 1024], stride=[1024, 1]) # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | +- NO_HASATTR: hasattr(L['x'], '_dynamo_dynamic_indices') == False # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | +- NO_TENSOR_ALIASING: check_no_aliasing(L['x'], L['self']._modules['layer1']._parameters['bias'], L['self']._modules['layer2']._parameters['bias'], L['self']._modules['layer2']._parameters['weight'])
| +- GuardManager: source=L['self'], accessed_by=DictGetItemGuardAccessor(self)
| | +- TYPE_MATCH: ___check_type_id(L['self'], 99617504) # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | +- GuardManager: source=L['self'].__dict__, accessed_by=GetGenericDictGuardAccessor
| | | +- GuardManager: source=L['self']._buffers, accessed_by=DictGetItemGuardAccessor(_buffers)
| | | | +- DICT_LENGTH: not L['self']._buffers # buffers = self.__dict__.get("_buffers") # nn/modules/module.py:1994 in __setattr__
| | | +- GuardManager: source=L['self']._modules, accessed_by=DictGetItemGuardAccessor(_modules)
| | | | +- DICT_LENGTH: len(L['self']._modules) == 2 # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | | | +- GuardManager: source=L['self']._modules['layer1'], accessed_by=DictGetItemGuardAccessor(layer1)
| | | | | +- TYPE_MATCH: ___check_type_id(L['self']._modules['layer1'], 81972256) # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | | | | +- GuardManager: source=L['self']._modules['layer1'].__dict__, accessed_by=GetGenericDictGuardAccessor
| | | | | | +- GuardManager: source=L['self']._modules['layer1']._parameters, accessed_by=DictGetItemGuardAccessor(_parameters)
| | | | | | | +- DICT_LENGTH: len(L['self']._modules['layer1']._parameters) == 2 # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | | | | | | +- GuardManager: source=L['self']._modules['layer1']._parameters['weight'], accessed_by=DictGetItemGuardAccessor(weight)
| | | | | | | +- GuardManager: source=L['self']._modules['layer1']._parameters['bias'], accessed_by=DictGetItemGuardAccessor(bias)
| | | | | | | | +- TENSOR_MATCH: check_tensor(L['self']._modules['layer1']._parameters['bias'], Parameter, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.float32, device=0, requires_grad=True, size=[1024], stride=[1]) # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | | | | | | | +- NO_TENSOR_ALIASING
| | | | +- GuardManager: source=L['self']._modules['layer2'], accessed_by=DictGetItemGuardAccessor(layer2)
| | | | | +- TYPE_MATCH: ___check_type_id(L['self']._modules['layer2'], 81972256) # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | | | | +- GuardManager: source=L['self']._modules['layer2'].__dict__, accessed_by=GetGenericDictGuardAccessor
| | | | | | +- DICT_CONTAINS: not ___dict_contains('forward', L['self']._modules['layer2'].__dict__) # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | | | | | +- GuardManager: source=L['self']._modules['layer2']._parameters, accessed_by=DictGetItemGuardAccessor(_parameters)
| | | | | | | +- DICT_LENGTH: len(L['self']._modules['layer2']._parameters) == 2 # return F.linear(input, self.weight, self.bias) # nn/modules/linear.py:125 in forward
| | | | | | | +- GuardManager: source=L['self']._modules['layer2']._parameters['weight'], accessed_by=DictGetItemGuardAccessor(weight)
| | | | | | | | +- TENSOR_MATCH: check_tensor(L['self']._modules['layer2']._parameters['weight'], Parameter, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.float32, device=0, requires_grad=True, size=[1024, 1024], stride=[1024, 1]) # return F.linear(input, self.weight, self.bias) # nn/modules/linear.py:125 in forward
| | | | | | | | +- NO_TENSOR_ALIASING
| | | | | | | +- GuardManager: source=L['self']._modules['layer2']._parameters['bias'], accessed_by=DictGetItemGuardAccessor(bias)
| | | | | | | | +- TENSOR_MATCH: check_tensor(L['self']._modules['layer2']._parameters['bias'], Parameter, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.float32, device=0, requires_grad=True, size=[1024], stride=[1]) # return F.linear(input, self.weight, self.bias) # nn/modules/linear.py:125 in forward
| | | | | | | | +- NO_TENSOR_ALIASING
| | | +- GuardManager: source=L['self'].num_iter, accessed_by=DictGetItemGuardAccessor(num_iter)
| | | | +- TYPE_MATCH: ___check_type_id(L['self'].num_iter, 7644512) # self.num_iter += 1 # tmp4.py:33 in forward
| | | +- GuardManager: source=L['self']._parameters, accessed_by=DictGetItemGuardAccessor(_parameters)
| | | | +- DICT_LENGTH: not L['self']._parameters # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| +- GuardManager: source=G, accessed_by=GlobalsGuardAccessor
| | +- GuardManager: source=G['__builtins_dict___2'], accessed_by=DictGetItemGuardAccessor(__builtins_dict___2)
| | | +- GuardManager: source=G['__builtins_dict___2']['super'], accessed_by=DictGetItemGuardAccessor(super)
| | | | +- ID_MATCH: ___check_obj_id(G['__builtins_dict___2']['super'], 7614144) # super().__setattr__(name, value) # nn/modules/module.py:2032 in __setattr__
| | | +- GuardManager: source=G['__builtins_dict___2']['isinstance'], accessed_by=DictGetItemGuardAccessor(isinstance)
| | | | +- ID_MATCH: ___check_obj_id(G['__builtins_dict___2']['isinstance'], 140428831199440) # if isinstance(value, Parameter): # nn/modules/module.py:1945 in __setattr__
| | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_linear'], accessed_by=DictGetItemGuardAccessor(__import_torch_dot_nn_dot_modules_dot_linear)
| | | +- ID_MATCH: ___check_obj_id(G['__import_torch_dot_nn_dot_modules_dot_linear'], 140423965322720) # return F.linear(input, self.weight, self.bias) # nn/modules/linear.py:125 in forward
| | | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_linear'].F, accessed_by=GetAttrGuardAccessor(F)
| | | | +- ID_MATCH: ___check_obj_id(G['__import_torch_dot_nn_dot_modules_dot_linear'].F, 140423965324880) # return F.linear(input, self.weight, self.bias) # nn/modules/linear.py:125 in forward
| | | | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_linear'].F.linear, accessed_by=GetAttrGuardAccessor(linear)
| | | | | +- ID_MATCH: ___check_obj_id(G['__import_torch_dot_nn_dot_modules_dot_linear'].F.linear, 140426763478864) # return F.linear(input, self.weight, self.bias) # nn/modules/linear.py:125 in forward
| | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_module'], accessed_by=DictGetItemGuardAccessor(__import_torch_dot_nn_dot_modules_dot_module)
| | | +- ID_MATCH: ___check_obj_id(G['__import_torch_dot_nn_dot_modules_dot_module'], 140426663046384) # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_module'].Buffer, accessed_by=GetAttrGuardAccessor(Buffer)
| | | | +- ID_MATCH: ___check_obj_id(G['__import_torch_dot_nn_dot_modules_dot_module'].Buffer, 79489936) # if isinstance(value, Buffer) or buffers is not None and name in buffers: # nn/modules/module.py:1995 in __setattr__
| | | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_module'].Module, accessed_by=GetAttrGuardAccessor(Module)
| | | | +- ID_MATCH: ___check_obj_id(G['__import_torch_dot_nn_dot_modules_dot_module'].Module, 80429408) # if isinstance(value, Module): # nn/modules/module.py:1966 in __setattr__
| | | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_module'].Parameter, accessed_by=GetAttrGuardAccessor(Parameter)
| | | | +- ID_MATCH: ___check_obj_id(G['__import_torch_dot_nn_dot_modules_dot_module'].Parameter, 79484096) # if isinstance(value, Parameter): # nn/modules/module.py:1945 in __setattr__
| | | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_module']._global_forward_hooks, accessed_by=GetAttrGuardAccessor(_global_forward_hooks)
| | | | +- DICT_LENGTH: not G['__import_torch_dot_nn_dot_modules_dot_module']._global_forward_hooks # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_module']._global_backward_hooks, accessed_by=GetAttrGuardAccessor(_global_backward_hooks)
| | | | +- DICT_LENGTH: not G['__import_torch_dot_nn_dot_modules_dot_module']._global_backward_hooks # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_module']._global_forward_pre_hooks, accessed_by=GetAttrGuardAccessor(_global_forward_pre_hooks)
| | | | +- DICT_LENGTH: not G['__import_torch_dot_nn_dot_modules_dot_module']._global_forward_pre_hooks # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
| | | +- GuardManager: source=G['__import_torch_dot_nn_dot_modules_dot_module']._global_backward_pre_hooks, accessed_by=GetAttrGuardAccessor(_global_backward_pre_hooks)
| | | | +- DICT_LENGTH: not G['__import_torch_dot_nn_dot_modules_dot_module']._global_backward_pre_hooks # x = self.layer2(x + self.layer1.bias) # tmp4.py:31 in forward
```
It appears that we are properly emitting guards on the `requires_grad`-ness of the model params in both frames (`0/0` and `0/1`), but we are incorrectly dispatching to `0/1` repeatedly.
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames | high priority,triaged,oncall: pt2,module: dynamo,module: guards | low | Critical |
2,629,209,843 | godot | Translations are not updated when dynamically loading PCK | ### Tested versions
Reproducible in 4.4.dev [ef8d981267702de38ffc24136f9d823d31781c60]
### System information
Windows 11 (10.0.22631)
### Issue description
When including updated translation files as part of an exported PCK file that's meant to be dynamically loaded at runtime, you currently have to manually load each `*.translation` file after having loaded the PCK in order for the updated translations to actually have any effect.
The expected behavior would be for the updated translations to be loaded as part of loading the PCK.
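A minimal GDScript sketch of the current workaround (the paths and the function name are hypothetical; `ProjectSettings.load_resource_pack` and `TranslationServer.add_translation` are the relevant APIs):

```gdscript
# Workaround sketch: after loading the PCK, re-register each updated
# translation by hand. The file paths here are hypothetical.
func _load_patch() -> void:
    if ProjectSettings.load_resource_pack("res://patch.pck"):
        # Without these manual loads, TranslationServer keeps the old strings.
        var updated: Translation = load("res://translations/strings.en.translation")
        TranslationServer.add_translation(updated)
```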
### Steps to reproduce
(The MRP includes a `patch.pck` file that was exported using the patch system introduced in #97118, where the translation with key `HELLO` was changed from "Hello" to "Howdy" before exporting.)
1. Open the MRP.
2. Run the main scene.
3. Note how the label says "Hello".
4. Uncomment the `ResourceLoader.load` line in `main.gd`.
5. Run the main scene again.
6. Note how the label now says "Howdy".
### Minimal reproduction project (MRP)
[localization-patching.zip](https://github.com/user-attachments/files/17600771/localization-patching.zip) | discussion,documentation,topic:gui | low | Minor |
2,629,211,129 | deno | Properly support verbatim module syntax | It's not hooked up properly. | bug | low | Minor |
2,629,218,956 | deno | Source maps don't work with maybe cjs files | https://github.com/denoland/deno/pull/26558#discussion_r1825375077 | bug | low | Minor |
2,629,245,485 | flutter | Camera plugin: Custom codecs & container format support | ### Document Link
https://flutter.dev/go/camera-custom-codecs
### What problem are you solving?
Camera plugin users don't have control over the output codecs (both video and audio) or the container output format. This prevents them from getting a video tailored to their needs, since they are instead limited to the choice made by the system.
How can this be a problem:
- Maybe you need your files in a specific codec / container for further processing
- Maybe your app is used on a lot of older devices that don't support certain codecs or struggle with fast decoding due to older hardware
- Maybe you are just like me and don't want to pay license fees for the new MPEG versions if you want to add additional metadata later on
- Maybe you need your video or audio file to use a certain codec to get a certain quality
- Maybe your users expect to have the option to choose the codec in which they record
| p: camera,package,c: proposal,team-ecosystem,P2,design doc,triaged-ecosystem,:scroll: | low | Minor |
2,629,307,392 | next.js | Build error with dynamicIO enabled | ### Link to the code that reproduces this issue
https://github.com/revnelson/next-dynamicio-debug
### To Reproduce
Build from repo
### Current vs. Expected behavior
When building a PayloadCMS starter with `dynamicIO` enabled, the following error is produced:
```console
Error occurred prerendering page "/admin/[[...segments]]". Read more: https://nextjs.org/docs/messages/prerender-error
Error: Route "/admin/[[...segments]]" has a `generateMetadata` that depends on Request data (`cookies()`, etc...) or external data (`fetch(...)`, etc...) but the rest of the route was static or only used cached data (`"use cache"`). If you expected this route to be prerenderable update your `generateMetadata` to not use Request data and only use cached external data. Otherwise, add `await connection()` somewhere within this route to indicate explicitly it should not be prerendered.
Export encountered an error on /(payload)/admin/[[...segments]]/page: /admin/[[...segments]], exiting the build.
```
I have opened an [issue](https://github.com/payloadcms/payload/issues/8897) with Payload as they may need to update the core to handle the new dynamicIO paradigm. It was stated in that issue, however, that dynamic APIs (`headers()`) are being used that should exclude the route from pre-rendering.
At a bare minimum, the error produced is unhelpful as you can see in the repo I created for this issue. I have added the suggested `await connection()` at the top of the default exported component but still get the same error.
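For reference, the pattern I tried looks roughly like this (a simplified sketch of the repo code with the markup elided; `connection` comes from `next/server` in Next 15):

```tsx
import { connection } from 'next/server'

export default async function Page() {
  // Suggested by the error message: explicitly opt this route out of
  // prerendering. In my repro this did not change the build error.
  await connection()
  return null
}
```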
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.0.0: Tue Sep 24 23:39:07 PDT 2024; root:xnu-11215.1.12~1/RELEASE_ARM64_T6000
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 22.9.0
npm: 10.8.3
Yarn: 1.22.19
pnpm: 9.12.3
Relevant Packages:
next: 15.0.3-canary.3 // Latest available version is detected (15.0.3-canary.3).
eslint-config-next: 15.0.0
react: 19.0.0-rc-603e6108-20241029
react-dom: 19.0.0-rc-603e6108-20241029
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
dynamicIO
### Which stage(s) are affected? (Select all that apply)
next build (local)
### Additional context
_No response_ | linear: next,dynamicIO | low | Critical |
2,629,323,353 | vscode | inline chat a11y diff view causes scroll down in document | 1. set `"inlineChat.accessibleDiffView": "on"`
2. request a change at the top of a file
3. switch tabs and switch back
Result: the view is scrolled down, and it also takes more space than needed
https://github.com/user-attachments/assets/b9737d9f-690f-482e-bf8a-05851559a69d | bug,inline-chat | low | Minor |
2,629,327,655 | vscode | Chat: Allow a way for participants to direct requests to other participants |
Currently there is intent detection built in so that if a request is not directed at any particular participant, the intent detection will try to direct it to the appropriate participant. However, it would be very helpful if participants could tell VSCode "I'm not the right participant to answer this query", allowing it to perform intent detection and assign to a different participant. I think this approach--where a participant just says it can't answer a query--mitigates some of the security risk of allowing participants to call each other.
For example, `@azure` gets asked a lot of workspace-related questions that would be much better answered by the `@workspace` participant. If we could redirect the questions to `@workspace` that would be helpful to end users.
/cc @isidorn | feature-request,api,chat | low | Major |
2,629,386,250 | flutter | [Platform Views][accessibility] Android TalkBack cannot focus webview content after disabling and re-enabling it. | ## Summary
The webview content cannot be focused when the talkback is on.
## Steps to reproduce
1. Start the minimal app with the webview and turn on the talkback.
2. Observe that swiping right can move the focus inside the webview.
3. Turn off talkback and turn on again.
4. Observe that swiping right cannot move the focus inside the webview. Manually touch the webview also cannot move the focus into the webview.
See the attached recording for the above steps:
https://github.com/user-attachments/assets/53f4fbd5-f359-41e2-97b8-d9b3bdfa689b
## Minimal repro code
I ran the following commands on 2024-11-01 and modified the generated code:
```sh
flutter create testapp
flutter pub add webview_flutter
```
```dart
import 'package:flutter/material.dart';
import 'package:webview_flutter/webview_flutter.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return const MaterialApp(
      title: 'Flutter Demo',
      home: MyHomePage(title: 'Flutter Demo Home Page'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  const MyHomePage({super.key, required this.title});

  final String title;

  @override
  State<MyHomePage> createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  final _controller = WebViewController()
    ..setJavaScriptMode(JavaScriptMode.unrestricted)
    ..loadRequest(Uri.parse('https://flutter.dev'));

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        backgroundColor: Theme.of(context).colorScheme.inversePrimary,
        title: Text(widget.title),
      ),
      body: WebViewWidget(controller: _controller),
    );
  }
}
```
## Versions
**Flutter**
`[โ] Flutter (Channel stable, 3.24.4, on macOS 14.7 23H124 darwin-arm64, locale en)`
**dependencies**
`webview_flutter: ^4.10.0`
**plugin**
`id "com.android.application" version "8.4.1" apply false`
**gradle-wrapper.properties**
`distributionUrl=https\://services.gradle.org/distributions/gradle-8.10-all.zip`
**Android module**
`Pixel 6, Android 14`
## Impact
It is affecting a Google-internal project release (b/372690913).
| platform-android,a: accessibility,a: platform-views,e: OS-version specific,has reproducible steps,P2,team-android,triaged-android,found in release: 3.24,found in release: 3.27 | low | Major |
2,629,408,910 | PowerToys | Fancyzones can't remember window based on title | ### Description of the new feature / enhancement
Chrome has a feature to save a page so that it looks like an independent app: e.g., go to youtube.com, then Chrome menu -> Cast, save and share -> Install page as app. This creates an icon that, for all intents and purposes, behaves like a standalone Windows app...
except to FancyZones. FancyZones can remember which zone an app opened in and open it in the same zone next time. The trouble is, it can't tell the difference between a regular Chrome window and, say, YouTube installed as a Chrome app.
Now I realise this is tricky because at heart they are all Chrome. But normal Chrome windows have a window title that ends with " - Google Chrome", whereas pages installed as apps don't.
It's a pity that the Edge browser doesn't have this feature. You can pin a page to the taskbar, but that just opens the page as a tab in a regular browser rather than making it behave like an app. I think that's a mistake: in a world where Windows is app-centric and yet the internet is site-centric, Chrome really ties the two together with how it does this. So I'm guessing that Microsoft, having got lost in the weeds on this function, won't be keen to special-case it. However...
What we need is for FancyZones to be able to remember window location based on title. Ideally it would special-case Google Chrome titles, so that windows whose titles end with " - Google Chrome" are all treated as the same app, while Chrome windows without that suffix are each considered a different app. But simply keying off the title in general would be better than nothing.
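To illustrate the proposed matching rule (illustrative logic only, not PowerToys code): titles ending in " - Google Chrome" collapse into one bucket, and every other title is remembered individually.

```python
CHROME_SUFFIX = " - Google Chrome"

def zone_key(window_title: str) -> str:
    # Regular Chrome windows all share one key; installed web apps
    # (whose titles lack the suffix) are each remembered by full title.
    if window_title.endswith(CHROME_SUFFIX):
        return "chrome"
    return window_title
```

Under this rule all regular Chrome windows restore to one remembered zone, while each installed page-app keeps its own.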
### Scenario when this would be used?
90% of what I do on my computer is in Google Chrome pseudo-apps, whether it be YouTube, Google Contacts, Google News, Google Translate, Reddit: whatever sites I like to visit, I make into apps. Windows is an app-centric operating system, and FancyZones rightly remembers window location based on what app it is. The inability to distinguish the YouTube app from the Google Translate app (or any other) means it is completely broken for what I do.
### Supporting information
https://support.google.com/chrome/answer/9658361?hl=en&co=GENIE.Platform%3DDesktop | Needs-Triage | low | Critical |
2,629,409,851 | angular | Angular's npm README files should contain more useful information | Today, our npm README files are just placeholders without much information:
```markdown
The sources for this package are in the main [Angular](https://github.com/angular/angular) repo. Please file issues and pull requests against that repo.
Usage information and reference details can be found in [Angular documentation](https://angular.dev/overview).
License: MIT
```
The npm README landing page for a package is an important source of information for developers. We should expand these pages to give an overview of each specific package. This applies to all of the packages we publish to npm from this repo; each package should have its own summary.
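As a hypothetical illustration (the wording here is invented, not proposed text), a per-package summary might look like:

```markdown
# @angular/core

The core runtime of the Angular framework: the component model, dependency
injection, and change detection.

- Documentation: https://angular.dev/overview
- Sources and issue tracker: https://github.com/angular/angular

License: MIT
```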
e.g. https://www.npmjs.com/package/@angular/core | help wanted,good first issue,P3,area: docs | low | Minor |
2,629,421,073 | rust | Multiple alignments on functions (`#![feature(fn_align)]`) | This code specifies two alignments, but applies none. A single `align(256)` does work
```rust
#![feature(fn_align)]
#[repr(align(256), align(256))]
pub fn main() {
let ptr = main as *const u8;
println!("{ptr:?}");
assert_eq!(ptr.align_offset(256), 0);
}
```
See here: https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=f841ae6318f0f7abdab285ea9ec641ab
CC: https://github.com/rust-lang/rust/issues/82232
The culprit is this line here matching on slices of length 1: https://github.com/rust-lang/rust/blob/145f9cf95de1fbde3fa11e98461310e0373253e6/compiler/rustc_codegen_ssa/src/codegen_attrs.rs#L418-L420
It's a one line fix, but honestly this is trivially resolved with https://github.com/rust-lang/compiler-team/issues/796 which I'm working on. I'll make it a separate PR at some point, but I'll assign myself since it makes sure changes conflict a little less :)
@rustbot claim
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"jdonszelmann"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | T-compiler,C-bug,A-repr,F-fn_align,A-align | low | Critical |
2,629,466,375 | kubernetes | Anonymous volumes not counted against pod ephemeral-storage limits | ### What happened?
Hi, not sure if this is the correct place to report, but we're seeing an issue between K8s and containerd with tracking disk usage against ephemeral-storage limits.
We have K8s (AWS EKS) v1.26.15 with containerd 1.7.22 running on Amazon Linux 2 with cgroups v1. If you apply the K8s manifest below, you should get a pod that writes around 1 GiB of disk either into an anonymous volume (coming from the VOLUME instruction in the Dockerfile that created the image) or, by switching to an alternate value of `DEST_DIR`, into the container root filesystem or a named volume.
For the container root filesystem, I see the usage briefly appear in `crictl stats` before the pod is evicted for exceeding its `ephemeral-storage` limit. For the named volume, `crictl stats` stays at zero but the pod is similarly killed. However, for the anonymous volume case, `crictl stats` similarly stays at zero yet the pod remains running, as presumably K8s is not counting the usage towards the total.
While I can see both volumes in `crictl inspect` under status/mounts I'm not sure if containerd/cri or the kubelet is meant to be reporting the disk usage of volumes. `kubectl get --raw /api/v1/nodes/$MY_NODE/proxy/stats` only shows the named volume, not anonymous ones.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-es-limit
  namespace: default
spec:
  nodeSelector:
    kubernetes.io/os: linux
    kubernetes.io/arch: amd64
  terminationGracePeriodSeconds: 1
  containers:
    - name: test
      image: postgres
      env:
        - name: DEST_DIR
          value: /var/lib/postgresql/data # Anon volume
          # value: /var/lib/misc # Container root fs
          # value: /data # Named volume
        - name: BIG_FILE
          value: /usr/lib/postgresql/17/bin/postgres # 9.6 MiB
      command:
        - bash
        - -c
        - "for RUN in {1..100}; do cp $BIG_FILE $DEST_DIR/dummy.$RUN ; done ; du -sh $DEST_DIR ; sleep 5000"
      resources:
        limits:
          cpu: 100m
          ephemeral-storage: 200Mi # Script uses almost 1 GiB
          memory: 128Mi
        requests:
          cpu: 100m
          ephemeral-storage: 200Mi
          memory: 128Mi
      volumeMounts:
        - name: named-storage
          mountPath: /data
  volumes:
    - name: named-storage
      emptyDir: {}
```
### What did you expect to happen?
For the pod described by the above YAML to be evicted for exceeding its ephemeral-storage limits.
### How can we reproduce it (as minimally and precisely as possible)?
See included pod YAML
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
v1.26.15
</details>
### Cloud provider
<details>
AWS - EKS v1.26
</details>
### OS version
<details>
```console
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"
SUPPORT_END="2025-06-30"
Linux HOSTNAME 5.10.226-214.880.amzn2.x86_64 #1 SMP Tue Oct 8 16:18:15 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
</details>
### Install tools
_No response_
### Container runtime (CRI) and version (if applicable)
<details>
containerd 1.7.22
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
_No response_ | kind/bug,needs-sig,needs-triage | low | Major |
2,629,530,360 | three.js | WebGPURenderer: wrong update of compressed textures | ### Description
Updating compressed textures with the contents of others of different formats by copying them sometimes fails and produces visual errors, but the same functionality works fine in WebGL. This functionality is often used when creating an empty texture while the asset loads, and then copying the loaded data into it once it's available.
I created a couple of fiddles with WebGPU and WebGL to compare. WebGPU only updates 3 times instead of 4 and displays a wrong texture.
### Reproduction steps
(see fiddles)
- Create an empty compressed texture
- Update it by copying others
### Code
See fiddles
### Live example
* [WebGPU fiddle](https://jsfiddle.net/2jm0x617/2/)
* [WebGL fiddle](https://jsfiddle.net/fezc8skx/)
### Screenshots
_No response_
### Version
r170
### Device
Desktop
### Browser
Chrome
### OS
Windows | WebGPU,Needs Investigation | low | Critical |
2,629,581,077 | react | [DevTools] It is incredibly difficult to performance profile React | React version: 18.2.0
1. Have a decently complex application which has a deeply nested or possibly recursive component.
2. Try performace profiling the component in chrome dev tools in the performance tab
## The current behavior
Since Fiber splits rendering into bite-sized chunks, the profile is dominated by React overhead.
<img width="536" alt="Screenshot 2024-11-01 at 10 49 20โฏAM" src="https://github.com/user-attachments/assets/08399f72-6aed-497c-9998-41d0571902dc">
Furthermore, it's incredibly difficult to even see which code is responsible for the bulk of the time, since the work has been split along the time axis. This is great for production, but it makes profiling in dev difficult.
## The expected behavior
There should be an option to change the behavior of the scheduler to not time-slice in debug builds so that tools like performance profiling will work well.
Related: https://github.com/facebook/react/issues/25415, which stops the Chrome DevTools profiler from working | Status: Unconfirmed | medium | Critical |
2,629,588,763 | kubernetes | [Compatibility Version] alphas with emulated version | Per the compatibility version KEP, alphas are outside the scope of compatibility version.
https://github.com/kubernetes/enhancements/blob/master/keps/sig-architecture/4330-compatibility-versions/README.md#non-goals
> Support --emulation-version for Alpha features. Alpha feature are not designed to be upgradable, so we will not allow alpha features to be enabled when --emulation-version is set.
We current don't have proper safeguards for this.
- [ ] Alpha features should not be permitted to be enabled with emulated version
- [ ] Alpha APIs should not be permitted to be enabled with emulated version
/cc @aaron-prindle @jpbetz
/triage accepted
/sig architecture
/sig api-machinery | sig/api-machinery,sig/architecture,triage/accepted | low | Minor |
2,629,589,745 | pytorch | Mutating custom ops slower than non-mutating custom ops. | ### 🐛 Describe the bug
It appears that mutating custom operators are slower than non-mutating operators. Operators with more arguments seem to be affected more.
```
import torch

@torch.library.custom_op("foo::bar2", mutates_args=())
def bar2(a: torch.Tensor,
         b: torch.Tensor
         ) -> torch.Tensor:
    return b.clone()

@torch.library.custom_op("foo::baz2", mutates_args=(["a"]))
def baz2(a: torch.Tensor,
         b: torch.Tensor,
         ) -> torch.Tensor:
    return b.clone()

@torch.library.custom_op("foo::bar", mutates_args=())
def bar(a: torch.Tensor,
        b: torch.Tensor,
        c: torch.Tensor,
        d: torch.Tensor,
        e: torch.Tensor,
        f: torch.Tensor,
        g: torch.Tensor,
        h: torch.Tensor,
        i: torch.Tensor,
        j: torch.Tensor,
        k: torch.Tensor,
        l: torch.Tensor,
        m: torch.Tensor,
        n: torch.Tensor) -> torch.Tensor:
    return b.clone()

@torch.library.custom_op("foo::baz", mutates_args=(["a"]))
def baz(a: torch.Tensor,
        b: torch.Tensor,
        c: torch.Tensor,
        d: torch.Tensor,
        e: torch.Tensor,
        f: torch.Tensor,
        g: torch.Tensor,
        h: torch.Tensor,
        i: torch.Tensor,
        j: torch.Tensor,
        k: torch.Tensor,
        l: torch.Tensor,
        m: torch.Tensor,
        n: torch.Tensor) -> torch.Tensor:
    return b.clone()

a = torch.rand([128,128], device="cuda")
b = torch.rand([128,128], device="cuda")
c = torch.rand([128,128], device="cuda")
d = torch.rand([128,128], device="cuda")
e = torch.rand([128,128], device="cuda")
f = torch.rand([128,128], device="cuda")
g = torch.rand([128,128], device="cuda")
h = torch.rand([128,128], device="cuda")
i = torch.rand([128,128], device="cuda")
j = torch.rand([128,128], device="cuda")
k = torch.rand([128,128], device="cuda")
l = torch.rand([128,128], device="cuda")
m = torch.rand([128,128], device="cuda")
n = torch.rand([128,128], device="cuda")

def test():
    from triton.testing import do_bench
    iter = 1000

    def mutate2():
        for z in range(iter):
            o = torch.ops.foo.baz2(a, b)

    def no_mutate2():
        for z in range(iter):
            o = torch.ops.foo.bar2(a, b)

    def mutate():
        for z in range(iter):
            o = torch.ops.foo.baz(a, b, c, d, e, f, g, h, i, j, k, l, m, n)

    def no_mutate():
        for z in range(iter):
            o = torch.ops.foo.bar(a, b, c, d, e, f, g, h, i, j, k, l, m, n)

    mutate2_time = do_bench(mutate2)
    no_mutate2_time = do_bench(no_mutate2)
    mutate_time = do_bench(mutate)
    no_mutate_time = do_bench(no_mutate)
    print(f"mutate2 = {mutate2_time}")
    print(f"no_mutate2 = {no_mutate2_time}")
    print(f"mutate = {mutate_time}")
    print(f"no_mutate = {no_mutate_time}")

test()
```
I get the following results when I run the script:
```
mutate2 = 25.09382438659668
no_mutate2 = 16.89522361755371
mutate = 90.25625610351562
no_mutate = 26.303680419921875
```
### Versions
Collecting environment information...
PyTorch version: 2.5.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-35-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 555.42.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flashinfer==0.1.2+cu121torch2.4
[pip3] mypy==1.11.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.14.1
[pip3] onnxruntime==1.18.1
[pip3] torch==2.5.0
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
[conda] Could not collect
cc @ezyang @chauhang @penguinwu @zou3519 @bdhirsh @yf225 | triaged,module: custom-operators,oncall: pt2,module: pt2-dispatcher,vllm-compile | low | Critical |
2,629,595,181 | pytorch | [ONNX] Set the is_in_onnx_export flag in dynamo exporter | The dynamo exporter currently does not set the is_in_onnx_export flag during export. We should set the flag so users can selectively enable logic to be exported.
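A sketch of the pattern this flag enables (the stand-in function below substitutes for `torch.onnx.is_in_onnx_export()` so the control flow runs without torch installed; the branch bodies are hypothetical):

```python
# Export-conditional logic a user might write inside a model's forward().
# In real code the check would be torch.onnx.is_in_onnx_export(); a
# stand-in is defined here so the example runs without torch installed.
def is_in_onnx_export() -> bool:
    """Stand-in for torch.onnx.is_in_onnx_export()."""
    return False

def forward(x: float) -> float:
    if is_in_onnx_export():
        # Export-friendly branch: use ops the exporter can handle.
        return x + x
    # Eager-only branch.
    return 2.0 * x
```

During export the real flag would be true and the first branch would be traced; in eager mode the second branch runs.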
cc @titaiwangms | module: onnx,triaged | low | Minor |
2,629,599,397 | pytorch | [ONNX] Document the registration API | Improve documentation. | module: onnx,triaged | low | Minor |
2,629,601,156 | godot | v4.4 Sound quickly regresses on the Web: freezes, noises, crackles | ### Tested versions
Reproducible in: v4.4.dev3.official [f4af8201b], v4.4.dev2, v4.4.dev1
Not reproducible in: v4.3.stable.official [77dcf97d8]
### System information
Godot v4.4.dev3 - Windows 10.0.19045 - Multi-window, 1 monitor - OpenGL 3 (Compatibility) - GeForce GT 740M - Intel(R) Core(TM) i5-3317U CPU @ 1.70GHz (4 threads)
### Issue description
On the Web over time, the sound begins to freeze, break and crackle terribly. The heavier the scene and the weaker the hardware the game runs on, the faster it happens. For example, the sounds in the game I created start to freeze after a minute on a modern computer, almost immediately on a weak laptop, and immediately on a phone. Also, in remote debugging this degradation of sound occurs more slowly than in an exported project.
I tried different solutions, but none of them worked. Because of this, I had to move the whole project to version 4.3, which solved the problem.
So at the moment version 4.4 is not playable on web platforms. At least for me.
For testing I created separate projects for 4.4 and 4.3. with a minimum number of objects just so that there is at least some load on the engine.
### Steps to reproduce
Run the project in remote debugging on the Web.
Wait a few minutes (depending on your hardware). My sound began to degrade sharply at the 3rd minute.
If you want, try to run the same project on version 4.3. You won't find any such problems, no matter how much you play.
### Minimal reproduction project (MRP)
[audiotest-4.4.zip](https://github.com/user-attachments/files/17602861/audiotest-4.4.zip)
[audiotest-4.3.zip](https://github.com/user-attachments/files/17602859/audiotest-4.3.zip)
| bug,platform:web,confirmed,topic:audio,regression | low | Critical |
2,629,611,133 | langchain | AzureAISearch Retriever only returns up to 50 docs | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
To reproduce the issue, create an Azure AI Search index and upload more than 50 documents that share a search field. This could be the source in the metadata, for example the same file name on all chunks. Then instantiate the retriever:
```
retriever = AzureAISearchRetriever(
    service_name=AZURE_SEARCH_ENDPOINT,
    index_name=AZURE_SEARCH_INDEX_NAME,
    api_key=AZURE_SEARCH_KEY,
    content_key="content",
    top_k=None,
)
```
and invoke a query like:
`retriever.invoke(doc.metadata["source"])`
setting `top_k` to None should return all the results according to the documentation:
> top_k: Optional[int] = None
"""Number of results to retrieve. Set to None to retrieve all results."""
But because Azure defaults to 50 results per query, the current implementation will always return at most 50.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Azure AI Search service doesn't return all matches when a query is submitted using the search field as it is documented on their [website](https://learn.microsoft.com/en-us/azure/search/search-pagination-page-layout#paging-results):
> "By default, the search engine returns up to the first 50 matches. The top 50 are determined by search score, assuming the query is full text search or semantic."
From the same documentation we can understand that we need to implement pagination if we want to retrieve all the documents when we query the service:
>"To control the paging of all documents returned in a result set, add $top and $skip parameters to the GET query request, or top and skip to the POST query request. The following list explains the logic.
>Return the first set of 15 matching documents plus a count of total matches: GET /indexes/<INDEX-NAME>/docs?search=<QUERY STRING>&$top=15&$skip=0&$count=true
>Return the second set, skipping the first 15 to get the next 15: $top=15&$skip=15. Repeat for the third set of 15: $top=15&$skip=30"
Looking at the existing code, no pagination is implemented, so this retriever returns at most 50 results no matter how many records are in the database. This behavior is not fully documented and can cause unexpected results in scenarios where the user intended to retrieve all documents. This is clear from the function that builds the API query:
```
def _build_search_url(self, query: str) -> str:
    url_suffix = get_from_env("", "AZURE_AI_SEARCH_URL_SUFFIX", DEFAULT_URL_SUFFIX)
    if url_suffix in self.service_name and "https://" in self.service_name:
        base_url = f"{self.service_name}/"
    elif url_suffix in self.service_name and "https://" not in self.service_name:
        base_url = f"https://{self.service_name}/"
    elif url_suffix not in self.service_name and "https://" in self.service_name:
        base_url = f"{self.service_name}.{url_suffix}/"
    elif (
        url_suffix not in self.service_name and "https://" not in self.service_name
    ):
        base_url = f"https://{self.service_name}.{url_suffix}/"
    else:
        # pass to Azure to throw a specific error
        base_url = self.service_name
    endpoint_path = f"indexes/{self.index_name}/docs?api-version={self.api_version}"
    top_param = f"&$top={self.top_k}" if self.top_k else ""
    filter_param = f"&$filter={self.filter}" if self.filter else ""
    return base_url + endpoint_path + f"&search={query}" + top_param + filter_param
```
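One way to honor `top_k=None` would be to page through the matches with `$top`/`$skip`, as the Azure docs describe. Below is a minimal, hypothetical sketch of the paging loop (not the actual LangChain implementation); the HTTP call is abstracted into a `search_fn(top, skip)` callable that would append `&$top={top}&$skip={skip}` to the URL built by `_build_search_url` and return the `"value"` list from the response:

```python
def fetch_all_results(search_fn, page_size=50):
    """Collect every match by paging with $top/$skip.

    search_fn(top, skip) is assumed to perform the GET request with
    `&$top={top}&$skip={skip}` appended and return the list of documents
    from the response's "value" field.
    """
    results, skip = [], 0
    while True:
        page = search_fn(page_size, skip)
        results.extend(page)
        if len(page) < page_size:  # a short page means we reached the end
            break
        skip += page_size
    return results
```

Note that, per the same Azure documentation, `$skip` is capped at 100,000 documents; going beyond that requires range-based pagination over a sortable field instead.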
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Wed Sep 11 18:02:00 EDT 2024
> Python Version: 3.11.9 (main, Aug 26 2024, 10:40:41) [GCC 8.5.0 20210514 (Red Hat 8.5.0-22)]
Package Information
-------------------
> langchain_core: 0.2.33
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.101
> langchain_cli: 0.0.29
> langchain_openai: 0.1.22
> langchain_text_splitters: 0.2.2
> langserve: 0.2.2
Optional packages not installed
-------------------------------
> langgraph
Other Dependencies
------------------
> aiohttp: 3.9.5
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> fastapi: 0.110.0
> gitpython: 3.1.43
> httpx: 0.27.0
> jsonpatch: 1.33
> langserve[all]: Installed. No version info available.
> libcst: 1.4.0
> numpy: 1.26.4
> openai: 1.41.0
> orjson: 3.10.5
> packaging: 23.2
> pydantic: 2.6.2
> pyproject-toml: 0.0.10
> PyYAML: 5.3.1
> requests: 2.32.3
> SQLAlchemy: 2.0.27
> sse-starlette: 1.8.2
> tenacity: 8.4.1
> tiktoken: 0.7.0
> tomlkit: 0.12.5
> typer[all]: Installed. No version info available.
> typing-extensions: 4.12.2
> uvicorn: 0.23.2 | ๐ค:bug | low | Critical |
2,629,615,012 | pytorch | Custom operators registered via decorator slower than ops registered via `torch.Library.{define, impl}` | ### ๐ Describe the bug
Custom operators registered via `torch.library.custom_op` seem to be much slower than ops registered via `torch.Library.define` + `torch.Library.impl`
```
import torch

@torch.library.custom_op("foo::bar", mutates_args=())
def bar(a: torch.Tensor,
        b: torch.Tensor,
        c: torch.Tensor,
        d: torch.Tensor,
        e: torch.Tensor,
        f: torch.Tensor,
        g: torch.Tensor,
        h: torch.Tensor,
        i: torch.Tensor,
        j: torch.Tensor,
        k: torch.Tensor,
        l: torch.Tensor,
        m: torch.Tensor,
        n: torch.Tensor) -> torch.Tensor:
    return b.clone()

@torch.library.custom_op("foo::baz", mutates_args=(["a"]))
def baz(a: torch.Tensor,
        b: torch.Tensor,
        c: torch.Tensor,
        d: torch.Tensor,
        e: torch.Tensor,
        f: torch.Tensor,
        g: torch.Tensor,
        h: torch.Tensor,
        i: torch.Tensor,
        j: torch.Tensor,
        k: torch.Tensor,
        l: torch.Tensor,
        m: torch.Tensor,
        n: torch.Tensor) -> torch.Tensor:
    return b.clone()

def barbaz(a: torch.Tensor,
           b: torch.Tensor,
           c: torch.Tensor,
           d: torch.Tensor,
           e: torch.Tensor,
           f: torch.Tensor,
           g: torch.Tensor,
           h: torch.Tensor,
           i: torch.Tensor,
           j: torch.Tensor,
           k: torch.Tensor,
           l: torch.Tensor,
           m: torch.Tensor,
           n: torch.Tensor) -> torch.Tensor:
    return b.clone()

foo_lib = torch.library.Library("foo", "FRAGMENT")

def direct_register_custom_op(
    op_name,
    op_func,
    mutates_args
):
    schema_str = torch.library.infer_schema(op_func, mutates_args=mutates_args)
    foo_lib.define(op_name + schema_str)
    foo_lib.impl(op_name, op_func, "CUDA")

direct_register_custom_op("foo::bar_op", barbaz, mutates_args=())
direct_register_custom_op("foo::baz_op", barbaz, mutates_args=(["a"]))

a = torch.rand([128, 128], device="cuda")
b = torch.rand([128, 128], device="cuda")
c = torch.rand([128, 128], device="cuda")
d = torch.rand([128, 128], device="cuda")
e = torch.rand([128, 128], device="cuda")
f = torch.rand([128, 128], device="cuda")
g = torch.rand([128, 128], device="cuda")
h = torch.rand([128, 128], device="cuda")
i = torch.rand([128, 128], device="cuda")
j = torch.rand([128, 128], device="cuda")
k = torch.rand([128, 128], device="cuda")
l = torch.rand([128, 128], device="cuda")
m = torch.rand([128, 128], device="cuda")
n = torch.rand([128, 128], device="cuda")

def test():
    from triton.testing import do_bench

    iter = 1000

    def mutate():
        for z in range(iter):
            o = torch.ops.foo.baz(a, b, c, d, e, f, g, h, i, j, k, l, m, n)

    def no_mutate():
        for z in range(iter):
            o = torch.ops.foo.bar(a, b, c, d, e, f, g, h, i, j, k, l, m, n)

    def direct_mutate():
        for z in range(iter):
            o = torch.ops.foo.baz_op(a, b, c, d, e, f, g, h, i, j, k, l, m, n)

    def direct_no_mutate():
        for z in range(iter):
            o = torch.ops.foo.bar_op(a, b, c, d, e, f, g, h, i, j, k, l, m, n)

    mutate_time = do_bench(mutate)
    no_mutate_time = do_bench(no_mutate)
    direct_mutate_time = do_bench(direct_mutate)
    direct_no_mutate_time = do_bench(direct_no_mutate)

    print(f"mutate = {mutate_time}")
    print(f"no_mutate = {no_mutate_time}")
    print(f"direct_mutate = {direct_mutate_time}")
    print(f"direct_no_mutate = {direct_no_mutate_time}")

test()
```
Running the script gives me the following results:
```
mutate = 90.21110534667969
no_mutate = 25.86481285095215
direct_mutate = 6.907863140106201
direct_no_mutate = 6.97034215927124
```
### Versions
Collecting environment information...
PyTorch version: 2.5.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-35-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 555.42.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flashinfer==0.1.2+cu121torch2.4
[pip3] mypy==1.11.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.14.1
[pip3] onnxruntime==1.18.1
[pip3] torch==2.5.0
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
[conda] Could not collect
cc @ezyang @chauhang @penguinwu @zou3519 @bdhirsh @yf225 | triaged,module: custom-operators,oncall: pt2,module: pt2-dispatcher,vllm-compile | low | Critical |
2,629,634,128 | pytorch | Infinite recursion in `torch._inductor.ir.ExternKernel.__str__` | ### ๐ Describe the bug
There appears to be an instance of `torch._inductor.ir.ExternKernel` with a cycle in its data members, which causes infinite recursion in the `__str__` method.
torch/_inductor/ir.py: line 5049
```
def __str__(self) -> str:
    kernel_name = getattr(self, "python_kernel_name", None)
    lines = [
        f"python_kernel_name={kernel_name!r}",
    ]
    ###### The recursion happens here.
    lines += [
        f"{field.name}={getattr(self, field.name)}"
        for field in dataclasses.fields(self)
    ]
    lines.append(f"origin_node={self.origin_node!r}")
    return self.str_helper(lines)
```
Called from here (error.operator_str): torch/_inductor/graph.py: line 1005
```
log.info(
    "Creating implicit fallback for:\n%s",
    error.operator_str(target, args, kwargs),
)
```
I don't have simple reproduction steps, but I added some print statements to my local checkout and verified that `__str__` re-enters. The kernel that seems to trigger the problem is `torch.ops._c10d_functional.all_gather_into_tensor.default`, although I could not come up with a simple isolated test case using this function.
I've attached a fragment of the log w/added print statements.
[infinite.log](https://github.com/user-attachments/files/17603060/infinite.log)
Here's the hacked-up `__str__` function:
```
# NOTE: assumes a module-level counter initialized elsewhere as `depth = 0`
def __str__(self) -> str:
    kernel_name = getattr(self, "python_kernel_name", None)
    lines = [
        f"python_kernel_name={kernel_name!r}",
    ]
    global depth
    print(f"KERNEL {kernel_name!r} {depth}")
    depth = depth + 1
    lines += [
        f"{field.name}={getattr(self, field.name)}"
        for field in dataclasses.fields(self)
    ]
    lines.append(f"origin_node={self.origin_node!r}")
    depth = depth - 1
    print(f"DONE KERNEL {kernel_name!r} {depth}")
    return self.str_helper(lines)
```
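As for a fix, one generic mitigation for `__str__` on cyclic object graphs is a re-entrancy guard that emits a placeholder when the same object is reached again on the current stringification stack. A toy, self-contained sketch (not PyTorch code; `Node` is a stand-in for an IR node):

```python
import threading

_guard = threading.local()

class Node:
    """Toy stand-in for an IR node whose fields can form a cycle."""
    def __init__(self, name):
        self.name = name
        self.next = None  # may point back at an ancestor, forming a cycle

    def __str__(self):
        seen = getattr(_guard, "seen", None)
        if seen is None:
            seen = _guard.seen = set()
        if id(self) in seen:
            # Already stringifying this object higher up the stack:
            # emit a placeholder instead of recursing forever.
            return f"<{self.name} (cycle)>"
        seen.add(id(self))
        try:
            return f"{self.name}(next={self.next})"
        finally:
            seen.discard(id(self))

a, b = Node("a"), Node("b")
a.next, b.next = b, a  # a -> b -> a
print(a)  # prints: a(next=b(next=<a (cycle)>))
```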
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov
### Versions
Collecting environment information...
PyTorch version: 2.5.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-35-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 555.42.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flashinfer==0.1.2+cu121torch2.4
[pip3] mypy==1.11.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.14.1
[pip3] onnxruntime==1.18.1
[pip3] torch==2.5.0
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
[conda] Could not collect | high priority,triaged,oncall: pt2,module: inductor,vllm-compile | low | Critical |
2,629,655,075 | godot | [macOS] (some) keyboard input not working in floating editor window after a while | ### Tested versions
Reproducible in v4.4.dev3.official [f4af8201b]
### System information
Godot v4.4.dev3 - macOS 15.0.1 - Multi-window, 3 monitors - OpenGL 3 (Compatibility) - AMD Radeon Pro 5500M OpenGL Engine - Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz (16 threads)
### Issue description
I recently noticed this by using the floating editor window (which I usually don't use), but not 100% sure this is related:
After a while, the keyboard input stops honoring modifier keys. For example, on a French Mac keyboard (which I use), you get a `[` by using `OPTION+SHIFT+(`. Using that key combination (or similar ones) outputs nothing after a while (but a regular `(` continues to work).
Closing the floating editor and going back to the regular editor fixes the problem. So does re-detaching the editor (for a while).
It should be noted that I'm using several monitors, and the floating editor was on a different monitor from the main Godot window every time the issue happened.
### Steps to reproduce
Detach the code editor as a floating window.
Type characters that need modifier keys (may depend on the Locale being used)
Witness it produces nothing ... at some point (exact triggering condition unknown)
### Minimal reproduction project (MRP)
N/A | bug,platform:macos,topic:editor,topic:input | low | Minor |
2,629,665,416 | next.js | Circular Structure Error When passing complex objects with circular reference to another server component or function in Next 15 | ### Link to the code that reproduces this issue
https://github.com/webplantmedia/html-react-parser/tree/master/examples/nextjs
### To Reproduce
I can't seem to pass complex objects to other server components or functions without getting a circular structure error. Specifically, I'm using html-react-parser and manipulating certain elements to render custom JSX. It worked fine and without error in Next.js 14.
layout.tsx
```js
export const metadata = {
  title: 'Next.js',
  description: 'Generated by Next.js',
}

export default function RootLayout({
  children,
}: {
  children: React.ReactNode
}) {
  return (
    <html lang="en">
      <body>{children}</body>
    </html>
  )
}
```
page.tsx
```js
import parse, { Element } from 'html-react-parser';

type Props = {
  params: { slug: string };
};

export default async function Page({ params }: Props) {
  return (
    <main>
      <h1 className="title">
        {parse(
          `
          Welcome to <a href="https://nextjs.org">Next.js</a>
          and HTMLReactParser!
          `,
          {
            replace(domNode) {
              function test(node: any) {
                console.log(node);
              }
              test(domNode);
              if (domNode instanceof Element && domNode.name === 'a') {
                return (
                  <a href="https://nextjs.org" rel="noopener noreferrer">
                    Next.js
                  </a>
                );
              }
            },
          }
        )}
      </h1>
    </main>
  );
}
```
Error:
```
Error: Converting circular structure to JSON
--> starting at object with constructor 'Text'
| property 'next' -> object with constructor 'Element'
--- property 'prev' closes the circle
at test (rsc://React/Server/webpack-internal:///(rsc)/./app/page.tsx?0:20:33)
at Object.replace (rsc://React/Server/webpack-internal:///(rsc)/./app/page.tsx?1:22:21)
at Page (rsc://React/Server/webpack-internal:///(rsc)/./app/page.tsx?2:14:84)
at resolveErrorDev (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-client.browser.development.js:1792:63)
at processFullStringRow (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-client.browser.development.js:2071:17)
at processFullBinaryRow (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-client.browser.development.js:2059:7)
at progress (webpack-internal:///(app-pages-browser)/./node_modules/next/dist/compiled/react-server-dom-webpack/cjs/react-server-dom-webpack-client.browser.development.js:2262:17)
```
<img width="779" alt="image" src="https://github.com/user-attachments/assets/6b8e8c1e-c1a0-4667-91d9-8197201ecbef">
package.json
```json
{
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start"
  },
  "dependencies": {
    "html-react-parser": "^5.1.18",
    "next": "^15.0.2",
    "react": "^18.3.1",
    "react-dom": "^18.3.1"
  },
  "devDependencies": {
    "@types/node": "22.8.6",
    "@types/react": "18.3.12",
    "typescript": "5.6.3"
  }
}
```
I just pushed a commit with the code. Thanks so much for looking into it! I have based a very large Next.js project on this react parser, so I'm hoping there is an easy fix that doesn't require refactoring lots of code.
https://github.com/webplantmedia/html-react-parser/tree/master/examples/nextjs
### Current vs. Expected behavior
The bug is not being able to pass a complex object to different server functions or server components. It was not an issue in Next 14.
### Provide environment information
```bash
chrisb@Chriss-MacBook-Pro nextjs % npm run info
> info
> next info
warning package.json: No license field
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6000
Available memory (MB): 16384
Available CPU cores: 10
Binaries:
Node: 20.15.1
npm: 10.7.0
Yarn: 1.22.22
pnpm: N/A
Relevant Packages:
next: 15.0.2 // Latest available version is detected (15.0.2).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Not sure
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
I tested with next 15.0.2 | bug | low | Critical |
2,629,679,286 | godot | `RD::texture_create_shared_from_slice` become very slow when used extensively on a `Texture2DArrayRD` | ### Tested versions
- Reproducible in Godot v4.3.1.rc (725f50752)
### System information
Ubuntu 22.04.5 LTS 22.04 - X11 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1070 (nvidia; 535.183.01) - Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz (8 Threads)
### Issue description
I noticed a performance issue in `RenderingDevice::texture_create_shared_from_slice`. In my game I use a `Texture2DArrayRD` and create 9 shared textures for each layer (the mipmaps). When I end up with 2000 shared textures, my game starts jittering a lot.
I profiled my game using [Tracy](https://github.com/wolfpld/tracy) and the issue is with this line of code: `Texture texture = *src_texture;` in `RenderingDevice::texture_create_shared_from_slice`. 80% of the duration of `RenderingDevice::texture_create_shared_from_slice` can be spent there.
The `Texture` class has a member named `slice_trackers` which is a hashmap tracking the shared textures (correct me if I am wrong). It's the copy of this hashmap which slows down the `Texture` copy. On my system the copy takes about 50ns when the hashmap is almost empty and can reach 200ยตs or more when it contains 2500+ elements:

Since I can create 100 shared textures per frame, I end up with frame times of 40 ms or more...
IMO this can be solved by not copying the `slice_trackers` member. `slice_trackers` is only used by the owner texture and not by the shared ones. In the current implementation we have `Texture texture = *src_texture;` and then `texture->slice_trackers.clear();` comes later in `RD::_texture_make_mutable`.
I am considering creating a PR replacing the line `Texture texture = *src_texture;` with something like `Texture texture = src_texture->duplicate_as_shared_texture();`, where `Texture duplicate_as_shared_texture() const` copies every member except `slice_trackers`.
I tested this fix and got the expected results (these are the durations of `RenderingDevice::texture_create_shared_from_slice`):

I am worried about maintainability and whether you would like to support such a use case in the first place, so before creating an MRP (which requires some work) and a PR I would like to get some feedback.
### Steps to reproduce
- Create a `Texture2DArrayRD` with 1024 layers
- Call 10 times `RD::texture_create_shared_from_slice` for each layer
### Minimal reproduction project (MRP)
[mrp.zip](https://github.com/user-attachments/files/17607001/mrp.zip)
| enhancement,discussion,topic:rendering,performance | low | Major |
2,629,691,870 | godot | Project setting `debug/shapes/collision/shape_color` requires a restart to take effect, but does not prompt restart | ### Tested versions
4.3
### System information
Windows 10
### Issue description
The project setting `debug/shapes/collision/shape_color` requires a restart to take effect, but does not prompt the user to restart. Likely a similar problem to https://github.com/godotengine/godot/issues/82813.
### Steps to reproduce
- Add new 3D Scene.
- Add a CollisionShape3D. Add a SphereShape resource to the collision shape.
- Observe default bluish teal color for the collision shape.
- Open Project Settings, change `debug/shapes/collision/shape_color` to another color. Close project settings, observe that the shape's outline has not changed.
- Reload current project. Observe that the color has changed.
### Minimal reproduction project (MRP)
N/A | bug,topic:editor | low | Critical |
2,629,706,826 | PowerToys | [Image Resizer]: Option to force Fallback encoder | ### Description of the new feature / enhancement
Add an option to the **Image Resizer** tool to always apply the **Fallback encoder**, regardless if the encoder of the original format is available, so the tool can resize and re-encode images on a different format in a single step.
### Scenario when this would be used?
With the option enabled a user can select a bunch of PNG images on the File Explorer and resize and re-encode them in JPG all in one go. Useful when dealing with several images on various formats that should be optimized, in both format and size, for deployment e.g. web.
### Supporting information
_No response_ | Idea-Enhancement,Help Wanted,Product-Image Resizer | low | Minor |
2,629,707,382 | vscode | `OutputChannel`/`LogOutputChannel` `hide()` method doesn't work | <!-- โ ๏ธโ ๏ธ Do Not Delete This! bug_report_template โ ๏ธโ ๏ธ -->
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.96.0-insider
- OS Version: Windows 11/Linux (Tested on Windows Desktop and in Github Codespaces [not web])
## Reproduction
Repro branch here: https://github.com/JustinGrote/_PR-FORK-vscode-extension-samples/tree/issue/cannotHideOutputChannel/helloworld-sample
```typescript
import { ExtensionContext, window } from "vscode";

export function activate(context: ExtensionContext) {
  const outputChannel = window.createOutputChannel("OutputChannel");
  const logOutputChannel = window.createOutputChannel("LogOutputChannel", { log: true });

  outputChannel.hide();
  logOutputChannel.hide();
}
```
### Expected
Output Windows are hidden
### Actual

| info-needed | low | Critical |
2,629,723,401 | pytorch | performance bug: flex attention much slower than dense attention | ### ๐ Describe the bug
With a 2D spatial neighborhood pattern, flex attention is orders of magnitude slower than dense attention:
```
hlc=2
seq_length : 192
flex attention : 0.0015106382369995117 [s]
dense attention : 3.8884878158569336e-05 [s]

hlc=3
seq_length : 768
flex attention : 0.0015071055889129639 [s]
dense attention : 3.041529655456543e-05 [s]

hlc=4
seq_length : 3072
flex attention : 0.020486905336380003 [s]
dense attention : 3.140068054199219e-05 [s]
```
`hlc` is a parameter that essentially controls the number of cells. The sparsity pattern is the 1-ring neighborhood of each cell.
```
import time
import warnings

import numpy as np
import torch
from torch.nn.attention.flex_attention import flex_attention, create_mask, create_block_mask
import astropy_healpix as hp

hlc = 3
num_healpix_cells = 12 * 4**hlc
print(f'hlc={hlc}')
print(f'seq_length : {num_healpix_cells}')

num_heads = 8
dim_embed = 128
bs = 4

q = torch.ones(bs, num_heads, num_healpix_cells, dim_embed, dtype=torch.float16, device='cuda')
k = torch.ones(bs, num_heads, num_healpix_cells, dim_embed, dtype=torch.float16, device='cuda')
v = torch.ones(bs, num_heads, num_healpix_cells, dim_embed, dtype=torch.float16, device='cuda')

with warnings.catch_warnings(action="ignore"):
    nbours = hp.neighbours(np.arange(num_healpix_cells), 2**hlc, order='nested').transpose()

# build adjacency matrix (smarter ways to do it ...)
nbours_mat = torch.zeros((num_healpix_cells, num_healpix_cells), dtype=torch.bool, device='cuda')
for i in range(num_healpix_cells):
    for j in nbours[i]:
        nbours_mat[i, j] = True if j >= 0 else False

# create sparse block mask for flex attention
def sparse_mask(b, h, q_idx, kv_idx):
    # return kv_idx in nbours[q_idx]
    return nbours_mat[q_idx, kv_idx]

block_mask = create_block_mask(sparse_mask, B=None, H=None, Q_LEN=dim_embed, KV_LEN=dim_embed)

# experiments

# warmup
for i in range(10):
    qp = flex_attention(q, k, v, block_mask=block_mask)

t_start = time.time()
for i in range(1000):
    qp = flex_attention(q, k, v, block_mask=block_mask)
print(f'flex attention : {(time.time() - t_start) / 1000.} [s]', flush=True)

# warmup
for i in range(10):
    with torch.nn.attention.sdpa_kernel(torch.nn.attention.SDPBackend.FLASH_ATTENTION):
        qp = torch.nn.functional.scaled_dot_product_attention(q, k, v)

t_start = time.time()
for i in range(1000):
    with torch.nn.attention.sdpa_kernel(torch.nn.attention.SDPBackend.FLASH_ATTENTION):
        qp = torch.nn.functional.scaled_dot_product_attention(q, k, v)
print(f'dense attention : {(time.time() - t_start) / 1000.} [s]', flush=True)
```
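Two notes on the benchmark above (my observations, not part of the original report): `create_block_mask` is passed `Q_LEN=dim_embed` and `KV_LEN=dim_embed` where the sequence length `num_healpix_cells` was presumably intended, and `flex_attention` is designed to be wrapped in `torch.compile`, since eager calls fall back to a slow reference path. In addition, timing CUDA work with `time.time()` and no `torch.cuda.synchronize()` mostly measures kernel-launch overhead. A minimal stdlib-only harness that makes the synchronization explicit (the torch usage in the docstring is a sketch, not executed here):

```python
import time

def bench(fn, sync=lambda: None, warmup=10, iters=1000):
    """Average seconds per call of `fn`.

    `sync` flushes pending asynchronous work before and after timing;
    for CUDA, pass torch.cuda.synchronize. Usage sketch (assumptions):

        compiled = torch.compile(flex_attention)
        bench(lambda: compiled(q, k, v, block_mask=block_mask),
              sync=torch.cuda.synchronize)
    """
    for _ in range(warmup):
        fn()
    sync()  # make sure warmup work is finished before starting the clock
    t0 = time.perf_counter()
    for _ in range(iters):
        fn()
    sync()  # wait for all timed work to actually complete
    return (time.perf_counter() - t0) / iters
```

With the sequence lengths corrected and both paths timed through such a harness, the comparison should reflect kernel execution rather than launch latency.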
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux release 8.8 (Ootpa) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-18)
Clang version: 15.0.7 (Red Hat 15.0.7-1.module+el8.8.0+17939+b58878af)
CMake version: version 3.20.2
Libc version: glibc-2.28
Python version: 3.11.10 (main, Sep 27 2024, 08:55:04) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] (64-bit runtime)
Python platform: Linux-4.18.0-477.43.1.el8_8.x86_64-x86_64-with-glibc2.28
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-255
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7H12 64-Core Processor
Stepping: 0
CPU MHz: 2600.000
CPU max MHz: 2600.0000
CPU min MHz: 1500.0000
BogoMIPS: 5200.23
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-15,128-143
NUMA node1 CPU(s): 16-31,144-159
NUMA node2 CPU(s): 32-47,160-175
NUMA node3 CPU(s): 48-63,176-191
NUMA node4 CPU(s): 64-79,192-207
NUMA node5 CPU(s): 80-95,208-223
NUMA node6 CPU(s): 96-111,224-239
NUMA node7 CPU(s): 112-127,240-255
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @ezyang @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng | triaged,oncall: pt2,module: higher order operators,module: pt2-dispatcher,module: flex attention | low | Critical |
2,629,735,013 | vscode | Incorrect scaling detection on Linux (GNOME) |
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: v1.95.1
- OS Version: Fedora 41 (GNOME)
Steps to Reproduce:
1. Connect 2 DP monitors to your system
2. Open VS Code on primary screen
3. Change the scaling on the second screen
VS Code detects this change, but it doesn't respect the monitor the update was intended for. Sometimes it also results in a broken window where the window size did not update but the content did scale. If this happens, you get a window where the content is zoomed and there are no window decorations, as they were also scaled, so your only option is to Alt+F4 and restart VS Code.
| bug,upstream,linux,electron,multi-monitor | low | Critical |
2,629,745,388 | pytorch | [export] `run_decomposition` fails for permute->view sequence | ### 🐛 Describe the bug
I came across this issue with the MAISI network from MONAI.
To reproduce, you would need to pull the branches from:
https://github.com/Project-MONAI/MONAI/pull/8153 and https://github.com/Project-MONAI/model-zoo/pull/701
In https://github.com/Project-MONAI/model-zoo/pull/701/files#diff-03a91f505707ef6644547abb4c5fd665e73003b4e828a185a4c71707f73b4ef5:
if I change line 19 from:
`"controlnet": "$trt_compile(@controlnet_def.to(@device), @trained_controlnet_path)"`
to:
`"controlnet": "$trt_compile(@controlnet_def.to(@device), @trained_controlnet_path, args=@c_trt_args)"`
and run
`python -m monai.bundle run --config_file "['configs/inference.json', 'configs/inference_trt.json', 'configs/256.json']"`
in model-zoo/models/maisi_ct_generative,
the following error would come up with 2.6.0.dev20241010+cu124 during export:
E1031 12:58:15.024000 1613897 torch/_subclasses/fake_tensor.py:2051] ValueError: Cannot view a tensor with shape torch.Size([1, 4096, 8, 32]) and strides (1048576, 32, 131072, 1) as a tensor with shape (1, 4096, 256)!
Non-dynamo export is successful.
### Versions
Pytorch nightly.
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | oncall: pt2,export-triage-review,oncall: export | medium | Critical |
2,629,790,911 | vscode | `LogOutputChannel` LogLevel property race condition if Global Log Level and Output Panel Log Level are not the same |
Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.96-insiders
- OS Version: Windows/Linux (Tested locally and in Codespaces)
## Reproduction
1. Start this branch in Codespaces and use Launch task Run Extension [Isolated Profile]
https://github.com/JustinGrote/vscode-extension-issues/tree/issue/logOutputWindowTraceChange
1. Change log level to something other than info
1. Choose "Restart Extension Host"
Relevant Code:
https://github.com/JustinGrote/vscode-extension-issues/blob/issue/logOutputWindowTraceChange/src/extension.ts
## Expected
A `LogOutputChannel` starts with the `logLevel` that the user preference has specified
## Actual
Always starts at `Info` and only gets set to the preferred level some time later; however, logs seem to filter normally.
EDIT: This only happens if the default log level and the Output pane log level are not the same, and "Set as Default" has not been used to modify the `argv.json`. If the default log level and the Output pane log level are the same, onDidChangeLogLevel does not fire later and the startup log level is correct.
https://github.com/user-attachments/assets/f7e53748-d024-40cf-a8cb-b47eb7418a0d
## Relevance
I have a custom logger that relies on checking the logLevel and does a no-op if the message does not meet the required level.
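As a stopgap for such a logger, one can wait briefly for the level to settle before trusting `logLevel`. A hedged sketch with stand-in types (the real `vscode.LogOutputChannel` exposes `logLevel` and `onDidChangeLogLevel`; everything else here is invented for illustration):

```typescript
type Disposable = { dispose(): void };

// Minimal stand-in for vscode.LogOutputChannel (only what this sketch needs).
interface LogChannelLike {
  logLevel: number;
  onDidChangeLogLevel(cb: (level: number) => void): Disposable;
}

// Resolve with the first level-change event, or fall back to the current
// property after `timeoutMs` if no change arrives.
function waitForLogLevel(channel: LogChannelLike, timeoutMs = 200): Promise<number> {
  return new Promise((resolve) => {
    const sub = channel.onDidChangeLogLevel((level) => {
      sub.dispose();
      resolve(level);
    });
    setTimeout(() => {
      sub.dispose();
      resolve(channel.logLevel); // resolving twice is a harmless no-op
    }, timeoutMs);
  });
}
```

The 200 ms default is arbitrary; this only narrows the race described above rather than eliminating it.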
## Potential Fixes
- Update the property before processing any further log calls
- Provide a promise that can be awaited to know when the `LogOutputChannel` is ready to receive logs at the user-preferred log level.
- I considered waiting for onDidChangeLogLevel, but it does not fire if the user preference is `info`; a (currently spurious) onDidChangeLogLevel event for `info` as well would also suffice. | bug,log | low | Critical |
2,629,792,147 | godot | Zooming with scroll is broken with the Game view camera override | ### Tested versions
v4.4.dev.custom_build [c6c464cf9]
### System information
Godot v4.4.dev (c6c464cf9) - macOS 14.5.0 - Multi-window, 1 monitor - Metal (Forward+) - integrated Apple M1 Max (Apple7) - Apple M1 Max (10 threads)
### Issue description
It's a bit hard to explain or even understand what exactly is going on, but a single mousewheel tick can throw the camera several hundred meters away. It's not just hyper-sensitive: it also appears to be inverted and wrapped around some distance range, so light scrolling makes the camera jitter around uncontrollably.
https://github.com/user-attachments/assets/0536362a-4449-4e32-8a6d-3413a686a31f
### Steps to reproduce
1. Run the project
2. Switch to the Game mode
3. Click 3D
4. Click on the camera override button
5. Scroll in game
### Minimal reproduction project (MRP)
I just made a new empty project, so I don't think it's needed for debugging, but here it is just in case:
[new-game-project.zip](https://github.com/user-attachments/files/17603658/new-game-project.zip)
| bug,topic:editor | low | Critical |
2,629,793,439 | PowerToys | Ver 0.85.1: PowerRename no longer appears in Windows Explorer context menus | ### Microsoft PowerToys version
0.85.1
### Installation method
GitHub, PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
PowerRename
### Steps to reproduce
Right-click (or shift-right-click) on one or more files or folders in Windows Explorer.
### โ๏ธ Expected Behavior
Previously, there was a "PowerRename" entry in the context menu.
### โ Actual Behavior
"PowerRename" no longer appears in the context menu.
### Other Software
Windows 11, version 23H2 (OS build 22631.4391) | Issue-Bug,Needs-Triage | low | Minor |
2,629,797,441 | deno | Could not resolve 'npm:@ibm-cloud/platform-services@0.67.0' | ```ts
import * as ibm from "npm:@ibm-cloud/platform-services/iam-identity/v1";
```
```log
error: Unable to load /home/nicolas/.cache/deno/npm/registry.npmjs.org/@ibm-cloud/platform-services/0.67.0/iam-identity/v1 imported from file:///home/user/Programming/deno_test/ibm.ts
Caused by:
No such file or directory (os error 2)
```
Version: Deno 2.0.4
UPDATE: Wrong code snippet
| needs investigation,node resolution,bundler-resolution | low | Critical |
2,629,818,010 | vscode | Multiple `LogOutputChannel` with same name race condition |
Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.96-insiders
- OS Version: Windows/Linux (Tested locally and in Codespaces)
## Reproduction
Start codespace on https://github.com/JustinGrote/vscode-extension-issues/tree/issue/logAppendOrder and `Run Extension [Isolated Profile]`
[Relevant Code](https://github.com/JustinGrote/vscode-extension-issues/blob/issue/logAppendOrder/src/extension.ts)
## Expected
Logs appear in the order they are sent
```
Log1
Log2
Log1
```
## Actual
The second Log1 appears before Log2, presumably because the first channel is already "warmed up"

## Notes
If this is expected behavior, it should be documented in `vscode.d.ts`, because it can lead to out-of-order logs.
Further, if multiple instances of an output channel with the same name are unsupported, that should also be documented as such or, better, an exception should be thrown on creation. | bug,output | low | Critical |
2,629,887,112 | pytorch | OpOverloads are slow? | This came up when I was investigating https://github.com/pytorch/pytorch/issues/139500 (and in parallel @ezyang hypothesized about boxing vs unboxing performance).
Experiment: calling torch.stack on 5 tensors. We can vary the number of tensors, but in general the torch.* variant is faster than the torch.ops.* variant.
```py
import torch
from triton.testing import do_bench
num_tensors = 5
args = [torch.randn([]) for _ in range(num_tensors)]
def run_stack():
for _ in range(1000):
torch.stack(args)
def run_stack_op():
for _ in range(1000):
torch.ops.aten.stack.default(args)
mode = "mean"
print("num_tensors", num_tensors)
print(do_bench(run_stack, return_mode=mode))
print(do_bench(run_stack_op, return_mode=mode))
```
Output:
```
num_tensors 5
5.3403449058532715
8.467509269714355
num_tensors 1
3.627135753631592
6.683566570281982
```
Units are in ms.
I also benchmarked torch.sin vs torch.ops.aten.sin.default, and the results were similar:
```
2.5906755924224854 (torch.sin)
4.57119083404541 (torch.ops.aten.sin.default)
```
Since we make heavy use of OpOverload during PT2 tracing, and because metas *should be* inexpensive (I'm not sure if this is true), compilation time could probably benefit from a faster OpOverload interface.
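To isolate the Python-side per-call overhead specifically, a stdlib `timeit` harness with a min-over-repeats reduction is a common complement to `do_bench` (a methodology sketch, not from the original post; with CUDA tensors a synchronize around each repeat would also be needed). Callables such as `lambda: torch.ops.aten.sin.default(x)` are assumed:

```python
import timeit

def per_call_us(fn, number=100_000, repeat=5):
    """Best-of-`repeat` average per-call time of `fn`, in microseconds.

    Taking the min over repeats filters out scheduler noise; pass closures
    that capture pre-built arguments so setup cost stays out of the loop.
    """
    return min(timeit.repeat(fn, number=number, repeat=repeat)) / number * 1e6
```

Comparing `per_call_us(lambda: torch.sin(x))` against `per_call_us(lambda: torch.ops.aten.sin.default(x))` then reports the dispatch gap directly in microseconds per call.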
cc @ezyang @chauhang @penguinwu @bdhirsh @yf225 | triaged,module: custom-operators,oncall: pt2,module: pt2-dispatcher | low | Major |
2,629,919,960 | pytorch | inductor/test_move_constructors_to_cuda.py::TestMoveConstructorsToCuda::test_multi_gpu unit test failure | ### 🐛 Describe the bug
inductor/test_move_constructors_to_cuda.py::TestMoveConstructorsToCuda::test_multi_gpu FAILED [1.4059s] [ 14%]
==================================== RERUNS ====================================
__________________ TestMoveConstructorsToCuda.test_multi_gpu ___________________
Traceback (most recent call last):
File "/usr/lib/python3.12/unittest/case.py", line 58, in testPartExecutor
yield
File "/usr/lib/python3.12/unittest/case.py", line 634, in run
self._callTestMethod(testMethod)
File "/usr/lib/python3.12/unittest/case.py", line 589, in _callTestMethod
if method() is not None:
^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/testing/_internal/common_utils.py", line 2983, in wrapper
method(*args, **kwargs)
File "/opt/pytorch/pytorch/test/inductor/test_move_constructors_to_cuda.py", line 103, in test_multi_gpu
self._check_fn(foo, True, inp)
File "/opt/pytorch/pytorch/test/inductor/test_move_constructors_to_cuda.py", line 31, in _check_fn
FileCheck().check("cpp_fused").run(code[0])
RuntimeError: Expected to find "cpp_fused" but did not find it
Searched string:
# AOT ID: ['0_inference']
~~~~~~~~~ <--- HERE
from ctypes import c_void_p, c_long, c_int
import torch
From CHECK: cpp_fused
To execute this test, run the following from the base repo dir:
python test/inductor/test_move_constructors_to_cuda.py TestMoveConstructorsToCuda.test_multi_gpu
tested on A100x2 systems
### Versions
nightly versions | triaged | low | Critical |
2,629,922,632 | rust | Odd compiler panic | ### Code
I'm a Rust newbie working in a largeish production codebase, so it's difficult to find something minimally reproducible, but I'm getting a pretty gnarly internal compiler panic. It comes just from running `cargo check` (note: no problems in CI! Only on my local machine, a 2021 M1 Pro MacBook Pro).
### Meta
`rustc --version --verbose`:
```
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: aarch64-apple-darwin
release: 1.82.0
LLVM version: 19.1.1
```
### Error output
```
thread 'rustc' panicked at compiler/rustc_metadata/src/rmeta/def_path_hash_map.rs:23:54:
called `Option::unwrap()` on a `None` value
stack backtrace:
0: 0x10cb04bdc - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::habbf9c4f641febb1
1: 0x10a0d1770 - core::fmt::write::ha36a8060c13608ea
2: 0x10caf8b0c - std::io::Write::write_fmt::h431832c8ebcc85c9
3: 0x10cb072a4 - std::panicking::default_hook::{{closure}}::h4aa1f60327dfff6a
4: 0x10cb06ef8 - std::panicking::default_hook::h4ebc6eb4ae179807
5: 0x10ac42afc - <alloc[764fc8c78a1bb3e1]::boxed::Box<rustc_driver_impl[d9f1096c2de14668]::install_ice_hook::{closure#0}> as core[fafc87a594706398]::ops::function::Fn<(&dyn for<'a, 'b> core[fafc87a594706398]::ops::function::Fn<(&'a std[d8d90c69e022292b]::panic::PanicHookInfo<'b>,), Output = ()> + core[fafc87a594706398]::marker::Sync + core[fafc87a594706398]::marker::Send, &std[d8d90c69e022292b]::panic::PanicHookInfo)>>::call
6: 0x10cb08428 - std::panicking::rust_panic_with_hook::h6a84efe4dcab239c
7: 0x10cb07818 - std::panicking::begin_panic_handler::{{closure}}::h5eef292190467fef
8: 0x10cb05084 - std::sys::backtrace::__rust_end_short_backtrace::hd7e7925203f20af9
9: 0x10cb07514 - _rust_begin_unwind
10: 0x10f183b60 - core::panicking::panic_fmt::h410d3f147658259b
11: 0x10f183bcc - core::panicking::panic::hee236ca94fc05047
12: 0x10f183ae8 - core::option::unwrap_failed::h187ebe480b20e6be
13: 0x10b70adcc - <rustc_metadata[acfe361cc13a0072]::rmeta::decoder::cstore_impl::provide_cstore_hooks::{closure#0} as core[fafc87a594706398]::ops::function::FnOnce<(rustc_middle[1486d011505b3441]::query::plumbing::TyCtxtAt, rustc_span[12a1c67e1f6abb]::def_id::DefPathHash, rustc_span[12a1c67e1f6abb]::def_id::StableCrateId)>>::call_once
14: 0x10b7f27c4 - <rustc_middle[1486d011505b3441]::ty::context::TyCtxt>::def_path_hash_to_def_id
15: 0x10c0b735c - <rustc_query_impl[d98edaeb063d7c4c]::plumbing::query_callback<rustc_query_impl[d98edaeb063d7c4c]::query_impl::adt_def::QueryType>::{closure#0} as core[fafc87a594706398]::ops::function::FnOnce<(rustc_middle[1486d011505b3441]::ty::context::TyCtxt, rustc_query_system[1bcdf744069b5f02]::dep_graph::dep_node::DepNode)>>::call_once
16: 0x10c1b9f40 - <rustc_query_system[1bcdf744069b5f02]::dep_graph::graph::DepGraphData<rustc_middle[1486d011505b3441]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[d98edaeb063d7c4c]::plumbing::QueryCtxt>
17: 0x10c1b9ee8 - <rustc_query_system[1bcdf744069b5f02]::dep_graph::graph::DepGraphData<rustc_middle[1486d011505b3441]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[d98edaeb063d7c4c]::plumbing::QueryCtxt>
18: 0x10c1b9ee8 - <rustc_query_system[1bcdf744069b5f02]::dep_graph::graph::DepGraphData<rustc_middle[1486d011505b3441]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[d98edaeb063d7c4c]::plumbing::QueryCtxt>
19: 0x10c1b9ee8 - <rustc_query_system[1bcdf744069b5f02]::dep_graph::graph::DepGraphData<rustc_middle[1486d011505b3441]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[d98edaeb063d7c4c]::plumbing::QueryCtxt>
20: 0x10c1b9ee8 - <rustc_query_system[1bcdf744069b5f02]::dep_graph::graph::DepGraphData<rustc_middle[1486d011505b3441]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[d98edaeb063d7c4c]::plumbing::QueryCtxt>
21: 0x10c1b9cd4 - <rustc_query_system[1bcdf744069b5f02]::dep_graph::graph::DepGraphData<rustc_middle[1486d011505b3441]::dep_graph::DepsType>>::try_mark_green::<rustc_query_impl[d98edaeb063d7c4c]::plumbing::QueryCtxt>
22: 0x10c0666d4 - rustc_query_system[1bcdf744069b5f02]::query::plumbing::try_execute_query::<rustc_query_impl[d98edaeb063d7c4c]::DynamicConfig<rustc_query_system[1bcdf744069b5f02]::query::caches::DefaultCache<rustc_type_ir[920e70aa31006d3f]::canonical::Canonical<rustc_middle[1486d011505b3441]::ty::context::TyCtxt, rustc_middle[1486d011505b3441]::ty::ParamEnvAnd<rustc_middle[1486d011505b3441]::traits::query::type_op::ProvePredicate>>, rustc_middle[1486d011505b3441]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[d98edaeb063d7c4c]::plumbing::QueryCtxt, true>
23: 0x10c18a71c - rustc_query_impl[d98edaeb063d7c4c]::query_impl::type_op_prove_predicate::get_query_incr::__rust_end_short_backtrace
24: 0x10c8a8398 - <rustc_middle[1486d011505b3441]::traits::query::type_op::ProvePredicate as rustc_trait_selection[59cf63c55545eaab]::traits::query::type_op::QueryTypeOp>::perform_query
25: 0x10a6c3a10 - <rustc_middle[1486d011505b3441]::traits::query::type_op::ProvePredicate as rustc_trait_selection[59cf63c55545eaab]::traits::query::type_op::QueryTypeOp>::fully_perform_into
26: 0x10a5d5614 - <rustc_infer[6bbdea83bea8e02f]::infer::InferCtxt>::commit_if_ok::<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed, rustc_trait_selection[59cf63c55545eaab]::traits::query::type_op::custom::scrape_region_constraints<rustc_middle[1486d011505b3441]::ty::ParamEnvAnd<rustc_middle[1486d011505b3441]::traits::query::type_op::ProvePredicate>, (), <rustc_middle[1486d011505b3441]::ty::ParamEnvAnd<rustc_middle[1486d011505b3441]::traits::query::type_op::ProvePredicate> as rustc_trait_selection[59cf63c55545eaab]::traits::query::type_op::TypeOp>::fully_perform::{closure#1}>::{closure#0}>
27: 0x10a6b555c - <rustc_middle[1486d011505b3441]::ty::ParamEnvAnd<rustc_middle[1486d011505b3441]::traits::query::type_op::ProvePredicate> as rustc_trait_selection[59cf63c55545eaab]::traits::query::type_op::TypeOp>::fully_perform
28: 0x10a6803e0 - <rustc_borrowck[aa07daf8814d9f80]::type_check::TypeChecker>::fully_perform_op::<(), rustc_middle[1486d011505b3441]::ty::ParamEnvAnd<rustc_middle[1486d011505b3441]::traits::query::type_op::ProvePredicate>>
29: 0x10a6810cc - <rustc_borrowck[aa07daf8814d9f80]::type_check::TypeChecker>::normalize_and_prove_instantiated_predicates
30: 0x10a67c5e4 - <rustc_borrowck[aa07daf8814d9f80]::type_check::TypeVerifier as rustc_middle[1486d011505b3441]::mir::visit::Visitor>::visit_const_operand
31: 0x10a67d654 - <rustc_borrowck[aa07daf8814d9f80]::type_check::TypeVerifier as rustc_middle[1486d011505b3441]::mir::visit::Visitor>::visit_body
32: 0x10a67751c - rustc_borrowck[aa07daf8814d9f80]::type_check::type_check
33: 0x10a57f1d8 - rustc_borrowck[aa07daf8814d9f80]::nll::compute_regions
34: 0x10a544db8 - rustc_borrowck[aa07daf8814d9f80]::do_mir_borrowck
35: 0x10a53ba54 - rustc_borrowck[aa07daf8814d9f80]::mir_borrowck
36: 0x10c0e29b4 - rustc_query_impl[d98edaeb063d7c4c]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[d98edaeb063d7c4c]::query_impl::mir_borrowck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[1486d011505b3441]::query::erase::Erased<[u8; 8usize]>>
37: 0x10c120934 - <rustc_query_impl[d98edaeb063d7c4c]::query_impl::mir_borrowck::dynamic_query::{closure#2} as core[fafc87a594706398]::ops::function::FnOnce<(rustc_middle[1486d011505b3441]::ty::context::TyCtxt, rustc_span[12a1c67e1f6abb]::def_id::LocalDefId)>>::call_once
38: 0x10c096250 - rustc_query_system[1bcdf744069b5f02]::query::plumbing::try_execute_query::<rustc_query_impl[d98edaeb063d7c4c]::DynamicConfig<rustc_query_system[1bcdf744069b5f02]::query::caches::VecCache<rustc_span[12a1c67e1f6abb]::def_id::LocalDefId, rustc_middle[1486d011505b3441]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[d98edaeb063d7c4c]::plumbing::QueryCtxt, true>
39: 0x10c170f64 - rustc_query_impl[d98edaeb063d7c4c]::query_impl::mir_borrowck::get_query_incr::__rust_end_short_backtrace
40: 0x10b4fbe7c - <rustc_data_structures[4379925a6ea25aa8]::sync::parallel::ParallelGuard>::run::<(), rustc_data_structures[4379925a6ea25aa8]::sync::parallel::disabled::par_for_each_in<&[rustc_span[12a1c67e1f6abb]::def_id::LocalDefId], <rustc_middle[1486d011505b3441]::hir::map::Map>::par_body_owners<rustc_interface[8c972d485a8e2aa0]::passes::run_required_analyses::{closure#2}::{closure#0}>::{closure#0}>::{closure#0}::{closure#0}::{closure#0}>
41: 0x10b47d6c8 - rustc_interface[8c972d485a8e2aa0]::passes::analysis
42: 0x10c0e99e0 - rustc_query_impl[d98edaeb063d7c4c]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[d98edaeb063d7c4c]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[1486d011505b3441]::query::erase::Erased<[u8; 1usize]>>
43: 0x10c13a2a8 - <rustc_query_impl[d98edaeb063d7c4c]::query_impl::analysis::dynamic_query::{closure#2} as core[fafc87a594706398]::ops::function::FnOnce<(rustc_middle[1486d011505b3441]::ty::context::TyCtxt, ())>>::call_once
44: 0x10c04f4b8 - rustc_query_system[1bcdf744069b5f02]::query::plumbing::try_execute_query::<rustc_query_impl[d98edaeb063d7c4c]::DynamicConfig<rustc_query_system[1bcdf744069b5f02]::query::caches::SingleCache<rustc_middle[1486d011505b3441]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[d98edaeb063d7c4c]::plumbing::QueryCtxt, true>
45: 0x10c16269c - rustc_query_impl[d98edaeb063d7c4c]::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
46: 0x10ac83608 - <rustc_middle[1486d011505b3441]::ty::context::GlobalCtxt>::enter::<rustc_driver_impl[d9f1096c2de14668]::run_compiler::{closure#0}::{closure#1}::{closure#5}, core[fafc87a594706398]::result::Result<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>>
47: 0x10ac23cc4 - <rustc_interface[8c972d485a8e2aa0]::interface::Compiler>::enter::<rustc_driver_impl[d9f1096c2de14668]::run_compiler::{closure#0}::{closure#1}, core[fafc87a594706398]::result::Result<core[fafc87a594706398]::option::Option<rustc_interface[8c972d485a8e2aa0]::queries::Linker>, rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>>
48: 0x10ac38200 - <scoped_tls[db9af8800088675c]::ScopedKey<rustc_span[12a1c67e1f6abb]::SessionGlobals>>::set::<rustc_interface[8c972d485a8e2aa0]::util::run_in_thread_with_globals<rustc_interface[8c972d485a8e2aa0]::interface::run_compiler<core[fafc87a594706398]::result::Result<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>, rustc_driver_impl[d9f1096c2de14668]::run_compiler::{closure#0}>::{closure#1}, core[fafc87a594706398]::result::Result<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>>::{closure#0}::{closure#0}::{closure#0}, core[fafc87a594706398]::result::Result<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>>
49: 0x10ac349fc - std[d8d90c69e022292b]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[8c972d485a8e2aa0]::util::run_in_thread_with_globals<rustc_interface[8c972d485a8e2aa0]::interface::run_compiler<core[fafc87a594706398]::result::Result<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>, rustc_driver_impl[d9f1096c2de14668]::run_compiler::{closure#0}>::{closure#1}, core[fafc87a594706398]::result::Result<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[fafc87a594706398]::result::Result<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>>
50: 0x10ac410c0 - <<std[d8d90c69e022292b]::thread::Builder>::spawn_unchecked_<rustc_interface[8c972d485a8e2aa0]::util::run_in_thread_with_globals<rustc_interface[8c972d485a8e2aa0]::interface::run_compiler<core[fafc87a594706398]::result::Result<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>, rustc_driver_impl[d9f1096c2de14668]::run_compiler::{closure#0}>::{closure#1}, core[fafc87a594706398]::result::Result<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[fafc87a594706398]::result::Result<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>>::{closure#1} as core[fafc87a594706398]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
51: 0x10cb12d44 - std::sys::pal::unix::thread::Thread::new::thread_start::hd88bc8e95f2ca709
52: 0x199dc72e4 - __pthread_deallocate
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.82.0 (f6e511eec 2024-10-15) running on aarch64-apple-darwin
note: compiler flags: --crate-type lib -C embed-bitcode=no -C incremental=[REDACTED] -C strip=debuginfo
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [type_op_prove_predicate] evaluating `type_op_prove_predicate` `ProvePredicate { predicate: Binder { value: TraitPredicate(<diesel::query_builder::update_statement::UpdateStatement<db_schema::schema::ob_configuration::table, diesel::query_builder::where_clause::WhereClause<diesel::expression::grouped::Grouped<diesel::expression::operators::And<diesel::expression::grouped::Grouped<diesel::expression::operators::And<diesel::expression::grouped::Grouped<diesel::expression::operators::Eq<db_schema::schema::ob_configuration::columns::id, diesel::expression::bound::Bound<diesel::sql_types::Text, &newtypes::id::basic::ObConfigurationId>>>, diesel::expression::grouped::Grouped<diesel::expression::operators::Eq<db_schema::schema::ob_configuration::columns::tenant_id, diesel::expression::bound::Bound<diesel::sql_types::Text, &newtypes::id::basic::TenantId>>>>>, diesel::expression::grouped::Grouped<diesel::expression::operators::Eq<db_schema::schema::ob_configuration::columns::is_live, diesel::expression::bound::Bound<diesel::sql_types::Bool, bool>>>>>>, (core::option::Option<diesel::query_builder::update_statement::changeset::Assign<diesel::query_builder::update_statement::changeset::ColumnWrapperForUpdate<db_schema::schema::ob_configuration::columns::name>, diesel::expression::bound::Bound<diesel::sql_types::Text, alloc::string::String>>>, core::option::Option<diesel::query_builder::update_statement::changeset::Assign<diesel::query_builder::update_statement::changeset::ColumnWrapperForUpdate<db_schema::schema::ob_configuration::columns::status>, diesel::expression::bound::Bound<diesel::sql_types::Text, newtypes::db_types::ob_config::ApiKeyStatus>>>, core::option::Option<diesel::query_builder::update_statement::changeset::Assign<diesel::query_builder::update_statement::changeset::ColumnWrapperForUpdate<db_schema::schema::ob_configuration::columns::verification_checks>, 
diesel::expression::bound::Bound<diesel::sql_types::Nullable<diesel::pg::types::sql_types::Array<diesel::sql_types::Nullable<diesel::pg::types::sql_types::Jsonb>>>, alloc::vec::Vec<newtypes::db_types::verification_check::VerificationCheck>>>>, core::option::Option<diesel::query_builder::update_statement::changeset::Assign<diesel::query_builder::update_statement::changeset::ColumnWrapperForUpdate<db_schema::schema::ob_configuration::columns::prompt_for_passkey>, diesel::expression::bound::Bound<diesel::sql_types::Bool, bool>>>, core::option::Option<diesel::query_builder::update_statement::changeset::Assign<diesel::query_builder::update_statement::changeset::ColumnWrapperForUpdate<db_schema::schema::ob_configuration::columns::allow_reonboard>, diesel::expression::bound::Bound<diesel::sql_types::Bool, bool>>>, core::option::Option<diesel::query_builder::update_statement::changeset::Assign<diesel::query_builder::update_statement::changeset::ColumnWrapperForUpdate<db_schema::schema::ob_configuration::columns::skip_confirm>, diesel::expression::bound::Bound<diesel::sql_types::Bool, bool>>>)> as diesel::query_builder::AsQuery>, polarity:Positive), bound_vars: [] } }`
#1 [mir_borrowck] borrow-checking `models::ob_configuration::<impl at components/db/core/src/models/ob_configuration.rs:450:1: 450:21>::update`
end of query stack
there was a panic while trying to force a dep node
try_mark_green dep node stack:
#0 adt_sized_constraint(thread 'rustc' panicked at compiler/rustc_metadata/src/rmeta/def_path_hash_map.rs:23:54:
called `Option::unwrap()` on a `None` value
stack backtrace:
0: 0x10cb04bdc - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::habbf9c4f641febb1
1: 0x10a0d1770 - core::fmt::write::ha36a8060c13608ea
2: 0x10caf8b0c - std::io::Write::write_fmt::h431832c8ebcc85c9
3: 0x10cb072a4 - std::panicking::default_hook::{{closure}}::h4aa1f60327dfff6a
4: 0x10cb06ef8 - std::panicking::default_hook::h4ebc6eb4ae179807
5: 0x10ac42afc - <alloc[764fc8c78a1bb3e1]::boxed::Box<rustc_driver_impl[d9f1096c2de14668]::install_ice_hook::{closure#0}> as core[fafc87a594706398]::ops::function::Fn<(&dyn for<'a, 'b> core[fafc87a594706398]::ops::function::Fn<(&'a std[d8d90c69e022292b]::panic::PanicHookInfo<'b>,), Output = ()> + core[fafc87a594706398]::marker::Sync + core[fafc87a594706398]::marker::Send, &std[d8d90c69e022292b]::panic::PanicHookInfo)>>::call
6: 0x10cb08428 - std::panicking::rust_panic_with_hook::h6a84efe4dcab239c
7: 0x10cb07818 - std::panicking::begin_panic_handler::{{closure}}::h5eef292190467fef
8: 0x10cb05084 - std::sys::backtrace::__rust_end_short_backtrace::hd7e7925203f20af9
9: 0x10cb07514 - _rust_begin_unwind
10: 0x10f183b60 - core::panicking::panic_fmt::h410d3f147658259b
11: 0x10f183bcc - core::panicking::panic::hee236ca94fc05047
12: 0x10f183ae8 - core::option::unwrap_failed::h187ebe480b20e6be
13: 0x10b70adcc - <rustc_metadata[acfe361cc13a0072]::rmeta::decoder::cstore_impl::provide_cstore_hooks::{closure#0} as core[fafc87a594706398]::ops::function::FnOnce<(rustc_middle[1486d011505b3441]::query::plumbing::TyCtxtAt, rustc_span[12a1c67e1f6abb]::def_id::DefPathHash, rustc_span[12a1c67e1f6abb]::def_id::StableCrateId)>>::call_once
14: 0x10b7f27c4 - <rustc_middle[1486d011505b3441]::ty::context::TyCtxt>::def_path_hash_to_def_id
15: 0x10b4d9224 - rustc_interface[8c972d485a8e2aa0]::callbacks::dep_node_debug
16: 0x10c2fb210 - <rustc_query_system[1bcdf744069b5f02]::dep_graph::dep_node::DepNode as core[fafc87a594706398]::fmt::Debug>::fmt
17: 0x10a0d1770 - core::fmt::write::ha36a8060c13608ea
18: 0x10caf6eb0 - <&std::io::stdio::Stderr as std::io::Write>::write_fmt::hc885a26bdbfbb5f3
19: 0x10caf7970 - std::io::stdio::_eprint::h1cab3cc779ae9153
20: 0x10f315914 - rustc_query_system[1bcdf744069b5f02]::dep_graph::graph::print_markframe_trace::<rustc_middle[1486d011505b3441]::dep_graph::DepsType>
21: 0x10c1b9fcc - <rustc_query_system[1bcdf744069b5f02]::dep_graph::graph::DepGraphData<rustc_middle[1486d011505b3441]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[d98edaeb063d7c4c]::plumbing::QueryCtxt>
22: 0x10c1b9ee8 - <rustc_query_system[1bcdf744069b5f02]::dep_graph::graph::DepGraphData<rustc_middle[1486d011505b3441]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[d98edaeb063d7c4c]::plumbing::QueryCtxt>
23: 0x10c1b9ee8 - <rustc_query_system[1bcdf744069b5f02]::dep_graph::graph::DepGraphData<rustc_middle[1486d011505b3441]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[d98edaeb063d7c4c]::plumbing::QueryCtxt>
24: 0x10c1b9ee8 - <rustc_query_system[1bcdf744069b5f02]::dep_graph::graph::DepGraphData<rustc_middle[1486d011505b3441]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[d98edaeb063d7c4c]::plumbing::QueryCtxt>
25: 0x10c1b9ee8 - <rustc_query_system[1bcdf744069b5f02]::dep_graph::graph::DepGraphData<rustc_middle[1486d011505b3441]::dep_graph::DepsType>>::try_mark_previous_green::<rustc_query_impl[d98edaeb063d7c4c]::plumbing::QueryCtxt>
26: 0x10c1b9cd4 - <rustc_query_system[1bcdf744069b5f02]::dep_graph::graph::DepGraphData<rustc_middle[1486d011505b3441]::dep_graph::DepsType>>::try_mark_green::<rustc_query_impl[d98edaeb063d7c4c]::plumbing::QueryCtxt>
27: 0x10c0666d4 - rustc_query_system[1bcdf744069b5f02]::query::plumbing::try_execute_query::<rustc_query_impl[d98edaeb063d7c4c]::DynamicConfig<rustc_query_system[1bcdf744069b5f02]::query::caches::DefaultCache<rustc_type_ir[920e70aa31006d3f]::canonical::Canonical<rustc_middle[1486d011505b3441]::ty::context::TyCtxt, rustc_middle[1486d011505b3441]::ty::ParamEnvAnd<rustc_middle[1486d011505b3441]::traits::query::type_op::ProvePredicate>>, rustc_middle[1486d011505b3441]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[d98edaeb063d7c4c]::plumbing::QueryCtxt, true>
28: 0x10c18a71c - rustc_query_impl[d98edaeb063d7c4c]::query_impl::type_op_prove_predicate::get_query_incr::__rust_end_short_backtrace
29: 0x10c8a8398 - <rustc_middle[1486d011505b3441]::traits::query::type_op::ProvePredicate as rustc_trait_selection[59cf63c55545eaab]::traits::query::type_op::QueryTypeOp>::perform_query
30: 0x10a6c3a10 - <rustc_middle[1486d011505b3441]::traits::query::type_op::ProvePredicate as rustc_trait_selection[59cf63c55545eaab]::traits::query::type_op::QueryTypeOp>::fully_perform_into
31: 0x10a5d5614 - <rustc_infer[6bbdea83bea8e02f]::infer::InferCtxt>::commit_if_ok::<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed, rustc_trait_selection[59cf63c55545eaab]::traits::query::type_op::custom::scrape_region_constraints<rustc_middle[1486d011505b3441]::ty::ParamEnvAnd<rustc_middle[1486d011505b3441]::traits::query::type_op::ProvePredicate>, (), <rustc_middle[1486d011505b3441]::ty::ParamEnvAnd<rustc_middle[1486d011505b3441]::traits::query::type_op::ProvePredicate> as rustc_trait_selection[59cf63c55545eaab]::traits::query::type_op::TypeOp>::fully_perform::{closure#1}>::{closure#0}>
32: 0x10a6b555c - <rustc_middle[1486d011505b3441]::ty::ParamEnvAnd<rustc_middle[1486d011505b3441]::traits::query::type_op::ProvePredicate> as rustc_trait_selection[59cf63c55545eaab]::traits::query::type_op::TypeOp>::fully_perform
33: 0x10a6803e0 - <rustc_borrowck[aa07daf8814d9f80]::type_check::TypeChecker>::fully_perform_op::<(), rustc_middle[1486d011505b3441]::ty::ParamEnvAnd<rustc_middle[1486d011505b3441]::traits::query::type_op::ProvePredicate>>
34: 0x10a6810cc - <rustc_borrowck[aa07daf8814d9f80]::type_check::TypeChecker>::normalize_and_prove_instantiated_predicates
35: 0x10a67c5e4 - <rustc_borrowck[aa07daf8814d9f80]::type_check::TypeVerifier as rustc_middle[1486d011505b3441]::mir::visit::Visitor>::visit_const_operand
36: 0x10a67d654 - <rustc_borrowck[aa07daf8814d9f80]::type_check::TypeVerifier as rustc_middle[1486d011505b3441]::mir::visit::Visitor>::visit_body
37: 0x10a67751c - rustc_borrowck[aa07daf8814d9f80]::type_check::type_check
38: 0x10a57f1d8 - rustc_borrowck[aa07daf8814d9f80]::nll::compute_regions
39: 0x10a544db8 - rustc_borrowck[aa07daf8814d9f80]::do_mir_borrowck
40: 0x10a53ba54 - rustc_borrowck[aa07daf8814d9f80]::mir_borrowck
41: 0x10c0e29b4 - rustc_query_impl[d98edaeb063d7c4c]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[d98edaeb063d7c4c]::query_impl::mir_borrowck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[1486d011505b3441]::query::erase::Erased<[u8; 8usize]>>
42: 0x10c120934 - <rustc_query_impl[d98edaeb063d7c4c]::query_impl::mir_borrowck::dynamic_query::{closure#2} as core[fafc87a594706398]::ops::function::FnOnce<(rustc_middle[1486d011505b3441]::ty::context::TyCtxt, rustc_span[12a1c67e1f6abb]::def_id::LocalDefId)>>::call_once
43: 0x10c096250 - rustc_query_system[1bcdf744069b5f02]::query::plumbing::try_execute_query::<rustc_query_impl[d98edaeb063d7c4c]::DynamicConfig<rustc_query_system[1bcdf744069b5f02]::query::caches::VecCache<rustc_span[12a1c67e1f6abb]::def_id::LocalDefId, rustc_middle[1486d011505b3441]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[d98edaeb063d7c4c]::plumbing::QueryCtxt, true>
44: 0x10c170f64 - rustc_query_impl[d98edaeb063d7c4c]::query_impl::mir_borrowck::get_query_incr::__rust_end_short_backtrace
45: 0x10b4fbe7c - <rustc_data_structures[4379925a6ea25aa8]::sync::parallel::ParallelGuard>::run::<(), rustc_data_structures[4379925a6ea25aa8]::sync::parallel::disabled::par_for_each_in<&[rustc_span[12a1c67e1f6abb]::def_id::LocalDefId], <rustc_middle[1486d011505b3441]::hir::map::Map>::par_body_owners<rustc_interface[8c972d485a8e2aa0]::passes::run_required_analyses::{closure#2}::{closure#0}>::{closure#0}>::{closure#0}::{closure#0}::{closure#0}>
46: 0x10b47d6c8 - rustc_interface[8c972d485a8e2aa0]::passes::analysis
47: 0x10c0e99e0 - rustc_query_impl[d98edaeb063d7c4c]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[d98edaeb063d7c4c]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[1486d011505b3441]::query::erase::Erased<[u8; 1usize]>>
48: 0x10c13a2a8 - <rustc_query_impl[d98edaeb063d7c4c]::query_impl::analysis::dynamic_query::{closure#2} as core[fafc87a594706398]::ops::function::FnOnce<(rustc_middle[1486d011505b3441]::ty::context::TyCtxt, ())>>::call_once
49: 0x10c04f4b8 - rustc_query_system[1bcdf744069b5f02]::query::plumbing::try_execute_query::<rustc_query_impl[d98edaeb063d7c4c]::DynamicConfig<rustc_query_system[1bcdf744069b5f02]::query::caches::SingleCache<rustc_middle[1486d011505b3441]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[d98edaeb063d7c4c]::plumbing::QueryCtxt, true>
50: 0x10c16269c - rustc_query_impl[d98edaeb063d7c4c]::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
51: 0x10ac83608 - <rustc_middle[1486d011505b3441]::ty::context::GlobalCtxt>::enter::<rustc_driver_impl[d9f1096c2de14668]::run_compiler::{closure#0}::{closure#1}::{closure#5}, core[fafc87a594706398]::result::Result<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>>
52: 0x10ac23cc4 - <rustc_interface[8c972d485a8e2aa0]::interface::Compiler>::enter::<rustc_driver_impl[d9f1096c2de14668]::run_compiler::{closure#0}::{closure#1}, core[fafc87a594706398]::result::Result<core[fafc87a594706398]::option::Option<rustc_interface[8c972d485a8e2aa0]::queries::Linker>, rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>>
53: 0x10ac38200 - <scoped_tls[db9af8800088675c]::ScopedKey<rustc_span[12a1c67e1f6abb]::SessionGlobals>>::set::<rustc_interface[8c972d485a8e2aa0]::util::run_in_thread_with_globals<rustc_interface[8c972d485a8e2aa0]::interface::run_compiler<core[fafc87a594706398]::result::Result<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>, rustc_driver_impl[d9f1096c2de14668]::run_compiler::{closure#0}>::{closure#1}, core[fafc87a594706398]::result::Result<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>>::{closure#0}::{closure#0}::{closure#0}, core[fafc87a594706398]::result::Result<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>>
54: 0x10ac349fc - std[d8d90c69e022292b]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[8c972d485a8e2aa0]::util::run_in_thread_with_globals<rustc_interface[8c972d485a8e2aa0]::interface::run_compiler<core[fafc87a594706398]::result::Result<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>, rustc_driver_impl[d9f1096c2de14668]::run_compiler::{closure#0}>::{closure#1}, core[fafc87a594706398]::result::Result<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[fafc87a594706398]::result::Result<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>>
55: 0x10ac410c0 - <<std[d8d90c69e022292b]::thread::Builder>::spawn_unchecked_<rustc_interface[8c972d485a8e2aa0]::util::run_in_thread_with_globals<rustc_interface[8c972d485a8e2aa0]::interface::run_compiler<core[fafc87a594706398]::result::Result<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>, rustc_driver_impl[d9f1096c2de14668]::run_compiler::{closure#0}>::{closure#1}, core[fafc87a594706398]::result::Result<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[fafc87a594706398]::result::Result<(), rustc_span[12a1c67e1f6abb]::ErrorGuaranteed>>::{closure#1} as core[fafc87a594706398]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
56: 0x10cb12d44 - std::sys::pal::unix::thread::Thread::new::thread_start::hd88bc8e95f2ca709
57: 0x199dc72e4 - __pthread_deallocate
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.82.0 (f6e511eec 2024-10-15) running on aarch64-apple-darwin
note: compiler flags: --crate-type lib -C embed-bitcode=no -C incremental=[REDACTED] -C strip=debuginfo
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [type_op_prove_predicate] evaluating `type_op_prove_predicate` `ProvePredicate { predicate: Binder { value: TraitPredicate(<diesel::query_builder::update_statement::UpdateStatement<db_schema::schema::ob_configuration::table, diesel::query_builder::where_clause::WhereClause<diesel::expression::grouped::Grouped<diesel::expression::operators::And<diesel::expression::grouped::Grouped<diesel::expression::operators::And<diesel::expression::grouped::Grouped<diesel::expression::operators::Eq<db_schema::schema::ob_configuration::columns::id, diesel::expression::bound::Bound<diesel::sql_types::Text, &newtypes::id::basic::ObConfigurationId>>>, diesel::expression::grouped::Grouped<diesel::expression::operators::Eq<db_schema::schema::ob_configuration::columns::tenant_id, diesel::expression::bound::Bound<diesel::sql_types::Text, &newtypes::id::basic::TenantId>>>>>, diesel::expression::grouped::Grouped<diesel::expression::operators::Eq<db_schema::schema::ob_configuration::columns::is_live, diesel::expression::bound::Bound<diesel::sql_types::Bool, bool>>>>>>, (core::option::Option<diesel::query_builder::update_statement::changeset::Assign<diesel::query_builder::update_statement::changeset::ColumnWrapperForUpdate<db_schema::schema::ob_configuration::columns::name>, diesel::expression::bound::Bound<diesel::sql_types::Text, alloc::string::String>>>, core::option::Option<diesel::query_builder::update_statement::changeset::Assign<diesel::query_builder::update_statement::changeset::ColumnWrapperForUpdate<db_schema::schema::ob_configuration::columns::status>, diesel::expression::bound::Bound<diesel::sql_types::Text, newtypes::db_types::ob_config::ApiKeyStatus>>>, core::option::Option<diesel::query_builder::update_statement::changeset::Assign<diesel::query_builder::update_statement::changeset::ColumnWrapperForUpdate<db_schema::schema::ob_configuration::columns::verification_checks>, 
diesel::expression::bound::Bound<diesel::sql_types::Nullable<diesel::pg::types::sql_types::Array<diesel::sql_types::Nullable<diesel::pg::types::sql_types::Jsonb>>>, alloc::vec::Vec<newtypes::db_types::verification_check::VerificationCheck>>>>, core::option::Option<diesel::query_builder::update_statement::changeset::Assign<diesel::query_builder::update_statement::changeset::ColumnWrapperForUpdate<db_schema::schema::ob_configuration::columns::prompt_for_passkey>, diesel::expression::bound::Bound<diesel::sql_types::Bool, bool>>>, core::option::Option<diesel::query_builder::update_statement::changeset::Assign<diesel::query_builder::update_statement::changeset::ColumnWrapperForUpdate<db_schema::schema::ob_configuration::columns::allow_reonboard>, diesel::expression::bound::Bound<diesel::sql_types::Bool, bool>>>, core::option::Option<diesel::query_builder::update_statement::changeset::Assign<diesel::query_builder::update_statement::changeset::ColumnWrapperForUpdate<db_schema::schema::ob_configuration::columns::skip_confirm>, diesel::expression::bound::Bound<diesel::sql_types::Bool, bool>>>)> as diesel::query_builder::AsQuery>, polarity:Positive), bound_vars: [] } }`
#1 [mir_borrowck] borrow-checking `models::ob_configuration::<impl at components/db/core/src/models/ob_configuration.rs:450:1: 450:21>::update`
end of query stack
error: could not compile `db` (lib)
```
<details><summary><strong>Crazily enough, Cargo build works! </strong></summary>
<p>
```
<backtrace>
```
</p>
</details>
| I-ICE,T-compiler,A-incr-comp,C-bug,S-needs-repro | low | Critical |
2,629,928,241 | pytorch | boxing-unboxing overhead seems significant | https://gist.github.com/zou3519/b987e00a82c7e184b8896a5df7b0bfa9
Benchmarking two cases:
1. torch.ops.mylib.foo operator that has an Autograd key that takes unboxed inputs but a CPU key that boxes (via return to Python)
2. torch.ops.mylib.foo_cpp operator that has an Autograd key and CPU key (in cpp) that take unboxed inputs
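The cost difference between the two calling conventions can be sketched outside of PyTorch: a boxed call packs its arguments into a generic container that the callee later unpacks, while an unboxed call passes them straight through. This is only an illustrative micro-benchmark of the convention itself — `kernel`, `call_boxed`, and `call_unboxed` are made-up names for the sketch, not part of the gist above.

```python
import timeit

def kernel(a, b, c):
    # Stand-in for the actual CPU kernel: any cheap computation works.
    return a + b + c

def call_unboxed(a, b, c):
    # Unboxed: arguments flow straight through to the kernel.
    return kernel(a, b, c)

def call_boxed(a, b, c):
    # Boxed: arguments are first packed onto a generic "stack",
    # then unpacked again at the kernel boundary.
    stack = [a, b, c]
    return kernel(*stack)

assert call_unboxed(1, 2, 3) == call_boxed(1, 2, 3) == 6

t_unboxed = timeit.timeit(lambda: call_unboxed(1, 2, 3), number=100_000)
t_boxed = timeit.timeit(lambda: call_boxed(1, 2, 3), number=100_000)
print(f"unboxed: {t_unboxed:.4f}s  boxed: {t_boxed:.4f}s")
```

The extra pack/unpack step is the analogue of the boxing that the `foo` numbers below pay and the `foo_cpp` numbers avoid.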
```
num_tensors 5
2.7380013465881348 # clone
13.052228927612305 # foo
8.257509231567383 # foo_cpp
```
```
NB: We have an Autograd key that accepts unboxed inputs to emulate how built-in PyTorch operators work. If I delete the autograd registration for both operators, then it becomes a boxed fallback, which brings the numbers a lot closer together (both at around 8). It looks like one unboxing isn't bad, but boxing is. | triaged,module: dispatch | low | Minor |
2,629,936,631 | langchain | Tool use for fireworks.ai seems to be broken | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from pathlib import Path
from dotenv import load_dotenv
from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper
from langchain_core.messages import HumanMessage
from langchain_fireworks import ChatFireworks
load_dotenv(dotenv_path=Path(__file__).parents[1] / ".env")
LLM_API_KEY = os.environ.get("LLM_API_KEY") # Put api key here if no .env exists
wikipedia = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
tools = [wikipedia]
client_medium = ChatFireworks(
api_key=LLM_API_KEY,
model="accounts/fireworks/models/llama-v3p1-8b-instruct",
temperature=0,
)
llm_with_tools = client_medium.bind_tools(tools, tool_choice="wikipedia") # or tool_choice="any"
result = llm_with_tools.invoke([HumanMessage(content="What is stable diffusion")])
print(result.tool_calls) # returns []
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am working with the fireworks.ai client and I noticed that Llama 8B never uses a tool, even when tool_choice is set to the tool name or to "any" to force a tool call. This contradicts the documentation for this parameter (https://python.langchain.com/docs/how_to/tool_choice/), but I am not sure why it is happening.
Thanks!
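For context, OpenAI-compatible chat APIs (which Fireworks exposes) encode a forced tool call as a structured `tool_choice` object rather than a bare string, and `bind_tools` is expected to perform that translation. A sketch of the mapping — the `build_tool_choice` helper is hypothetical, shown only to illustrate the schema that should end up in the request:

```python
def build_tool_choice(tool_choice):
    """Translate LangChain-style tool_choice shorthand into the
    OpenAI-compatible request field (a sketch of the expected mapping)."""
    if tool_choice in ("any", "required"):
        # "any"/"required": the model must call *some* tool.
        return "required"
    # A bare tool name: the model must call that specific tool.
    return {"type": "function", "function": {"name": tool_choice}}

assert build_tool_choice("wikipedia") == {
    "type": "function",
    "function": {"name": "wikipedia"},
}
assert build_tool_choice("any") == "required"
```

If the provider never receives this structured field (or silently ignores it), `result.tool_calls` comes back empty exactly as observed above.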
### System Info
System Information
------------------
> OS: Linux
> OS Version: #47~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Oct 2 16:16:55 UTC 2
> Python Version: 3.11.10 (main, Oct 3 2024, 07:29:13) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.3.15
> langchain: 0.3.6
> langchain_community: 0.3.4
> langsmith: 0.1.139
> langchain_fireworks: 0.2.5
> langchain_openai: 0.2.5
> langchain_text_splitters: 0.3.1
> langgraph: 0.2.39
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> fireworks-ai: 0.15.7
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langgraph-checkpoint: 2.0.2
> langgraph-sdk: 0.1.35
> numpy: 1.26.4
> openai: 1.53.0
> orjson: 3.10.10
> packaging: 24.1
> pydantic: 2.8.2
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
| ๐ค:bug | low | Critical |
2,629,999,064 | next.js | Getting a fatal Turbopack error while trying to start and run. | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/withered-https-go8s7s
### To Reproduce
---------------------------
Panic: panicked at turbopack/crates/turbo-tasks-fs/src/glob.rs:179:25:
not yet implemented: glob char sequences are not implemented yet
Backtrace: 0: <unknown>
1: <unknown>
2: <unknown>
3: <unknown>
4: <unknown>
5: <unknown>
6: <unknown>
7: <unknown>
8: <unknown>
9: <unknown>
10: <unknown>
11: <unknown>
12: <unknown>
13: <unknown>
14: <unknown>
15: <unknown>
16: <unknown>
17: <unknown>
18: <unknown>
19: <unknown>
20: <unknown>
21: <unknown>
22: <unknown>
23: <unknown>
24: <unknown>
25: <unknown>
26: <unknown>
27: <unknown>
28: <unknown>
29: <unknown>
30: <unknown>
31: <unknown>
32: <unknown>
33: <unknown>
34: start_thread
35: clone
### Current vs. Expected behavior
While I am trying to start a Next.js TypeScript project, I get a fatal Turbopack error: "FATAL: An unexpected Turbopack error occurred. Please report the content of log."
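The panic message points at glob character sequences — bracket patterns like `[0-9]` — which Turbopack's glob parser has not implemented yet. For reference, this is how such sequences behave in a standard glob implementation (Python's `fnmatch` here); a pattern of this shape somewhere in the project's config or dependencies presumably triggers the panic:

```python
import fnmatch

# A glob "character sequence" matches exactly one character from a set/range.
assert fnmatch.fnmatch("file1.ts", "file[0-9].ts")
assert fnmatch.fnmatch("a.ts", "[ab].ts")
assert not fnmatch.fnmatch("file12.ts", "file[0-9].ts")  # exactly one char
print("glob character sequences behave as expected")
```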
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP Debian 5.10.149-1 (2022-10-17)
Available memory (MB): 24048
Available CPU cores: 10
Binaries:
Node: 18.20.4
npm: 10.9.0
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.2 // Latest available version is detected (15.0.2).
eslint-config-next: 15.0.2
react: 19.0.0-rc-02c0e824-20241028
react-dom: 19.0.0-rc-02c0e824-20241028
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | bug,Turbopack | low | Critical |
2,629,961,182 | deno | Wrong Deno Version reported on Windows Registry/Control Panel when upgrading via `deno upgrade`. | Version: Deno 2.0.4

**Windows Registry/Control Panel reported version:** 2.0.2
**Actual version:** 2.0.4 (upgraded via deno upgrade)
This also becomes a problem when you check the Deno version with winget, which reads the same value from the Windows Registry/Control Panel.
| windows,needs info | low | Minor |
2,629,999,064 | stable-diffusion-webui | [Bug]: Error upon loading SD3.5 medium | ### Checklist
- [x] The issue exists after disabling all extensions
- [x] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Tried to load \stableDiffusion35_medium_912387.safetensors, failed
### Steps to reproduce the problem
Load webui
Select model stableDiffusion35_medium_912387.safetensors
### What should have happened?
WebUI should load the model correctly
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo-2024-11-01-23-02.json](https://github.com/user-attachments/files/17604740/sysinfo-2024-11-01-23-02.json)
### Console logs
```Shell
venv "C:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing sd-webui-controlnet requirement: changing opencv-python version from 4.10.0.84 to 4.8.0
removing nvidia-cudnn-cu11
Launching Web UI with arguments:
C:\stable-diffusion-webui\venv\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
No module 'xformers'. Proceeding without it.
CivitAI Browser+: Aria2 RPC started
2024-11-02 00:00:45,972 - ControlNet - INFO - ControlNet v1.1.415
ControlNet preprocessor location: C:\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2024-11-02 00:00:46,062 - ControlNet - INFO - ControlNet v1.1.415
Loading weights [ee6a527295] from C:\stable-diffusion-webui\models\Stable-diffusion\********************.safetensors
Creating model from config: C:\stable-diffusion-webui\configs\v1-inference.yaml
C:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:797: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
[ERROR]: Config states C:\stable-diffusion-webui\config_states\civitai_subfolders.json, "created_at" does not exist
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 16.0s (prepare environment: 5.2s, import torch: 3.5s, import gradio: 1.1s, setup paths: 0.8s, initialize shared: 0.2s, other imports: 0.5s, load scripts: 3.6s, create ui: 0.9s, gradio launch: 0.2s).
Applying attention optimization: Doggettx... done.
Model loaded in 4.8s (load weights from disk: 0.5s, create model: 0.3s, apply weights to model: 3.7s, calculate empty prompt: 0.2s).
Reusing loaded model kizukiAnimeHentai_animeHentaiV4.safetensors [ee6a527295] to load stableDiffusion35_medium_912387.safetensors [11fe06e223]
Loading weights [11fe06e223] from C:\stable-diffusion-webui\models\Stable-diffusion\stableDiffusion35_medium_912387.safetensors
Creating model from config: C:\stable-diffusion-webui\configs\sd3-inference.yaml
C:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:797: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
changing setting sd_model_checkpoint to stableDiffusion35_medium_912387.safetensors [11fe06e223]: RuntimeError
Traceback (most recent call last):
File "C:\stable-diffusion-webui\modules\options.py", line 165, in set
option.onchange()
File "C:\stable-diffusion-webui\modules\call_queue.py", line 14, in f
res = func(*args, **kwargs)
File "C:\stable-diffusion-webui\modules\initialize_util.py", line 181, in <lambda>
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
File "C:\stable-diffusion-webui\modules\sd_models.py", line 977, in reload_model_weights
load_model(checkpoint_info, already_loaded_state_dict=state_dict)
File "C:\stable-diffusion-webui\modules\sd_models.py", line 845, in load_model
load_model_weights(sd_model, checkpoint_info, state_dict, timer)
File "C:\stable-diffusion-webui\modules\sd_models.py", line 440, in load_model_weights
model.load_state_dict(state_dict, strict=False)
File "C:\stable-diffusion-webui\modules\sd_disable_initialization.py", line 223, in <lambda>
module_load_state_dict = self.replace(torch.nn.Module, 'load_state_dict', lambda *args, **kwargs: load_state_dict(module_load_state_dict, *args, **kwargs))
File "C:\stable-diffusion-webui\modules\sd_disable_initialization.py", line 221, in load_state_dict
original(module, state_dict, strict=strict)
File "C:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 2152, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for SD3Inferencer:
size mismatch for model.diffusion_model.joint_blocks.0.x_block.adaLN_modulation.1.weight: copying a param with shape torch.Size([13824, 1536]) from checkpoint, the shape in current model is torch.Size([9216, 1536]).
size mismatch for model.diffusion_model.joint_blocks.0.x_block.adaLN_modulation.1.bias: copying a param with shape torch.Size([13824]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for model.diffusion_model.joint_blocks.1.x_block.adaLN_modulation.1.weight: copying a param with shape torch.Size([13824, 1536]) from checkpoint, the shape in current model is torch.Size([9216, 1536]).
size mismatch for model.diffusion_model.joint_blocks.1.x_block.adaLN_modulation.1.bias: copying a param with shape torch.Size([13824]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for model.diffusion_model.joint_blocks.2.x_block.adaLN_modulation.1.weight: copying a param with shape torch.Size([13824, 1536]) from checkpoint, the shape in current model is torch.Size([9216, 1536]).
size mismatch for model.diffusion_model.joint_blocks.2.x_block.adaLN_modulation.1.bias: copying a param with shape torch.Size([13824]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for model.diffusion_model.joint_blocks.3.x_block.adaLN_modulation.1.weight: copying a param with shape torch.Size([13824, 1536]) from checkpoint, the shape in current model is torch.Size([9216, 1536]).
size mismatch for model.diffusion_model.joint_blocks.3.x_block.adaLN_modulation.1.bias: copying a param with shape torch.Size([13824]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for model.diffusion_model.joint_blocks.4.x_block.adaLN_modulation.1.weight: copying a param with shape torch.Size([13824, 1536]) from checkpoint, the shape in current model is torch.Size([9216, 1536]).
size mismatch for model.diffusion_model.joint_blocks.4.x_block.adaLN_modulation.1.bias: copying a param with shape torch.Size([13824]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for model.diffusion_model.joint_blocks.5.x_block.adaLN_modulation.1.weight: copying a param with shape torch.Size([13824, 1536]) from checkpoint, the shape in current model is torch.Size([9216, 1536]).
size mismatch for model.diffusion_model.joint_blocks.5.x_block.adaLN_modulation.1.bias: copying a param with shape torch.Size([13824]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for model.diffusion_model.joint_blocks.6.x_block.adaLN_modulation.1.weight: copying a param with shape torch.Size([13824, 1536]) from checkpoint, the shape in current model is torch.Size([9216, 1536]).
size mismatch for model.diffusion_model.joint_blocks.6.x_block.adaLN_modulation.1.bias: copying a param with shape torch.Size([13824]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for model.diffusion_model.joint_blocks.7.x_block.adaLN_modulation.1.weight: copying a param with shape torch.Size([13824, 1536]) from checkpoint, the shape in current model is torch.Size([9216, 1536]).
size mismatch for model.diffusion_model.joint_blocks.7.x_block.adaLN_modulation.1.bias: copying a param with shape torch.Size([13824]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for model.diffusion_model.joint_blocks.8.x_block.adaLN_modulation.1.weight: copying a param with shape torch.Size([13824, 1536]) from checkpoint, the shape in current model is torch.Size([9216, 1536]).
size mismatch for model.diffusion_model.joint_blocks.8.x_block.adaLN_modulation.1.bias: copying a param with shape torch.Size([13824]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for model.diffusion_model.joint_blocks.9.x_block.adaLN_modulation.1.weight: copying a param with shape torch.Size([13824, 1536]) from checkpoint, the shape in current model is torch.Size([9216, 1536]).
size mismatch for model.diffusion_model.joint_blocks.9.x_block.adaLN_modulation.1.bias: copying a param with shape torch.Size([13824]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for model.diffusion_model.joint_blocks.10.x_block.adaLN_modulation.1.weight: copying a param with shape torch.Size([13824, 1536]) from checkpoint, the shape in current model is torch.Size([9216, 1536]).
size mismatch for model.diffusion_model.joint_blocks.10.x_block.adaLN_modulation.1.bias: copying a param with shape torch.Size([13824]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for model.diffusion_model.joint_blocks.11.x_block.adaLN_modulation.1.weight: copying a param with shape torch.Size([13824, 1536]) from checkpoint, the shape in current model is torch.Size([9216, 1536]).
size mismatch for model.diffusion_model.joint_blocks.11.x_block.adaLN_modulation.1.bias: copying a param with shape torch.Size([13824]) from checkpoint, the shape in current model is torch.Size([9216]).
size mismatch for model.diffusion_model.joint_blocks.12.x_block.adaLN_modulation.1.weight: copying a param with shape torch.Size([13824, 1536]) from checkpoint, the shape in current model is torch.Size([9216, 1536]).
size mismatch for model.diffusion_model.joint_blocks.12.x_block.adaLN_modulation.1.bias: copying a param with shape torch.Size([13824]) from checkpoint, the shape in current model is torch.Size([9216]).
```
### Additional information
The model is SD3.5 medium; the T5 text encoder was disabled
+--------------------------------------------------------------------------------------------+
| NVIDIA-SMI 566.03 Driver Version: 566.03 CUDA Version: 12.7 |
|-----------------------------------------+------------------------+-------------------------+
| GPU Name Driver-Model | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+=========================|
|   0  NVIDIA GeForce RTX 4060       WDDM |   00000000:01:00.0  On |                     N/A |
| 0% 47C P2 N/A / 120W | 6700MiB / 8188MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+------------------------+ | bug-report | low | Critical |
2,630,009,034 | godot | CollisionShape2D with top_level has inconsistent position for physics | ### Tested versions
- Reproductible in 4.4 dev3 and 4.3 stable
### System information
Godot v4.4.dev3 - Windows 10 - Vulkan (Forward+) - NVIDIA 3060 Ti
### Issue description
When dynamically adding a collision shape with `top_level` set and then placing it through its `global_position`, the effective position (used in the physics server) seems to differ, thus giving incorrect collisions.
### Steps to reproduce
There are 2 scenes in the project, one for each case. In the first one, there are 2 areas overlapping through 2 circle collision shapes. In the second scene one of the areas is left untouched, but the other one is created and placed dynamically at the exact same position as it was in the first scene. Important: note that it is added as a child of a "strangely" offset node.
You can run both scenes with "Visible Collision Shapes"; in both cases you should see the shapes overlap.
When the scenes are running you can left click to see the overlapping areas in the output console; in the first scene you should see something, while the second one returns an empty array.
### Minimal reproduction project (MRP)
[mrp-toplevelcollisionshape2d.zip](https://github.com/user-attachments/files/17604790/mrp-toplevelcollisionshape2d.zip)
| bug,topic:physics,topic:2d | low | Minor |
2,630,022,348 | flutter | [Feature Request] Create CustomScrollView.padding to use padding over SliverFillRemaining | ### Use case
SliverFillRemaining inside CustomScrollView only checks the space used before it, so a SliverPadding placed around it does not work as expected.
I would like a CustomScrollView **padding** property that changes the constraints of all sub-slivers, so that my SliverFillRemaining widget correctly computes the remaining space.
Example :
Take two lists built the same way: one is empty and must fit the entire screen height (the red one), while the other one is filled (the green one). Both must have padding, so the red list should not be scrollable.
In the following demo, the first list is scrollable because the SliverFillRemaining doesn't know that a bottom padding comes after it. The second list works as expected.
**Video:**
[Screen_recording_20241102_003439.webm](https://github.com/user-attachments/assets/0334abd1-53b5-4eb5-a858-18752fe46a7c)
**Code example:**
<details>
<summary> Expand here </summary>
```dart
import 'package:flutter/material.dart';
void main() => runApp(const MyApp());
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
scrollBehavior: const MaterialScrollBehavior().copyWith(
overscroll: false,
),
home: Scaffold(
appBar: AppBar(
title: const Text("Demo"),
),
body: const Row(
children: [
Expanded(
child: CustomScrollView(
slivers: [
SliverPadding(
padding: EdgeInsets.all(16),
sliver: HardCodedListWidget(id: 1),
),
],
),
),
Expanded(
child: CustomScrollView(
slivers: [
SliverPadding(
padding: EdgeInsets.all(16),
sliver: HardCodedListWidget(id: 2),
),
],
),
),
],
),
),
);
}
}
Future<List<Object>> fetch(int id) async => switch (id) {
2 => List.generate(10, (index) => index),
_ => [],
};
class HardCodedListWidget extends StatefulWidget {
const HardCodedListWidget({required this.id, super.key});
final int id;
@override
State<HardCodedListWidget> createState() => _HardCodedListWidgetState();
}
class _HardCodedListWidgetState extends State<HardCodedListWidget> {
List<Object>? items;
@override
void initState() {
fetch(widget.id).then(
(result) => setState(() => items = result),
);
super.initState();
}
@override
Widget build(BuildContext context) {
if (items == null) {
return const SliverToBoxAdapter(
child: SizedBox(),
);
}
if (items!.isEmpty) {
return SliverFillRemaining(
hasScrollBody: false,
child: Container(
color: Colors.red,
),
);
}
return SliverList.separated(
itemCount: items!.length,
separatorBuilder: (context, index) => const SizedBox(height: 16),
itemBuilder: (context, index) => Container(
color: Colors.green,
height: 100,
),
);
}
}
```
</details>
### Proposal
Create CustomScrollView.padding as for ListView, SingleChildScrollView and GridView. | c: new feature,framework,f: scrolling,c: proposal,P3,team-framework,triaged-framework | low | Major |
2,630,064,338 | godot | Drag and Drop doesn't work when using the pen (works with trackpad) | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Newest macOS / Mac mini M1
### Issue description
I use a trackpad (don't have a mouse connected) and never had any problems with Godot on that front.
A few days ago I bought a graphic tablet and it works as expected with all the apps I use. But in Godot, drag and dropping fails to activate.
For example, if I try to drag and drop a Texture (from File panel) to a TextureRect, when I use my trackpad it works no problem, but using the pen it just doesn't happen.
### Steps to reproduce
Using the pen:
Point to a (for example) bitmap file.
Press the pen, hold it and start dragging the file towards the destination (at this point, the bitmap file should get glued to the cursor, but it doesn't happen).
Release the pen at the point where the file is meant to be droppedโฆ
โฆthere will be no result.
### Minimal reproduction project (MRP)
https://github.com/user-attachments/assets/f7216794-6186-4f4b-bb39-8bb1b5092e36
In the video, I first drag and drop an image to the TextureRect (the circle around the mouse pointer indicates the button is held down). Then I try to do the same thing with the pen (again, the circle around the mouse, which is drawn by the OS, indicates that the press is registered as expected)…
โฆbut when I let go of the pen, nothing happens.
I noticed that the blue highlight stays in the File panel for the whole time if that matters. | bug,topic:input | low | Minor |
2,630,069,670 | svelte | Deprecate `{@debug }`...? | ### Describe the problem
Svelte 5 introduced the [`$inspect`](https://svelte.dev/docs/svelte/$inspect) rune, which has very similar functionality to the debug tag. This makes me question the necessity of the [`{@debug }`](https://svelte.dev/docs/svelte/@debug) tag.
### Describe the proposed solution
I feel like the debug tag _probably_ isn't needed, but the only thing that `{@debug }` has that `$inspect` doesn't is the ability to log on every state change. I think the best way to replace this would be to just make calling `$inspect` without any arguments have the same behavior.
The only issue I see with doing this is that Svelte 5 has already released and this could upset some people.
### Importance
nice to have | breaking change,needs discussion | low | Critical |
2,630,081,596 | electron | zoom level is reset on location.hash change | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.0.2
### What operating system(s) are you using?
macOS
### Operating System Version
macOS 15.1 (24B83)
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
_No response_
### Expected Behavior
Pressing cmd +/- to change the zoom level should always work regardless of `window.location.hash` changing.
### Actual Behavior
The internal zoom level is reset when changing `window.location.hash`, but the apparent zoom level is not. When the user next presses cmd +/-, the zoom level snaps back to 100% rather than adjusting incrementally.
For example, in this demo I am pressing **only** `cmd +` to increase the zoom level. I am not pressing `cmd -`, but the zoom decreases on its own.
https://github.com/user-attachments/assets/4bc4316b-dc63-4619-8a6d-9676a92821cb
### Testcase Gist URL
https://gist.github.com/jtbandes/15c1565745f0dcf61e1923bd9225379b
### Additional Information
This is a re-submission of https://github.com/electron/electron/issues/42333 with a testcase gist.
This is related to https://github.com/electron/electron/issues/40354 which was fixed in https://github.com/electron/electron/pull/40650 | platform/macOS,bug :beetle:,has-repro-gist,33-x-y | low | Critical |
2,630,101,724 | neovim | in gui, startup messages don't show up in command-line | ### Problem
Noticed this because `W325: Ignoring swapfile from Nvim...` didn't show up. You can see it if you manually do `:messages` after startup.
Issue #24705 is probably related. Using the test below, you can see
- `TUI`, `AFTER UIEnter` shows up in the command-line at startup.
- `goneovim` a defer of around 50ms is needed.
- `neovide` a defer of around 150ms is needed.
### Steps to reproduce
Put the following somewhere in initialization, and notice if you see the `AFTER UIEnter` message.
```lua
vim.api.nvim_create_autocmd("UIEnter", {
callback = function(ev)
vim.print("AFTER UIEnter")
return true
end
})
```
### Expected behavior
The message should show up in any `gui` on the initial screen.
### Nvim version (nvim -v)
https://github.com/neovim/neovim/commit/b34e137e43d359c8db4fb76028dea3b410842aff
### Vim (not Nvim) behaves the same?
NA
### Operating system/version
ubuntu
### Terminal name/version
gnome-termincal
### $TERM environment variable
xterm-256color
### Installation
make install | bug,startup,messages | low | Minor |
2,630,111,376 | transformers | Feature to configure `stop_strings` in `generation_config.json` or other config files | ### Feature request
The transformer library should offer a way to configure `stop_strings` and the tokenizer for it.
`model.generate()` can take a `stop_strings` argument to use custom stop tokens for generation, but a tokenizer object needs to be passed as well.
```
model.generate(...,
stop_strings=["<stop token>"],
tokenizer=tokenizer)
```
If we add `stop_strings` to `generation_config.json`, which can be loaded correctly [code](https://github.com/huggingface/transformers/blob/33868a057c02f0368ba63bd1edb746be38fe3d90/src/transformers/generation/configuration_utils.py#L144-L145), it will return the following error, as it requires a tokenizer object, which cannot be defined in the config file.
```
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained(model_path)
>>> model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")
>>> model.generate(**tokenizer("Hi how are you?", return_tensors="pt", return_token_type_ids=False))
...
ValueError: There are one or more stop strings, either in the arguments to `generate` or in the model's generation config, but we could not locate a tokenizer. When generating with stop strings, you must pass the model's tokenizer to the `tokenizer` argument of `generate`.
```
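Until something like this lands, one interim pattern is to bind the extra arguments once with `functools.partial` so that later call sites stay clean. The sketch below is only an illustration of the pattern — `generate` here is a stand-in function, not the real `model.generate`, and `"<stop>"` is a placeholder stop string:

```python
from functools import partial

# Stand-in for model.generate -- NOT the real transformers API. It only
# mimics the constraint that stop_strings must be accompanied by a tokenizer.
def generate(prompt, stop_strings=None, tokenizer=None):
    if stop_strings and tokenizer is None:
        raise ValueError("stop strings need a tokenizer")
    return prompt + " ... <stop>"

# Bind the extra arguments once; every later call is just generate(prompt).
generate = partial(generate, stop_strings=["<stop>"], tokenizer=object())
print(generate("Hi how are you?"))
```

With a real model, the same `partial` could be applied to `model.generate` right after loading the tokenizer, which avoids repeating the arguments at every call site.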
### Motivation
The user shouldn't be bothered by adding extra arguments to `generate()` or `pipeline`.
For example, [nvidia/Mistral-NeMo-Minitron-8B-Instruct](https://huggingface.co/nvidia/Mistral-NeMo-Minitron-8B-Instruct) needs to use `stop_strings` but so many people simply calls `generate()` without `stop_strings` and share complaints.
### Your contribution
I'd be happy to create a PR but need guidance for the design choice. | Feature request,Generation | low | Critical |
2,630,119,001 | neovim | mapping callback error report doesn't indicate where the error came from | ### Problem
The error report should mention the mapping that generated the error
(or at least that it was a mapping). Some better examples are below.
There's no indication that this error came from a mapping callback
```
E5108: Error executing lua: [string ":source (no file)"]:2: in F2()
stack traceback:
[C]: in function 'error'
[string ":source (no file)"]:2: in function 'F2'
[string ":source (no file)"]:5: in function 'F1'
[string ":source (no file)"]:8: in function 'handler'
[string ":source (no file)"]:15: in function <[string ":source (no file)"]:12>
```
### Steps to reproduce
Source the following and mouse click.
```lua
local function F2()
error("in F2()")
end
local function F1()
F2()
end
local function handler()
F1()
end
vim.keymap.set('n', '<LeftMouse>', function()
handler()
end)
```
### Expected behavior
The error report should mention the mapping that generated the error.
Here's examples for `autocommand` and `on_key` callbacks.
The first one looks like the gold standard.
```
Error detected while processing OptionSet Autocommands for "*":
Error executing lua callback: [string ":source (no file)"]:2: in F2()
stack traceback:
[C]: in function 'error'
[string ":source (no file)"]:2: in function 'F2'
[string ":source (no file)"]:5: in function 'F1'
[string ":source (no file)"]:8: in function 'handler'
[string ":source (no file)"]:13: in function <[string ":source (no file)"]:12>
```
```
Error executing vim.on_key() callbacks: vim/_editor.lua:0:
With ns_id 11: [string ":source (no file)"]:6: in F2()
stack traceback:
[C]: in function 'error'
[string ":source (no file)"]:6: in function 'F2'
[string ":source (no file)"]:9: in function 'F1'
[string ":source (no file)"]:12: in function <[string ":source (no file)"]:11>
[C]: in function 'xpcall'
vim/_editor.lua: in function <vim/_editor.lua:0>
```
### Nvim version (nvim -v)
https://github.com/neovim/neovim/commit/b34e137e43d359c8db4fb76028dea3b410842aff
### Vim (not Nvim) behaves the same?
NA
### Operating system/version
ubuntu
### Terminal name/version
gnome-terminal
### $TERM environment variable
xterm-256color
### Installation
make install | bug,lua,mappings | low | Critical |
2,630,120,367 | ui | [feat]: Fix sidebar error in astro | ### Feature description
TLDR;
Astro will not render if you do not make the following change:
Please use `import { type VariantProps, cva } from "class-variance-authority";` instead of `import { VariantProps, cva } from "class-variance-authority";`
### Affected component/components
Sidebar
### Additional Context
Error detail:
hook.js:608 [astro-island] Error hydrating /src/dashboard/main.tsx SyntaxError: The requested module '/node_modules/.vite/deps/class-variance-authority.js?v=0c4d54d3' does not provide an export named 'VariantProps' (at sidebar.tsx:3:10)
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Critical |
2,630,121,691 | yt-dlp | [NicoNico] Unable to fetch data: HTTP Error 400: Bad Request - geo-restriction not being detected by yt-dlp | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
USA, Japan
### Provide a description that is worded well enough to be understood
Whenever I try to download any video from NicoNico, it always results in "Unable to fetch data: HTTP Error 400: Bad Request (caused by <HTTPError 400: Bad Request>)", even if the video isn't blocked behind a login requirement.
I tried a few different videos but none download, opening them in my browser loads and plays them just fine.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--check-formats', '--no-warnings', '--cookies-from-browser', 'firefox', '--no-check-certificate', 'https://www.nicovideo.jp/watch/so44275623']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.10.22 from yt-dlp/yt-dlp [67adeb7ba] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.22621-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg N-117657-gfe21944656-20241026 (setts), ffprobe N-117657-gfe21944656-20241026
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
Extracting cookies from firefox
[debug] Extracting cookies from: "C:\Users\sunka\AppData\Roaming\Mozilla\Firefox\Profiles\0c2fe4fr.default-release\cookies.sqlite"
Extracted 2534 cookies from firefox
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.10.22 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.10.22 from yt-dlp/yt-dlp)
[niconico] Extracting URL: https://www.nicovideo.jp/watch/so44275623
[niconico] so44275623: Downloading webpage
[niconico] so44275623: Downloading API JSON
ERROR: [niconico] so44275623: Unable to fetch data: HTTP Error 400: Bad Request (caused by <HTTPError 400: Bad Request>)
File "yt_dlp\extractor\common.py", line 741, in extract
File "yt_dlp\extractor\niconico.py", line 463, in _real_extract
File "yt_dlp\extractor\common.py", line 1151, in download_content
File "yt_dlp\extractor\common.py", line 1111, in download_handle
File "yt_dlp\extractor\common.py", line 961, in _download_webpage_handle
File "yt_dlp\extractor\common.py", line 910, in _request_webpage
File "yt_dlp\extractor\common.py", line 897, in _request_webpage
File "yt_dlp\YoutubeDL.py", line 4165, in urlopen
File "yt_dlp\networking\common.py", line 117, in send
File "yt_dlp\networking\_helper.py", line 208, in wrapper
File "yt_dlp\networking\common.py", line 340, in send
File "yt_dlp\networking\_requests.py", line 365, in _send
yt_dlp.networking.exceptions.HTTPError: HTTP Error 400: Bad Request
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "yt_dlp\extractor\niconico.py", line 451, in _real_extract
File "yt_dlp\extractor\common.py", line 961, in _download_webpage_handle
File "yt_dlp\extractor\common.py", line 910, in _request_webpage
yt_dlp.utils.ExtractorError: Unable to download webpage: HTTP Error 400: Bad Request (caused by <HTTPError 400: Bad Request>)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "yt_dlp\extractor\common.py", line 897, in _request_webpage
File "yt_dlp\YoutubeDL.py", line 4165, in urlopen
File "yt_dlp\networking\common.py", line 117, in send
File "yt_dlp\networking\_helper.py", line 208, in wrapper
File "yt_dlp\networking\common.py", line 340, in send
File "yt_dlp\networking\_requests.py", line 365, in _send
yt_dlp.networking.exceptions.HTTPError: HTTP Error 400: Bad Request
```
| geo-blocked,site-bug | low | Critical |
2,630,130,339 | flutter | Web: some config values ignored when supplying onEntryPointLoaded to _flutter.loader.load() | ### Steps to reproduce
1. Create a Flutter web project
2. Follow the [instructions for embedding in a web page](https://docs.flutter.dev/platform-integration/web/embedding-flutter-web#enable-multi-view-mode)
* This involves a custom `onEntryPointLoaded` function
3. Follow the [customization instructions](https://docs.flutter.dev/platform-integration/web/initialization#the-_flutter-loader-load-api) to set a custom `assetBase` and `entryPointBaseUrl`
### Expected results
The app is successfully hosted at my custom path.
### Actual results
The following is logged in the browser console.
```
GET http://localhost:4321/assets/FontManifest.json 404 (Not Found)
```
Assets aren't respecting the `assetBase` path I supplied.
This is because certain values in the `config` have to be manually passed to the `initializeEngine` call when using `onEntryPointLoaded`. If you don't use `onEntryPointLoaded` then all of the `config` values get passed automatically.
This is a hard API to use right - I initially thought it was a documentation bug ([#11341](https://github.com/flutter/website/issues/11341)).
`entryPointBaseUrl` ***does*** have to be passed in the `config` to `_flutter.loader.load()`. But `assetBase` has to be passed directly to `initializeEngine`.
I'm not sure how to make it right in a backward compatible way right now though.
### Code sample
<details open><summary>Code sample</summary>
This bug involves hosting a Flutter app at an arbitrary path in a web site. I don't know how to simplify that for a quick and easy repro here.
If you follow the steps above, you'll wind up with something like this in your `flutter_bootstrap.js`.
```js
{{flutter_js}}
{{flutter_build_config}}
_flutter.loader.load({
config: {
entryPointBaseUrl: '/subpath/',
assetBase: '/subpath/',
},
onEntrypointLoaded: async function onEntrypointLoaded(engineInitializer) {
let engine = await engineInitializer.initializeEngine({
multiViewEnabled: true, // Enables embedded mode.
});
let app = await engine.runApp();
// Make this `app` object available to your JS app.
app.addView({
hostElement: document.querySelector('#flutter-element'),
});
}
});
```
Your `main.dart` should look like this:
```dart
import 'package:flutter/material.dart';
void main() {
runWidget(
const WebGateway(child: MainApp()
));
}
class MainApp extends StatelessWidget {
const MainApp({super.key});
@override
Widget build(BuildContext context) {
return const MaterialApp(
home: Scaffold(
body: Center(
child: Text('Hello Astro!'),
),
),
);
}
}
class WebGateway extends StatefulWidget {
final Widget child;
const WebGateway({super.key, required this.child});
@override
State<WebGateway> createState() {
return _WebGatewayState();
}
}
class _WebGatewayState extends State<WebGateway> with WidgetsBindingObserver {
late Widget child;
@override
void initState() {
super.initState();
WidgetsBinding.instance.addObserver(this);
_updateView();
}
@override
void didUpdateWidget(WebGateway oldWidget) {
super.didUpdateWidget(oldWidget);
_updateView();
}
@override
void didChangeMetrics() {
_updateView();
}
void _updateView() {
final flutterView = WidgetsBinding.instance.platformDispatcher.views.single;
setState(() {
child = View(view: flutterView, child: widget.child);
});
}
@override
void dispose() {
WidgetsBinding.instance.removeObserver(this);
super.dispose();
}
@override
Widget build(BuildContext context) {
return child;
}
}
```
</details>
### Screenshots or Video
_No response_
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[โ] Flutter (Channel stable, 3.24.4, on macOS 14.6.1 23G93 darwin-arm64, locale en-US)
โข Flutter version 3.24.4 on channel stable at /Users/christian/Development/flutter
โข Upstream repository https://github.com/flutter/flutter.git
โข Framework revision 603104015d (8 days ago), 2024-10-24 08:01:25 -0700
โข Engine revision db49896cf2
โข Dart version 3.5.4
โข DevTools version 2.37.3
[โ] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
โข Android SDK at /Users/christian/Library/Android/sdk
โข Platform android-34, build-tools 34.0.0
โข Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
โข Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
โข All Android licenses accepted.
[โ] Xcode - develop for iOS and macOS (Xcode 16.1)
โข Xcode at /Applications/Xcode.app/Contents/Developer
โข Build 16B40
โข CocoaPods version 1.15.2
[โ] Chrome - develop for the web
โข Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[โ] Android Studio (version 2024.1)
โข Android Studio at /Applications/Android Studio.app/Contents
โข Flutter plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/9212-flutter
โข Dart plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/6351-dart
โข Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[โ] IntelliJ IDEA Community Edition (version 2024.2.1)
โข IntelliJ at /Applications/IntelliJ IDEA CE.app
โข Flutter plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/9212-flutter
โข Dart plugin can be installed from:
๐จ https://plugins.jetbrains.com/plugin/6351-dart
[โ] VS Code (version 1.95.0)
โข VS Code at /Applications/Visual Studio Code.app/Contents
โข Flutter extension version 3.98.0
[โ] Connected device (3 available)
โข macOS (desktop) โข macos โข darwin-arm64 โข macOS 14.6.1 23G93
darwin-arm64
โข Mac Designed for iPad (desktop) โข mac-designed-for-ipad โข darwin โข macOS 14.6.1 23G93
darwin-arm64
โข Chrome (web) โข chrome โข web-javascript โข Google Chrome
130.0.6723.92
[โ] Network resources
โข All expected network resources are available.
โข No issues found!
```
</details>
| a: assets,platform-web,has reproducible steps,P2,team-web,triaged-web,found in release: 3.24,found in release: 3.27 | low | Critical |
2,630,133,119 | electron | [Bug] Behaviour changed for middle-click window title in gnome 46 | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
30.5.1 (the version used in the latest vscode)
### What operating system(s) are you using?
Ubuntu
### Operating System Version
ubuntu 24.04
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
I use the mouse middle click (on a window's titlebar) to lower the window - so I can easily cycle through open windows. This setting is set via the shell, or the gnome tweaks tool.
### Actual Behavior
That changed when upgrading to GNOME 46. It doesn't honor that GNOME setting anymore. So a middle click on the titlebar does nothing.
I've experienced this bug in a number of electron-based apps, including vscode.
### Testcase Gist URL
_No response_
### Additional Information
_No response_ | platform/linux,bug :beetle:,blocked/upstream โ | low | Critical |
2,630,186,813 | neovim | foldtextresult() is inconsistent with the new 'foldtext' set to empty string feature | ### Problem
When 'foldtext' is set to an empty string per PR #20750, calling the `foldtextresult()` function with the line number of a closed fold returns something like "+-- 4 lines folded". I wonder if it would be more consistent with its documentation to just return the actual buffer line's text--basically like `getline()`.
### Steps to reproduce
1. nvim --clean
2. `:set foldtext=`
3. paste some text in the buffer, e.g.
> START FOLD AFTER THIS LINE
> Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
> Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
> Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
> Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
> END FOLD BEFORE THIS LINE
5. fold some of the text manually ('foldmethod' doesn't matter)
6. `:echom foldtextresult(2)`
7. given the above text, result will be "+-- 4 lines folded"
### Expected behavior
Return a string of the text being displayed for the closed fold--essentially what `getline()` would give you for that line.
### Nvim version (nvim -v)
v0.11.0-dev-1075+gb34e137e43
### Vim (not Nvim) behaves the same?
nvim only feature
### Operating system/version
Windows 11
### Terminal name/version
Windows Terminal
### $TERM environment variable
NA
### Installation
Download and extract from releases | bug,folds | low | Minor |
2,630,195,605 | godot | Crash when inserting rotation keyframe | ### Tested versions
Godot Engine v4.3.stable.flathub
### System information
Godot v4.3.stable (77dcf97d8) - Freedesktop SDK 24.08 (Flatpak runtime) - X11 - Vulkan (Forward+) - dedicated AMD Radeon RX 6600 (RADV NAVI23) - 12th Gen Intel(R) Core(TM) i5-12600K (16 Threads)
### Issue description
A crash occurs when attempting to insert a rotation keyframe on a bone in this hierarchy:
spine -> spine.001 -> spine.002 -> spine.003 -> upper_arm.L
the skeleton and model mesh are imported from an FBX i downloaded off of itch.io
here: https://dblob-ua.itch.io/low-poly-characterchar-ronin-01
1. download the file and import it to godot
2. open it as a new inherited scene
3. place the animation track as a child of Skeleton3D
4. create a new animation
5. add upper_arm.L and upper_arm.R rotation to the track
6. insert a keyframe
godot will crash.
### Minimal reproduction project (MRP)
[keyframe_crash.zip](https://github.com/user-attachments/files/17605539/keyframe_crash.zip)
| bug,topic:editor,crash,topic:animation | low | Critical |