| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,542,140,836 | rust | compiletest: warn/error on redundant `check-fail` directives | > check-fail here is redundant
_Originally posted by @compiler-errors in https://github.com/rust-lang/rust/pull/130718#discussion_r1770625658_
Some test suites have a *default* test behavior, like `//@ check-fail`, in which case specifying it explicitly in a test is redundant and useless noise. When compiletest directive handling is reworked, we should warn or error on redundant directives like these, and also explain *why* each is redundant, e.g. "ui test mode is check-fail by default".
Remark: this check should not be added before reworking how compiletest directives are handled, as it affects more than one test suite or directive. | C-enhancement,T-bootstrap,E-medium,A-compiletest | low | Critical |
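The check described above could be sketched as follows (the suite-default table and function name here are hypothetical illustrations, not compiletest's actual data model):

```python
# Hypothetical per-suite default directives; compiletest's real
# data model may differ.
SUITE_DEFAULTS = {
    "ui": {"check-fail"},
}

def redundant_directives(suite, directives):
    """Return (directive, reason) pairs for directives that merely
    restate the suite's default behavior."""
    defaults = SUITE_DEFAULTS.get(suite, set())
    return [
        (d, f"{suite} test mode is {d} by default")
        for d in directives
        if d in defaults
    ]

print(redundant_directives("ui", ["check-fail", "edition:2021"]))
# -> [('check-fail', 'ui test mode is check-fail by default')]
```

The reason string is carried alongside the directive so the eventual warning can say why the directive is redundant, not just that it is.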
2,542,203,321 | vscode | [html] VS Code detects wrong scope for script with type=module | Type: <b>Bug</b>
1. Create a simple HTML file:
<!DOCTYPE html>
<html>
<head>
<script>let a = 10;</script>
<script type="module">let a = 10;</script>
</head>
</html>
2. Observe the incorrect "Problems" message: Cannot redeclare block-scoped variable 'a'.


### Expected behavior
No problems, because a script with `type="module"` has its own scope, so in this case there is no redeclared variable.
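The expected scoping rule can be sketched as follows (a hypothetical model of the checker, not VS Code's actual implementation): classic scripts share one global scope, while each `type="module"` script gets its own.

```python
def redeclaration_errors(scripts):
    """scripts: list of (kind, declared_names) where kind is
    'classic' or 'module'. Classic scripts share one scope;
    each module script is its own scope."""
    errors = []
    shared = set()
    for kind, names in scripts:
        scope = shared if kind == "classic" else set()
        for name in names:
            if name in scope:
                errors.append(f"Cannot redeclare block-scoped variable '{name}'.")
            scope.add(name)
    return errors

# One classic and one module script both declaring `a`: no conflict.
print(redeclaration_errors([("classic", ["a"]), ("module", ["a"])]))  # -> []
# Two classic scripts declaring `a`: conflict.
print(redeclaration_errors([("classic", ["a"]), ("classic", ["a"])]))
```

Under this model the reported HTML file should produce no diagnostics, since exactly one of the two scripts is a module.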
VS Code version: Code 1.93.1 (38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40, 2024-09-11T17:20:05.685Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD FX(tm)-8350 Eight-Core Processor (8 x 3991)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.97GB (3.94GB free)|
|Process Argv|--disable-extensions --crash-reporter-id d754423a-bca8-4bd8-88a2-702aab8631b1|
|Screen Reader|no|
|VM|0%|
</details>
Extensions disabled
<details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythongtdpath:30769146
welcomedialogc:30910334
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
accentitlementst:30995554
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
bdiig495:31013172
a69g1124:31058053
dvdeprecation:31068756
dwnewjupyter:31046869
impr_priority:31102340
nativerepl1:31139838
refactort:31108082
pythonrstrctxt:31112756
flighttreat:31134774
wkspc-onlycs-t:31132770
wkspc-ranged-t:31125599
pme_test_t:31118333
fje88620:31121564
```
</details>
<!-- generated by issue reporter --> | bug,html | low | Critical |
2,542,279,009 | flutter | [two_dimensional_scrollables] : Proposal to add `shrinkWrap` to `TableView` | ### Use case
[two_dimensional_scrollables]
### Proposal
Providing a `shrinkWrap` parameter like `ListView`'s would let me put the `TableView` inside a `Column`. I would be very grateful. | c: new feature,framework,package,c: proposal,P3,team-framework,triaged-framework,p: two_dimensional_scrollables | low | Minor |
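The requested `shrinkWrap` semantics can be sketched abstractly (illustrative only, not the actual two_dimensional_scrollables layout code): a shrink-wrapped scrollable sizes itself to its content, which is what lets it live inside an unbounded `Column`.

```python
def scrollable_height(content_extent, viewport_extent, shrink_wrap):
    """With shrink_wrap the widget sizes itself to its content
    (clamped to the viewport); otherwise it must fill the viewport,
    which is unbounded inside a Column."""
    if shrink_wrap:
        return min(content_extent, viewport_extent)
    return viewport_extent

print(scrollable_height(120, float("inf"), shrink_wrap=True))   # -> 120
```

Without `shrinkWrap`, a viewport extent of infinity (as inside a `Column`) gives the layout no height to work with, which is why the request matters.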
2,542,295,798 | godot | [.Net / GDScript Interop] Cannot Call GDScript-Lambda-Backed `Callable` in C# | ### Tested versions
v4.3.stable.mono.official [77dcf97d8]
### System information
Godot v4.3.stable.mono - Windows 10.0.17763 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4060 Ti (NVIDIA; 32.0.15.6109) - AMD Ryzen 9 5900X 12-Core Processor (24 Threads)
### Issue description
This is a niche use case, but we should document it if it's not supported.
In a C# script, when trying to `Call()` a `Callable` that is backed by a GDScript lambda expression, Godot produces the following error instead of actually making the call.
```
E 0:00:00:0894 main.gd:4 @ _ready(): Attempt to call callable 'null::null' on a null instance.
<C# Source> /root/godot/modules/mono/glue/GodotSharp/GodotSharp/Core/NativeInterop/ExceptionUtils.cs:160 @ void Godot.NativeInterop.ExceptionUtils.DebugCheckCallError(Godot.NativeInterop.godot_callable&, Godot.NativeInterop.godot_variant**, int, Godot.NativeInterop.godot_variant_call_error)
<Stack Trace> main.gd:4 @ _ready()
```
### Steps to reproduce
1. Create a C# Godot Project.
2. Create `main.gd`.
3. Create a scene, attach the `main.gd` to the root node, and save it to a file.
4. Create `Helper.cs`.
5. Run the project.
### Minimal reproduction project (MRP)
**main.gd**
```gdscript
extends Node
func _ready():
    Helper.new().CallCallable(func(): print("Hello World"));
```
**Helper.cs**
```csharp
using Godot;
[GlobalClass]
public partial class Helper : Node
{
    public void CallCallable(Callable callable) => callable.Call();
}
``` | bug,topic:core,topic:gdscript,needs testing,topic:dotnet | low | Critical |
2,542,299,878 | tensorflow | Build Failure on AWS Graviton3 with Custom oneDNN (oneDNN-3.6-rc): Invalid Preprocessing Directives in dnnl_config.h | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
tf v2.17.0
### Custom code
No
### OS platform and distribution
Ubuntu 22.04.2 LTS
### Mobile device
_No response_
### Python version
3.10.12
### Bazel version
6.5.0
### GCC/compiler version
gcc version 11.4.0
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
I am unable to build TensorFlow with the latest oneDNN or custom oneDNN (oneDNN-3.6-rc) on AWS Graviton3 (aarch64) CPU. The build process fails with several compilation errors related to invalid preprocessing directives in the dnnl_config.h file.
I expected the build to complete successfully with the custom oneDNN settings, allowing TensorFlow to run efficiently on the AWS Graviton3 (aarch64) architecture.
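The `#cmakedefine` / `#cmakedefine01` directives in the errors are CMake template placeholders: `configure_file()` normally rewrites them into plain `#define`/`#undef` lines before the compiler ever sees the header, so their presence suggests `dnnl_config.h` is being consumed as an unexpanded `.in` template. A rough Python sketch of that expansion (an illustration of CMake's behavior, not the actual TensorFlow/Bazel fix; CMake additionally substitutes `@VAR@`/`${VAR}` values, which is omitted here):

```python
import re

def expand_cmakedefine(text, defined):
    """Rewrite #cmakedefine / #cmakedefine01 lines the way CMake's
    configure_file() does, so the C preprocessor never sees them."""
    out = []
    for line in text.splitlines():
        m = re.match(r"#cmakedefine01 (\w+)", line)
        if m:
            out.append(f"#define {m.group(1)} {1 if m.group(1) in defined else 0}")
            continue
        m = re.match(r"#cmakedefine (\w+)(.*)", line)
        if m:
            name, rest = m.groups()
            if name in defined:
                out.append(f"#define {name}{rest}")
            else:
                out.append(f"/* #undef {name} */")
            continue
        out.append(line)
    return "\n".join(out)

src = "#cmakedefine DNNL_SYCL_GENERIC\n#cmakedefine01 BUILD_SDPA"
print(expand_cmakedefine(src, defined=set()))
# -> /* #undef DNNL_SYCL_GENERIC */
#    #define BUILD_SDPA 0
```

This matches the diff above, where `mkldnn_acl.BUILD` runs `expand_template` for `dnnl_version.h` but nothing appears to process `dnnl_config.h` for the newer oneDNN layout.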
### Standalone code to reproduce the issue
```shell
Clone the TensorFlow repository:
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
git checkout v2.17.0
Modify the relevant files as follows:
Update oneDNN version in tensorflow/workspace2.bzl.
Adjust mkldnn_acl.BUILD for versioning.
root@8c5bdc6a1bd7:/workdir/tensorflow# git diff
diff --git a/tensorflow/workspace2.bzl b/tensorflow/workspace2.bzl
index fd29dff05f3..7ed30157970 100644
--- a/tensorflow/workspace2.bzl
+++ b/tensorflow/workspace2.bzl
@@ -205,36 +205,24 @@ def _tf_repositories():
tf_http_archive(
name = "onednn",
build_file = "//third_party/mkl_dnn:mkldnn_v1.BUILD",
- sha256 = "5131ac559a13daa6e2784d20ab24e4607e55aa6da973518086326a647d389425",
- strip_prefix = "oneDNN-3.4.2",
- urls = tf_mirror_urls("https://github.com/oneapi-src/oneDNN/archive/refs/tags/v3.4.2.tar.gz"),
+ sha256 = "568428621a4912dd2159eaee97f646259c655acc271dc57bd75478daa9672ea5",
+ strip_prefix = "oneDNN-3.6-rc",
+ urls = tf_mirror_urls("https://github.com/oneapi-src/oneDNN/archive/refs/tags/v3.6-rc.tar.gz"),
)
tf_http_archive(
name = "mkl_dnn_acl_compatible",
build_file = "//third_party/mkl_dnn:mkldnn_acl.BUILD",
- patch_file = [
- "//third_party/mkl_dnn:onednn_acl_threadcap.patch",
- "//third_party/mkl_dnn:onednn_acl_reorder.patch",
- "//third_party/mkl_dnn:onednn_acl_thread_local_scheduler.patch",
- "//third_party/mkl_dnn:onednn_acl_fp32_bf16_reorder.patch",
- "//third_party/mkl_dnn:onednn_acl_bf16_capability_detection_for_ubuntu20.04.patch",
- "//third_party/mkl_dnn:onednn_acl_indirect_conv.patch",
- ],
- sha256 = "2f76b407ef8893cca71340f88cd800019a1f14f8ac1bbdbb89a84be1370b52e3",
- strip_prefix = "oneDNN-3.2.1",
- urls = tf_mirror_urls("https://github.com/oneapi-src/oneDNN/archive/refs/tags/v3.2.1.tar.gz"),
+ sha256 = "568428621a4912dd2159eaee97f646259c655acc271dc57bd75478daa9672ea5",
+ strip_prefix = "oneDNN-3.6-rc",
+ urls = tf_mirror_urls("https://github.com/oneapi-src/oneDNN/archive/refs/tags/v3.6-rc.tar.gz"),
)
tf_http_archive(
name = "compute_library",
- patch_file = [
- "//third_party/compute_library:compute_library.patch",
- "//third_party/compute_library:acl_thread_local_scheduler.patch",
- ],
- sha256 = "c4ca329a78da380163b2d86e91ba728349b6f0ee97d66e260a694ef37f0b0d93",
- strip_prefix = "ComputeLibrary-23.05.1",
- urls = tf_mirror_urls("https://github.com/ARM-software/ComputeLibrary/archive/v23.05.1.tar.gz"),
+ sha256 = "e7e1b554129748c3aadf1a85de48d332afbef7c6c0c3c5be77a1cfb58311c57b",
+ strip_prefix = "ComputeLibrary-24.08.1",
+ urls = tf_mirror_urls("https://github.com/ARM-software/ComputeLibrary/archive/refs/tags/v24.08.1.tar.gz")
)
tf_http_archive(
diff --git a/third_party/mkl_dnn/mkldnn_acl.BUILD b/third_party/mkl_dnn/mkldnn_acl.BUILD
index d67b62a98d2..083b3d7a627 100644
--- a/third_party/mkl_dnn/mkldnn_acl.BUILD
+++ b/third_party/mkl_dnn/mkldnn_acl.BUILD
@@ -128,8 +128,8 @@ expand_template(
out = "include/oneapi/dnnl/dnnl_version.h",
substitutions = {
"@DNNL_VERSION_MAJOR@": "3",
- "@DNNL_VERSION_MINOR@": "2",
- "@DNNL_VERSION_PATCH@": "1",
+ "@DNNL_VERSION_MINOR@": "6",
+ "@DNNL_VERSION_PATCH@": "0",
"@DNNL_VERSION_HASH@": "N/A",
},
template = "include/oneapi/dnnl/dnnl_version.h.in",
(END)
Attempt to build TensorFlow:
taskset -c 16-32 bazel build //tensorflow/tools/pip_package:wheel --repo_env=WHEEL_NAME=tensorflow_cpu --config=mkl_aarch64_threadpool --jobs=33 --local_cpu_resources=16 --verbose_failures -s
```
### Relevant log output
```shell
ERROR: /root/.cache/bazel/_bazel_root/58adfe0c0193ce259b2b32549c3d3a4f/external/mkl_dnn_acl_compatible/BUILD.bazel:138:11: Compiling src/common/batch_normalization.cpp failed: (Exit 1): gcc failed: error executing command (from target @mkl_dnn_acl_compatible//:mkl_dnn_acl)
(cd /root/.cache/bazel/_bazel_root/58adfe0c0193ce259b2b32549c3d3a4f/execroot/org_tensorflow && \
exec env - \
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
PWD=/proc/self/cwd \
PYTHON_BIN_PATH=/usr/bin/python3 \
PYTHON_LIB_PATH=/usr/lib/python3/dist-packages \
TF2_BEHAVIOR=1 \
/usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections -fdata-sections '-std=c++14' -MD -MF bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible/_objs/mkl_dnn_acl/batch_normalization.pic.d '-frandom-seed=bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible/_objs/mkl_dnn_acl/batch_normalization.pic.o' -fPIC -DENABLE_NEON -DARM_COMPUTE_CPU_ENABLED -DARM_COMPUTE_ENABLE_NEON -DARM_COMPUTE_ENABLE_I8MM -DENABLE_FP32_KERNELS -DENABLE_QASYMM8_KERNELS -DENABLE_QASYMM8_SIGNED_KERNELS -DENABLE_QSYMM16_KERNELS -DENABLE_INTEGER_KERNELS -DENABLE_NHWC_KERNELS -DENABLE_NCHW_KERNELS -DARM_COMPUTE_GRAPH_ENABLED -DARM_COMPUTE_ENABLE_SVEF32MM -DARM_COMPUTE_ENABLE_FIXED_FORMAT_KERNELS -D_GLIBCXX_USE_NANOSLEEP -DARM_COMPUTE_OPENMP_SCHEDULER '-DDNNL_AARCH64_USE_ACL=1' '-DBAZEL_CURRENT_REPOSITORY="mkl_dnn_acl_compatible"' -iquote external/mkl_dnn_acl_compatible -iquote bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible -iquote external/compute_library -iquote bazel-out/aarch64-opt/bin/external/compute_library -Ibazel-out/aarch64-opt/bin/external/compute_library/include/_virtual_includes/include -isystem external/mkl_dnn_acl_compatible/include -isystem bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible/include -isystem external/mkl_dnn_acl_compatible/src -isystem bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible/src -isystem external/mkl_dnn_acl_compatible/src/common -isystem bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible/src/common -isystem external/mkl_dnn_acl_compatible/src/cpu -isystem bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible/src/cpu -isystem external/mkl_dnn_acl_compatible/src/cpu/aarch64/xbyak_aarch64/src -isystem bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible/src/cpu/aarch64/xbyak_aarch64/src -isystem external/mkl_dnn_acl_compatible/src/cpu/aarch64/xbyak_aarch64/xbyak_aarch64 
-isystem bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible/src/cpu/aarch64/xbyak_aarch64/xbyak_aarch64 -isystem external/mkl_dnn_acl_compatible/src/cpu/gemm -isystem bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible/src/cpu/gemm -isystem external/compute_library/arm_compute/runtime -isystem bazel-out/aarch64-opt/bin/external/compute_library/arm_compute/runtime -isystem external/compute_library/src/core/NEON/kernels/arm_gemm -isystem bazel-out/aarch64-opt/bin/external/compute_library/src/core/NEON/kernels/arm_gemm -isystem external/compute_library/src/core/NEON/kernels/assembly -isystem bazel-out/aarch64-opt/bin/external/compute_library/src/core/NEON/kernels/assembly -isystem external/compute_library/src/core/NEON/kernels/convolution/common -isystem bazel-out/aarch64-opt/bin/external/compute_library/src/core/NEON/kernels/convolution/common -isystem external/compute_library/src/core/NEON/kernels/convolution/winograd -isystem bazel-out/aarch64-opt/bin/external/compute_library/src/core/NEON/kernels/convolution/winograd -isystem external/compute_library/src/core/cpu/kernels/assembly -isystem bazel-out/aarch64-opt/bin/external/compute_library/src/core/cpu/kernels/assembly -isystem external/compute_library/src/cpu/kernels/assembly -isystem bazel-out/aarch64-opt/bin/external/compute_library/src/cpu/kernels/assembly -isystem external/compute_library/src/core/NEON/kernels/arm_conv -isystem bazel-out/aarch64-opt/bin/external/compute_library/src/core/NEON/kernels/arm_conv -Wno-all -Wno-extra -Wno-deprecated -Wno-deprecated-declarations -Wno-ignored-attributes -Wno-array-bounds -Wunused-result '-Werror=unused-result' -Wswitch '-Werror=switch' '-Wno-error=unused-but-set-variable' -DAUTOLOAD_DYNAMIC_KERNELS '-std=c++17' -fopenmp-simd -fexceptions -UUSE_MKL -UUSE_CBLAS -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c 
external/mkl_dnn_acl_compatible/src/common/batch_normalization.cpp -o bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible/_objs/mkl_dnn_acl/batch_normalization.pic.o)
# Configuration: 286713d3e237c869e8689debb2d6b060b16fc87de4d5e6ded144ba62ae251131
# Execution platform: @local_execution_config_platform//:platform
In file included from external/mkl_dnn_acl_compatible/include/oneapi/dnnl/dnnl_common_types.h:31,
from external/mkl_dnn_acl_compatible/include/oneapi/dnnl/dnnl_common.h:23,
from external/mkl_dnn_acl_compatible/include/oneapi/dnnl/dnnl.h:23,
from external/mkl_dnn_acl_compatible/src/common/batch_normalization.cpp:18:
bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible/include/oneapi/dnnl/dnnl_config.h:112:2: error: invalid preprocessing directive #cmakedefine
112 | #cmakedefine DNNL_GPU_VENDOR DNNL_VENDOR_${DNNL_GPU_VENDOR}
| ^~~~~~~~~~~
bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible/include/oneapi/dnnl/dnnl_config.h:158:2: error: invalid preprocessing directive #cmakedefine
158 | #cmakedefine DNNL_SYCL_GENERIC
| ^~~~~~~~~~~
bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible/include/oneapi/dnnl/dnnl_config.h:181:2: error: invalid preprocessing directive #cmakedefine
181 | #cmakedefine DNNL_DISABLE_GPU_REF_KERNELS
| ^~~~~~~~~~~
bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible/include/oneapi/dnnl/dnnl_config.h:195:2: error: invalid preprocessing directive #cmakedefine01
195 | #cmakedefine01 BUILD_GROUP_NORMALIZATION
| ^~~~~~~~~~~~~
bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible/include/oneapi/dnnl/dnnl_config.h:206:2: error: invalid preprocessing directive #cmakedefine01
206 | #cmakedefine01 BUILD_SDPA
| ^~~~~~~~~~~~~
bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible/include/oneapi/dnnl/dnnl_config.h:224:2: error: invalid preprocessing directive #cmakedefine01
224 | #cmakedefine01 BUILD_XE2
| ^~~~~~~~~~~~~
bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible/include/oneapi/dnnl/dnnl_config.h:226:2: error: invalid preprocessing directive #cmakedefine01
226 | #cmakedefine01 BUILD_GEMM_KERNELS_ALL
| ^~~~~~~~~~~~~
bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible/include/oneapi/dnnl/dnnl_config.h:227:2: error: invalid preprocessing directive #cmakedefine01
227 | #cmakedefine01 BUILD_GEMM_KERNELS_NONE
| ^~~~~~~~~~~~~
bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible/include/oneapi/dnnl/dnnl_config.h:228:2: error: invalid preprocessing directive #cmakedefine01
228 | #cmakedefine01 BUILD_GEMM_SSE41
| ^~~~~~~~~~~~~
bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible/include/oneapi/dnnl/dnnl_config.h:229:2: error: invalid preprocessing directive #cmakedefine01
229 | #cmakedefine01 BUILD_GEMM_AVX2
| ^~~~~~~~~~~~~
bazel-out/aarch64-opt/bin/external/mkl_dnn_acl_compatible/include/oneapi/dnnl/dnnl_config.h:230:2: error: invalid preprocessing directive #cmakedefine01
230 | #cmakedefine01 BUILD_GEMM_AVX512
| ^~~~~~~~~~~~~
SUBCOMMAND: # @boringssl//:crypto [action 'Compiling src/crypto/pem/pem_lib.c [for tool]', configuration: 6c76bd453e22b21125a2028c36fb69b9de59167ea2a1dca88d8da721e8db0553, execution platform: @local_execution_config_platform//:platform]
(cd /root/.cache/bazel/_bazel_root/58adfe0c0193ce259b2b32549c3d3a4f/execroot/org_tensorflow && \
exec env - \
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
PWD=/proc/self/cwd \
/usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections -fdata-sections -MD -MF bazel-out/aarch64-opt-exec-50AE0418/bin/external/boringssl/_objs/crypto/pem_lib.pic.d '-frandom-seed=bazel-out/aarch64-opt-exec-50AE0418/bin/external/boringssl/_objs/crypto/pem_lib.pic.o' -fPIC '-DBAZEL_CURRENT_REPOSITORY="boringssl"' -iquote external/boringssl -iquote bazel-out/aarch64-opt-exec-50AE0418/bin/external/boringssl -isystem external/boringssl/src/include -isystem bazel-out/aarch64-opt-exec-50AE0418/bin/external/boringssl/src/include -g0 -w -DBORINGSSL_IMPLEMENTATION -Wa,--noexecstack -Wall -Werror '-Wformat=2' -Wsign-compare -Wmissing-field-initializers -Wwrite-strings -Wshadow -fno-common '-D_XOPEN_SOURCE=700' '-std=c11' -Wmissing-prototypes -Wold-style-definition -Wstrict-prototypes -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c external/boringssl/src/crypto/pem/pem_lib.c -o bazel-out/aarch64-opt-exec-50AE0418/bin/external/boringssl/_objs/crypto/pem_lib.pic.o)
# Configuration: 6c76bd453e22b21125a2028c36fb69b9de59167ea2a1dca88d8da721e8db0553
# Execution platform: @local_execution_config_platform//:platform
Target //tensorflow/tools/pip_package:wheel failed to build
INFO: Elapsed time: 165.542s, Critical Path: 28.45s
INFO: 5206 processes: 1083 internal, 4123 local.
FAILED: Build did NOT complete successfully
```
| stat:awaiting tensorflower,type:build/install,comp:mkl,subtype: ubuntu/linux,2.17 | medium | Critical |
2,542,319,650 | go | crypto: drop pre-AVX2 amd64 assembly | AVX2 was introduced in 2013 by the Haswell architecture, and was supported by all server models and most desktop models. The previous architectures, Ivy Bridge and Sandy Bridge, were discontinued in 2015.
We carry at least four assembly-optimized implementations specifically for pre-AVX2 amd64: crypto/sha1, crypto/sha256, crypto/sha512, and x/crypto/chacha20poly1305. (In other words, we have *both* AVX2 and pre-AVX2 assembly for each of those.) I don't think they are worth their maintenance cost at this point. Performance-sensitive workloads are almost certainly running on post-2015 processors.
I think we should drop those assembly implementations and replace them with the generic Go ones. To be clear, we'll still *support* pre-AVX2 machines, they will just be less optimized.
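The selection logic the proposal would simplify can be sketched abstractly (a schematic, not Go's actual crypto dispatch code): each package picks an implementation from CPU feature flags at startup, and dropping the pre-AVX2 assembly collapses the middle case into the generic path.

```python
def pick_impl(has_avx2, has_pre_avx2_asm=True):
    """Schematic of per-package implementation selection."""
    if has_avx2:
        return "avx2-asm"
    if has_pre_avx2_asm:
        return "pre-avx2-asm"   # the paths the proposal removes
    return "generic-go"

print(pick_impl(has_avx2=False))                          # -> pre-avx2-asm today
print(pick_impl(has_avx2=False, has_pre_avx2_asm=False))  # -> generic-go after the change
```

Pre-AVX2 machines keep working either way; only the middle branch's performance changes.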
/cc @golang/security @cpu | NeedsDecision | low | Major |
2,542,358,534 | pytorch | Any updates on AsyncCollectiveTensor support for all-gather along non-zero dims? | https://github.com/pytorch/pytorch/blob/e9bfbf78d5d89df1ec59cb82d7f78b85f9014a98/torch/distributed/_functional_collectives.py#L208
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed | low | Minor |
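One common workaround pattern (sketched here with plain nested lists, not the actual `torch.distributed._functional_collectives` API) is to gather along dim 0 and then stitch the per-rank shards back together along the target dim:

```python
def all_gather_dim(shards, dim):
    """shards: per-rank 2-D shards as nested lists. Gathering along
    dim 0 is plain concatenation; along dim 1 each output row is
    stitched from the corresponding row of every shard."""
    if dim == 0:
        return [row for shard in shards for row in shard]
    # dim == 1: concatenate row-wise across ranks
    return [sum((shard[i] for shard in shards), []) for i in range(len(shards[0]))]

a = [[1, 2]]          # rank 0 shard
b = [[3, 4]]          # rank 1 shard
print(all_gather_dim([a, b], dim=0))  # -> [[1, 2], [3, 4]]
print(all_gather_dim([a, b], dim=1))  # -> [[1, 2, 3, 4]]
```

The question in the issue is whether `AsyncCollectiveTensor` can do the non-zero-dim case natively instead of requiring this reshuffling on the caller's side.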
2,542,378,792 | react | Bug: useEffect and Event Handler Timing Regressions in React 18 Legacy Mode | ## Summary
A change in useEffect and event handler timing causes regressions when upgrading to React 18 in legacy mode (React 18 in concurrent mode doesn't have the regression).
**React version**: 18.3.1 (affects versions since 18.0.0)
## Steps to Reproduce
- Open the [sandbox](https://codesandbox.io/p/sandbox/minimal-report-react-18-legacy-forked-27c5qj).
- Type fast in the input field. Notice that the input does not work properly, and letters are being skipped.
This example uses React 18 in legacy mode. The pattern involves a controlled input directly updating its value through the DOM, which seems to be breaking in React 18 when using legacy mode.
```
useEffect(() => {
if (inputRef.current) {
inputRef.current.value = value;
}
}, [value]);
```
## Current Behavior
- React 17: Works correctly (baseline).
- React 18 (legacy mode): Inputs break; letters are skipped.
- React 18 (concurrent mode): Works correctly (same as React 17).
## Expected Behavior
No breaking changes should occur that are specific to React 18 legacy mode. The behavior should be consistent between legacy mode and concurrent mode.
----
### Investigation
I suspect the issue is related to the following change from the [React 18 upgrade guide](https://react.dev/blog/2022/03/08/react-18-upgrade-guide):
> Other Breaking Changes: consistent useEffect timing: React now always synchronously flushes effect functions if the update was triggered during a discrete user input event such as a click or a keydown event. Previously, the behavior wasn’t always predictable or consistent.
There are no detailed examples I could find to fully understand this change, so I’m not entirely sure if this is the root cause. However, this broken example might indicate unintended behavior in legacy mode.
### Context
We are in the process of upgrading a large React project with many independent `ReactDOM.render` calls to React 18. Our initial plan was to upgrade to React 18 and allow teams to transition to the new renderer independently. However, the upgrade has resulted in several end-to-end test failures, mostly due to this breaking change in the pattern shown in the example.
Two real-world components using this pattern that are affected:
- [React-Monaco-Editor Integration](https://github.com/react-monaco-editor/react-monaco-editor/blob/4a7d7657a6359b648025a8bc30cd7d81e496ecef/src/editor.tsx#L98-L121)
- [Search Box Component in Elastic UI](https://github.com/elastic/eui/blob/b736b904b4c84742bfd9658f588911bceb248f2e/packages/eui/src/components/search_bar/search_box.tsx#L42-L47)
### Workaround
I temporarily resolved the issue by replacing `useEffect` with `useLayoutEffect`. This fixes the problem, but I am unsure if this is the best solution without requiring a significant refactor. I’m also uncertain if this workaround should be applied only in legacy mode while retaining the original useEffect for concurrent mode.
There is also a concern about other possible issues that didn't show up in our test that could have been caused by this change.
### Ask for Assistance
- Could the team investigate if there is an underlying bug in React 18 legacy mode that needs to be addressed?
- If not, could you please provide more details on this change and suggest guidance on fixing similar patterns or what to watch out for?
Any assistance or guidance on this issue would be greatly appreciated as it is impacting our upgrade path for a large project.
| Status: Unconfirmed | low | Critical |
2,542,391,462 | ui | [bug]: ThemeProvider component does not listen to live color scheme preference changes | ### Describe the bug
The [ThemeProvider component](https://ui.shadcn.com/docs/dark-mode/vite) in the docs does not listen to changes to the user's preference. If the user changes their color scheme preference in their browser/system, they have to refresh the app to get the updated theme.
### Affected component/components
Docs/Dark mode/ThemeProvider
### How to reproduce
1. Go to https://ui.shadcn.com/docs/dark-mode/vite.
2. The code snippet can be found under `Dark mode > 1. Create a theme provider`.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
N/A
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,542,439,763 | flutter | iPhone X crashes using Impeller, `webview_flutter` and `ImageFilter.blur` | ### Steps to reproduce
1. Check out the [example project](https://github.com/karvulf/impeller_webview_bug)
2. Install the app on `iPhone X` (I reproduced the bug only with that device)
3. Tap on the `TextButton`
4. Opens WebView
5. Navigate back
6. The whole app freezes and logs the following error:
```shell
Execution of the command buffer was aborted due to an error during execution. Ignored (for causing prior/excessive GPU errors) (00000004:kIOGPUCommandBufferCallbackErrorSubmissionsIgnored)
```
### Expected results
I would expect that navigating to the webview or back shouldn't cause any issues.
### Actual results
Instead the app crashes and nothing works anymore after leaving the WebView.
If I disable Impeller or remove the `BackdropFilter`, then this issue doesn't happen.
I get the logs:
```
Execution of the command buffer was aborted due to an error during execution. Ignored (for causing prior/excessive GPU errors) (00000004:kIOGPUCommandBufferCallbackErrorSubmissionsIgnored)
```
### Code sample
<details open><summary>Code sample</summary>
I uploaded the project **[here](https://github.com/karvulf/impeller_webview_bug)**.
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/a1cd89af-67f3-4572-a499-30016c70d1b3
</details>
### Logs
<details open><summary>Logs</summary>
```console
Launching lib/main.dart on iPhone X von André in debug mode...
Automatically signing iOS for device deployment using specified development team in Xcode project:
Running pod install...
Running Xcode build...
Xcode build done. 19,5s
Installing and launching...
Debug service listening on ws://127.0.0.1:50026/Q691wqquTH4=/ws
Syncing files to device iPhone X von André...
Execution of the command buffer was aborted due to an error during execution. Caused GPU Address Fault Error (0000000b:kIOGPUCommandBufferCallbackErrorPageFault)
Execution of the command buffer was aborted due to an error during execution. Caused GPU Hang Error (00000003:kIOGPUCommandBufferCallbackErrorHang)
Execution of the command buffer was aborted due to an error during execution. Ignored (for causing prior/excessive GPU errors) (00000004:kIOGPUCommandBufferCallbackErrorSubmissionsIgnored)
Execution of the command buffer was aborted due to an error during execution. Ignored (for causing prior/excessive GPU errors) (00000004:kIOGPUCommandBufferCallbackErrorSubmissionsIgnored)
Execution of the command buffer was aborted due to an error during execution. Ignored (for causing prior/excessive GPU errors) (00000004:kIOGPUCommandBufferCallbackErrorSubmissionsIgnored)
Execution of the command buffer was aborted due to an error during execution. Ignored (for causing prior/excessive GPU errors) (00000004:kIOGPUCommandBufferCallbackErrorSubmissionsIgnored)
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.3, on macOS 14.6.1 23G93 darwin-arm64, locale de-DE)
• Flutter version 3.24.3 on channel stable at /Users/andre/fvm/versions/stable
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (vor 12 Tagen), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/andre/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] IntelliJ IDEA Ultimate Edition (version 2024.2.1)
• IntelliJ at /Applications/IntelliJ IDEA.app
• Flutter plugin version 81.1.3
• Dart plugin version 242.21829.3
[✓] VS Code (version 1.93.0)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension can be installed from:
🔨 https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[✓] Connected device (6 available)
• iPhone X von André (mobile) • 7333c1ebe0f2e7026cd14f9fd9a556e1540cf63f • ios • iOS 16.7.10 20H350
• iPhone von André (mobile) • 00008120-000A0DE82E32201E • ios • iOS 18.0 22A3354
• iPhone 15 Pro Max (mobile) • D02072F5-0351-4361-8A99-26C774099F0E • ios • com.apple.CoreSimulator.SimRuntime.iOS-17-5 (simulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.6.1 23G93 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.6.1 23G93 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 129.0.6668.58
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| c: crash,e: device-specific,platform-ios,engine,a: platform-views,P1,e: impeller,team-engine,triaged-engine,slimpeller,e: impeller-naughty-driver | medium | Critical |
2,542,459,214 | rust | Failed to normalize `sp_std::rc::Rc<sp_std::prelude::Box...>` maybe try to call `try_normalize_erasing_regions` instead | [rustc-ice-2024-09-23T12_11_53-34162.txt](https://github.com/user-attachments/files/17097370/rustc-ice-2024-09-23T12_11_53-34162.txt)
### Code
https://github.com/paritytech/polkadot-sdk/blob/b9eb68bcb5ab93e58bcba4425975ad00374da2bc/substrate/frame/system/src/lib.rs#L448-L636
### Meta
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: aarch64-apple-darwin
release: 1.81.0
LLVM version: 18.1.7
```
### Error output
```
compiler/rustc_middle/src/ty/normalize_erasing_regions.rs:168:90: Failed to normalize sp_std::rc::Rc<sp_std::prelude::Box<dyn [Binder { value: Trait(core::ops::Fn<(&<Runtime as frame_system::Config>::RuntimeCall,)>), bound_vars: [Region(BrAnon)] }, Binder { value: Projection(Output = bool), bound_vars: [Region(BrAnon)] }] + '{erased}, sp_std::alloc::alloc::Global>, sp_std::alloc::alloc::Global>, maybe try to call `try_normalize_erasing_regions` instead
```
<details><summary><strong>Backtrace</strong></summary>
<p>
```
0: 0x1052ab73c - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h243268f17d714c7f
1: 0x1052ee688 - core::fmt::write::hb3cfb8a30e72d7ff
2: 0x1052a1720 - std::io::Write::write_fmt::hfb2314975de9ecf1
3: 0x1052adc4c - std::panicking::default_hook::{{closure}}::h14c7718ccf39d316
4: 0x1052ad870 - std::panicking::default_hook::hc62e60da3be2f352
5: 0x10eea45b8 - <alloc[47bc6d386d7ae45f]::boxed::Box<rustc_driver_impl[54c40c94c6cfc0b2]::install_ice_hook::{closure#0}> as core[f827f14b5e761a5d]::ops::function::Fn<(&dyn for<'a, 'b> core[f827f14b5e761a5d]::ops::function::Fn<(&'a std[4f7d7c3ef984657a]::panic::PanicHookInfo<'b>,), Output = ()> + core[f827f14b5e761a5d]::marker::Sync + core[f827f14b5e761a5d]::marker::Send, &std[4f7d7c3ef984657a]::panic::PanicHookInfo)>>::call
6: 0x1052ae868 - std::panicking::rust_panic_with_hook::h09e8a656f11e82b2
7: 0x10ef3327c - std[4f7d7c3ef984657a]::panicking::begin_panic::<rustc_errors[886d83f994b4d71c]::ExplicitBug>::{closure#0}
8: 0x10ef302b0 - std[4f7d7c3ef984657a]::sys::backtrace::__rust_end_short_backtrace::<std[4f7d7c3ef984657a]::panicking::begin_panic<rustc_errors[886d83f994b4d71c]::ExplicitBug>::{closure#0}, !>
9: 0x113212960 - std[4f7d7c3ef984657a]::panicking::begin_panic::<rustc_errors[886d83f994b4d71c]::ExplicitBug>
10: 0x10ef4651c - <rustc_errors[886d83f994b4d71c]::diagnostic::BugAbort as rustc_errors[886d83f994b4d71c]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
11: 0x10fb4a26c - rustc_middle[5a798f9924bfd2e0]::util::bug::opt_span_bug_fmt::<rustc_span[ab16d476329f5d04]::span_encoding::Span>::{closure#0}
12: 0x10fb48ef8 - rustc_middle[5a798f9924bfd2e0]::ty::context::tls::with_opt::<rustc_middle[5a798f9924bfd2e0]::util::bug::opt_span_bug_fmt<rustc_span[ab16d476329f5d04]::span_encoding::Span>::{closure#0}, !>::{closure#0}
13: 0x10fb48ec4 - rustc_middle[5a798f9924bfd2e0]::ty::context::tls::with_context_opt::<rustc_middle[5a798f9924bfd2e0]::ty::context::tls::with_opt<rustc_middle[5a798f9924bfd2e0]::util::bug::opt_span_bug_fmt<rustc_span[ab16d476329f5d04]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
14: 0x1132b07e8 - rustc_middle[5a798f9924bfd2e0]::util::bug::bug_fmt
15: 0x10fe527a0 - <rustc_middle[5a798f9924bfd2e0]::ty::context::TyCtxt>::normalize_erasing_regions::<rustc_middle[5a798f9924bfd2e0]::ty::Ty>
16: 0x10fecb028 - <core[f827f14b5e761a5d]::iter::adapters::map::Map<core[f827f14b5e761a5d]::iter::adapters::enumerate::Enumerate<core[f827f14b5e761a5d]::slice::iter::Iter<rustc_middle[5a798f9924bfd2e0]::ty::FieldDef>>, <rustc_mir_dataflow[dc82ff89d62403a5]::elaborate_drops::DropCtxt<rustc_mir_transform[ed8a8c9edc8f1ca0]::elaborate_drops::Elaborator>>::move_paths_for_fields::{closure#0}> as core[f827f14b5e761a5d]::iter::traits::iterator::Iterator>::fold::<(), core[f827f14b5e761a5d]::iter::traits::iterator::Iterator::for_each::call<(rustc_middle[5a798f9924bfd2e0]::mir::syntax::Place, core[f827f14b5e761a5d]::option::Option<rustc_mir_dataflow[dc82ff89d62403a5]::move_paths::MovePathIndex>), <alloc[47bc6d386d7ae45f]::vec::Vec<(rustc_middle[5a798f9924bfd2e0]::mir::syntax::Place, core[f827f14b5e761a5d]::option::Option<rustc_mir_dataflow[dc82ff89d62403a5]::move_paths::MovePathIndex>)>>::extend_trusted<core[f827f14b5e761a5d]::iter::adapters::map::Map<core[f827f14b5e761a5d]::iter::adapters::enumerate::Enumerate<core[f827f14b5e761a5d]::slice::iter::Iter<rustc_middle[5a798f9924bfd2e0]::ty::FieldDef>>, <rustc_mir_dataflow[dc82ff89d62403a5]::elaborate_drops::DropCtxt<rustc_mir_transform[ed8a8c9edc8f1ca0]::elaborate_drops::Elaborator>>::move_paths_for_fields::{closure#0}>>::{closure#0}>::{closure#0}>
17: 0x10fd8899c - <alloc[47bc6d386d7ae45f]::vec::Vec<(rustc_middle[5a798f9924bfd2e0]::mir::syntax::Place, core[f827f14b5e761a5d]::option::Option<rustc_mir_dataflow[dc82ff89d62403a5]::move_paths::MovePathIndex>)> as alloc[47bc6d386d7ae45f]::vec::spec_from_iter::SpecFromIter<(rustc_middle[5a798f9924bfd2e0]::mir::syntax::Place, core[f827f14b5e761a5d]::option::Option<rustc_mir_dataflow[dc82ff89d62403a5]::move_paths::MovePathIndex>), core[f827f14b5e761a5d]::iter::adapters::map::Map<core[f827f14b5e761a5d]::iter::adapters::enumerate::Enumerate<core[f827f14b5e761a5d]::slice::iter::Iter<rustc_middle[5a798f9924bfd2e0]::ty::FieldDef>>, <rustc_mir_dataflow[dc82ff89d62403a5]::elaborate_drops::DropCtxt<rustc_mir_transform[ed8a8c9edc8f1ca0]::elaborate_drops::Elaborator>>::move_paths_for_fields::{closure#0}>>>::from_iter
18: 0x10fde9d94 - <rustc_mir_dataflow[dc82ff89d62403a5]::elaborate_drops::DropCtxt<rustc_mir_transform[ed8a8c9edc8f1ca0]::elaborate_drops::Elaborator>>::open_drop_for_adt_contents
19: 0x10fde8eb8 - <rustc_mir_dataflow[dc82ff89d62403a5]::elaborate_drops::DropCtxt<rustc_mir_transform[ed8a8c9edc8f1ca0]::elaborate_drops::Elaborator>>::elaborate_drop
20: 0x10fdf5fc8 - <rustc_mir_transform[ed8a8c9edc8f1ca0]::elaborate_drops::ElaborateDrops as rustc_middle[5a798f9924bfd2e0]::mir::MirPass>::run_pass
21: 0x10fd638f4 - rustc_mir_transform[ed8a8c9edc8f1ca0]::pass_manager::run_passes_inner
22: 0x10fe6bda0 - rustc_mir_transform[ed8a8c9edc8f1ca0]::run_analysis_to_runtime_passes
23: 0x10fe6ba5c - rustc_mir_transform[ed8a8c9edc8f1ca0]::mir_drops_elaborated_and_const_checked
24: 0x110312668 - rustc_query_impl[5e7782f17777a7c9]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[5e7782f17777a7c9]::query_impl::mir_drops_elaborated_and_const_checked::dynamic_query::{closure#2}::{closure#0}, rustc_middle[5a798f9924bfd2e0]::query::erase::Erased<[u8; 8usize]>>
25: 0x11035fac0 - <rustc_query_impl[5e7782f17777a7c9]::query_impl::mir_drops_elaborated_and_const_checked::dynamic_query::{closure#2} as core[f827f14b5e761a5d]::ops::function::FnOnce<(rustc_middle[5a798f9924bfd2e0]::ty::context::TyCtxt, rustc_span[ab16d476329f5d04]::def_id::LocalDefId)>>::call_once
26: 0x1102c1304 - rustc_query_system[5f1672c0485b57da]::query::plumbing::try_execute_query::<rustc_query_impl[5e7782f17777a7c9]::DynamicConfig<rustc_query_system[5f1672c0485b57da]::query::caches::VecCache<rustc_span[ab16d476329f5d04]::def_id::LocalDefId, rustc_middle[5a798f9924bfd2e0]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[5e7782f17777a7c9]::plumbing::QueryCtxt, false>
27: 0x11038e3e4 - rustc_query_impl[5e7782f17777a7c9]::query_impl::mir_drops_elaborated_and_const_checked::get_query_non_incr::__rust_end_short_backtrace
28: 0x10f73b264 - rustc_interface[1340bb505392beac]::passes::analysis
29: 0x110312df8 - rustc_query_impl[5e7782f17777a7c9]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[5e7782f17777a7c9]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[5a798f9924bfd2e0]::query::erase::Erased<[u8; 1usize]>>
30: 0x1103618ac - <rustc_query_impl[5e7782f17777a7c9]::query_impl::analysis::dynamic_query::{closure#2} as core[f827f14b5e761a5d]::ops::function::FnOnce<(rustc_middle[5a798f9924bfd2e0]::ty::context::TyCtxt, ())>>::call_once
31: 0x110278348 - rustc_query_system[5f1672c0485b57da]::query::plumbing::try_execute_query::<rustc_query_impl[5e7782f17777a7c9]::DynamicConfig<rustc_query_system[5f1672c0485b57da]::query::caches::SingleCache<rustc_middle[5a798f9924bfd2e0]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[5e7782f17777a7c9]::plumbing::QueryCtxt, false>
32: 0x11038a9cc - rustc_query_impl[5e7782f17777a7c9]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
33: 0x10ee8f0b4 - <rustc_interface[1340bb505392beac]::queries::QueryResult<&rustc_middle[5a798f9924bfd2e0]::ty::context::GlobalCtxt>>::enter::<core[f827f14b5e761a5d]::result::Result<(), rustc_span[ab16d476329f5d04]::ErrorGuaranteed>, rustc_driver_impl[54c40c94c6cfc0b2]::run_compiler::{closure#0}::{closure#1}::{closure#5}>
34: 0x10eea61a0 - <rustc_interface[1340bb505392beac]::interface::Compiler>::enter::<rustc_driver_impl[54c40c94c6cfc0b2]::run_compiler::{closure#0}::{closure#1}, core[f827f14b5e761a5d]::result::Result<core[f827f14b5e761a5d]::option::Option<rustc_interface[1340bb505392beac]::queries::Linker>, rustc_span[ab16d476329f5d04]::ErrorGuaranteed>>
35: 0x10ee9a98c - <scoped_tls[df49f867320abf2e]::ScopedKey<rustc_span[ab16d476329f5d04]::SessionGlobals>>::set::<rustc_interface[1340bb505392beac]::util::run_in_thread_with_globals<rustc_interface[1340bb505392beac]::interface::run_compiler<core[f827f14b5e761a5d]::result::Result<(), rustc_span[ab16d476329f5d04]::ErrorGuaranteed>, rustc_driver_impl[54c40c94c6cfc0b2]::run_compiler::{closure#0}>::{closure#1}, core[f827f14b5e761a5d]::result::Result<(), rustc_span[ab16d476329f5d04]::ErrorGuaranteed>>::{closure#0}::{closure#0}::{closure#0}, core[f827f14b5e761a5d]::result::Result<(), rustc_span[ab16d476329f5d04]::ErrorGuaranteed>>
36: 0x10eea5b34 - rustc_span[ab16d476329f5d04]::create_session_globals_then::<core[f827f14b5e761a5d]::result::Result<(), rustc_span[ab16d476329f5d04]::ErrorGuaranteed>, rustc_interface[1340bb505392beac]::util::run_in_thread_with_globals<rustc_interface[1340bb505392beac]::interface::run_compiler<core[f827f14b5e761a5d]::result::Result<(), rustc_span[ab16d476329f5d04]::ErrorGuaranteed>, rustc_driver_impl[54c40c94c6cfc0b2]::run_compiler::{closure#0}>::{closure#1}, core[f827f14b5e761a5d]::result::Result<(), rustc_span[ab16d476329f5d04]::ErrorGuaranteed>>::{closure#0}::{closure#0}::{closure#0}>
37: 0x10eec38fc - std[4f7d7c3ef984657a]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[1340bb505392beac]::util::run_in_thread_with_globals<rustc_interface[1340bb505392beac]::interface::run_compiler<core[f827f14b5e761a5d]::result::Result<(), rustc_span[ab16d476329f5d04]::ErrorGuaranteed>, rustc_driver_impl[54c40c94c6cfc0b2]::run_compiler::{closure#0}>::{closure#1}, core[f827f14b5e761a5d]::result::Result<(), rustc_span[ab16d476329f5d04]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[f827f14b5e761a5d]::result::Result<(), rustc_span[ab16d476329f5d04]::ErrorGuaranteed>>
38: 0x10eea37dc - <<std[4f7d7c3ef984657a]::thread::Builder>::spawn_unchecked_<rustc_interface[1340bb505392beac]::util::run_in_thread_with_globals<rustc_interface[1340bb505392beac]::interface::run_compiler<core[f827f14b5e761a5d]::result::Result<(), rustc_span[ab16d476329f5d04]::ErrorGuaranteed>, rustc_driver_impl[54c40c94c6cfc0b2]::run_compiler::{closure#0}>::{closure#1}, core[f827f14b5e761a5d]::result::Result<(), rustc_span[ab16d476329f5d04]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[f827f14b5e761a5d]::result::Result<(), rustc_span[ab16d476329f5d04]::ErrorGuaranteed>>::{closure#1} as core[f827f14b5e761a5d]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
39: 0x1052b6fa4 - std::sys::pal::unix::thread::Thread::new::thread_start::h1bd1b9c95010bf71
40: 0x18c1672e4 - __pthread_deallocate
```
</p>
</details>
| I-ICE,T-compiler,C-bug,E-needs-bisection | low | Critical |
2,542,506,538 | vscode | Hide the update button from extension view when the extension is disabled globally | I was cleaning up my extensions just now and I have this action to update an extension that's disabled:

Since it's disabled, shouldn't we hide this? I only enable this extension when I'm using the application in question and would expect to update it only when I re-enable it. Otherwise I would click update, and then maybe need to click it again to "dismiss" the update button every week.
I see disabling as a slightly more convenient version of uninstalling if I plan on using it again, such that I don't need to go find it again via search. We could still keep the update in the extension details as the user is drilling in:
 | feature-request,extensions | low | Minor |
2,542,564,380 | godot | Atlas texture region not resetting back to initial state when reload_current_scene() | ### Tested versions
- Reproducible in v4.3.stable.mono.official [77dcf97d8]
- Only started using Godot so don't know if this is a regression or not
### System information
Godot v4.3.stable.mono - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1070 (NVIDIA; 32.0.15.6109) - AMD Ryzen 7 7800X3D 8-Core Processor (16 Threads)
### Issue description
When using an atlas texture for a TextureRect, it is possible to set a "Region" for the texture to only display a portion of a spritesheet. This region can be set in the editor and then changed at runtime. When calling `reload_current_scene()` (or switching to another packed scene and then back again), the entire scene reloads except for the atlas texture. The atlas texture will stick on the last region it was set to during runtime.
### Steps to reproduce
1. Create a TextureRect, choose a new AtlasTexture as its texture, and set a region for it
2. Advance the AtlasTexture to a different region at runtime
3. Reload the scene
Expected
The atlasTexture returns to its original region as defined in the editor
Actual
The atlasTexture remains in its last runtime state until the game is fully restarted
How to use the reproduction project:
- Observe stopwatch at 0 seconds.

- Press "Advance texture atlas frame" button
- The stopwatch icon will go forward a few seconds and a new texture will become visible

- Press "Reload scene" button
- The stopwatch icon won't return to 0 seconds but the other texture will become invisible again (demonstrating that the scene did reload)

### Minimal reproduction project (MRP)
GitHub repro: https://github.com/BurkusCat/reloadsceneatlastexture
Zip repro: [reloadsceneatlastexture.zip](https://github.com/user-attachments/files/17097743/reloadsceneatlastexture.zip)
| discussion,topic:core,confirmed | low | Major |
2,542,590,463 | ollama | downloadChunk does not pass the Authorization header to the registry | ### What is the issue?
In #5994, `regOpts` was removed from the `blobDownload.downloadChunk` method as unnecessary. While it's true that the ollama library only offers public blobs, without `regOpts` no `GET /v2/<image>/blobs/...` request that carries a `Range` header can also carry an `Authorization` header.
This removal breaks the previously mirrored behavior with [blobUpload.uploadPart](https://github.com/ollama/ollama/blob/main/server/upload.go#L152), where the `regOpts` and the corresponding `Authorization` header are still passed on each invocation.
I tried adding the `Authorization: Bearer` token to the request, but pulls from `registry.ollama.ai` then break, most likely because the header is also being passed to the cloudflarestorage.com URLs.
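For illustration only (ollama's server code is Go, but a Python sketch shows the shape of the request), a ranged blob download that stays authenticated simply needs both headers on the same request; the URL and token below are placeholders:

```python
import urllib.request

# Hypothetical registry blob URL and bearer token, for illustration only.
url = "https://registry.example.com/v2/library/model/blobs/sha256:abc123"
token = "example-token"

# A chunked download requests one byte range per GET. Without the
# Authorization header, a registry that requires auth rejects the
# ranged request even though the Range header itself is fine.
req = urllib.request.Request(url, headers={
    "Range": "bytes=0-1048575",
    "Authorization": f"Bearer {token}",
})
```

The tricky part, as described above, is scoping: the header belongs on the `/v2/...` blob endpoints but must not leak onto presigned cloudflarestorage.com redirect URLs.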
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.11 | bug | low | Minor |
2,542,606,926 | ollama | Unreliable free memory resulting in models not running | ### What is the issue?
From what I understand, new versions of ollama compare a model's expected memory requirements with the amount of free memory visible to ollama, and print an error message if the model's requirements are larger. This makes a lot of sense.
However, free memory on Linux is (from what I understand) not a very reliable estimate. For the same model on the same machine, I have had cases where ollama ran successfully and cases where it reported insufficient memory.
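One reason the estimate can move around (an aside; which field ollama actually reads is not confirmed here): Linux reports several different "free" numbers. `MemFree` counts only completely unused pages, while `MemAvailable` also estimates reclaimable page cache, so the two can differ by gigabytes between runs. A small sketch parsing a sample `/proc/meminfo` snapshot:

```python
# Sample /proc/meminfo snapshot (values in kB), hardcoded for illustration.
sample = """\
MemTotal:       16384000 kB
MemFree:          512000 kB
MemAvailable:    9216000 kB
"""

def parse_meminfo(text):
    # Each line is "Key:   <value> kB"; keep just the integer kB value.
    info = {}
    for line in text.splitlines():
        key, value = line.split(":")
        info[key.strip()] = int(value.split()[0])
    return info

info = parse_meminfo(sample)
free_gb = info["MemFree"] / 1024 / 1024
available_gb = info["MemAvailable"] / 1024 / 1024
```

In this sample, "free" is about 0.5 GB but "available" is about 9 GB; which number a checker uses changes whether a model is judged to fit.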
Is it possible to disable this feature entirely?
### OS
Linux
### GPU
_No response_
### CPU
_No response_
### Ollama version
latest mainline | feature request,linux | low | Critical |
2,542,640,060 | TypeScript | Inconsistent typechecking with require() in JS and TS | ### 🔎 Search Terms
require import js ts module esm esmodule cjs commonjs
### 🕗 Version & Regression Information
- This happens in the nightly version of TS
### ⏯ Playground Link
Multiple files not supported in playground, see bug workbench
### 💻 Code
```ts repro
// @types: ["node"]
// @allowJs
// @checkJs
// @filename: module-cjs-js.js
const Value = "module-cjs-js";
module.exports = { Value };
// @filename: module-cjs-ts.ts
const Value = "module-cjs-ts";
module.exports = { Value };
// @filename: module-esm-js.js
const Value = "module-esm-js";
export { Value };
// @filename: module-esm-ts.ts
const Value = "module-esm-ts";
export { Value };
// @filename: main-js.js
const ConstRequireCjsJs = require("./module-cjs-js");
const ConstRequireEsmJs = require("./module-esm-js");
const ConstRequireCjsTs = require("./module-cjs-ts");
const ConstRequireEsmTs = require("./module-esm-ts");
console.log(ConstRequireCjsJs.Value); // (alias) const Value: "module-cjs-js"
// ^?
console.log(ConstRequireEsmJs.Value); // (alias) const Value: "module-esm-js"
// ^?
console.log(ConstRequireCjsTs.Value); // Error: Property 'Value' does not exist on type 'typeof import("./module-cjs-ts")'
// ^?
console.log(ConstRequireEsmTs.Value); // (alias) const Value: "module-esm-ts"
// ^?
import * as ImportFromCjsJs from "./module-cjs-js";
import * as ImportFromEsmJs from "./module-esm-js";
import * as ImportFromCjsTs from "./module-cjs-ts";
import * as ImportFromEsmTs from "./module-esm-ts";
console.log(ImportFromCjsJs.Value); // (alias) const Value: "module-cjs-js"
// ^?
console.log(ImportFromEsmJs.Value); // (alias) const Value: "module-esm-js"
// ^?
console.log(ImportFromCjsTs.Value); // Error: Property 'Value' does not exist on type 'typeof import("./module-cjs-ts")'
// ^?
console.log(ImportFromEsmTs.Value); // (alias) const Value: "module-esm-ts"
// ^?
// @filename: main-ts.ts
const ConstRequireCjsJs = require("./module-cjs-js");
const ConstRequireEsmJs = require("./module-esm-js");
const ConstRequireCjsTs = require("./module-cjs-ts");
const ConstRequireEsmTs = require("./module-esm-ts");
console.log(ConstRequireCjsJs.Value); // any
// ^?
console.log(ConstRequireEsmJs.Value); // any
// ^?
console.log(ConstRequireCjsTs.Value); // any
// ^?
console.log(ConstRequireEsmTs.Value); // any
// ^?
import * as ImportFromCjsJs from "./module-cjs-js";
import * as ImportFromEsmJs from "./module-esm-js";
import * as ImportFromCjsTs from "./module-cjs-ts";
import * as ImportFromEsmTs from "./module-esm-ts";
console.log(ImportFromCjsJs.Value); // (alias) const Value: "module-cjs-js"
// ^?
console.log(ImportFromEsmJs.Value); // (alias) const Value: "module-esm-js"
// ^?
console.log(ImportFromCjsTs.Value); // Error: Property 'Value' does not exist on type 'typeof import("./module-cjs-ts")'
// ^?
console.log(ImportFromEsmTs.Value); // (alias) const Value: "module-esm-ts"
// ^?
import ImportRequireCjsJs = require("./module-cjs-js");
import ImportRequireEsmJs = require("./module-esm-js");
import ImportRequireCjsTs = require("./module-cjs-ts");
import ImportRequireEsmTs = require("./module-esm-ts");
console.log(ImportRequireCjsJs.Value); // (alias) const Value: "module-cjs-js"
// ^?
console.log(ImportRequireEsmJs.Value); // (alias) const Value: "module-esm-js"
// ^?
console.log(ImportRequireCjsTs.Value); // Error: Property 'Value' does not exist on type 'typeof import("./module-cjs-ts")'
// ^?
console.log(ImportRequireEsmTs.Value); // (alias) const Value: "module-esm-ts"
// ^?
```
[Workbench Repro](https://www.typescriptlang.org/dev/bug-workbench/?checkJs=true&allowJs=true&module=1&types=%5B%22node%22%5D#code/PTAEAEBcE8AcFMDOAuUBtARAOwPYBN4MBdAKBAgEMAbKnAdwClEyxwBjAC3jYGsmSWEAGYBLKvCwUAtvFRT8AV3EBaAFaIAdOpJscWRJFAA1agvigAvKAzqMAbhLy8S+BvgAPWDgBOkRJdAAb2NTcwBfB0FwUXFJGTlFFT8NPx09AxCqMwCMP3tHRNcPL19-K2CTLPDI8mixCWlZUCkKESxlZNTdfUMAWUKmAO94AEcFEWGACgwNYCcXNUQMAEoHboz+53EAFTLQYbGJ+GnZ+aSl1ZJNlyYNSrM7UHIKLGhBUA-PgD0AfivC3Z3UKPZ6vd6fD6-AS1GINeLNVrtdRaZjrPoDPYHcZTGZzQqLFZrdLorbwXZDUbY464s7wDoXBzXcS3e7wEFgSbUEQURDLUBozJmVA2JagERSEqGVngiFQplkzSs9mgACi3m8PlQAAUNQhfNBQAByVmG0B4HBIUC4QweEQZPSgGAII1O+A4IRiiU+SAnPGk+krQ0y74-IA)
### 🙁 Actual behavior
Type resolution is inconsistent when using `require()` from .js and .ts files:
* CommonJS modules with .ts extension have no properties regardless of import type (no error if ESModule.ts or CommonJS.js)
* Using `require()` in a .ts file is always unchecked (checked in a .js file or in .ts file with `import X =` syntax)
| MainExt | Type | Ext | RequireOrImport | Issue |
|---------|------|-----|-------------------------|--------|
| JS | CJS | JS | const X = require("Y") | |
| JS | ESM | JS | const X = require("Y") | |
| JS | CJS | TS | const X = require("Y") | Error |
| JS | ESM | TS | const X = require("Y") | |
| JS | CJS | JS | import * as X from "Y" | |
| JS | ESM | JS | import * as X from "Y" | |
| JS | CJS | TS | import * as X from "Y" | Error |
| JS | ESM | TS | import * as X from "Y" | |
| TS | CJS | JS | const X = require("Y") | Any |
| TS | ESM | JS | const X = require("Y") | Any |
| TS | CJS | TS | const X = require("Y") | Any |
| TS | ESM | TS | const X = require("Y") | Any |
| TS | CJS | JS | import * as X from "Y" | |
| TS | ESM | JS | import * as X from "Y" | |
| TS | CJS | TS | import * as X from "Y" | Error |
| TS | ESM | TS | import * as X from "Y" | |
| TS | CJS | JS | import X = require("Y") | |
| TS | ESM | JS | import X = require("Y") | |
| TS | CJS | TS | import X = require("Y") | Error |
| TS | ESM | TS | import X = require("Y") | |
### 🙂 Expected behavior
I expected `require()` to be typechecked in .ts files because it's typechecked in .js files.
I expected imports of CommonJS .ts files to work because CommonJS .js files work and are typechecked.
### Additional information about the issue
This problem happens when porting an existing Node.js codebase that uses CommonJS require() modules to TypeScript. It's not possible to port the code without also forcing it into ES Modules because:
* CommonJS .ts files don't work
* require() from .ts files is not typechecked | Needs More Info,Has Repro | low | Critical |
2,542,646,892 | langchain | I am getting pydantic Validation error when expecting tools in a response | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The Following code:
```python
from datetime import datetime
from typing import List

from langchain_core.messages import (
    SystemMessage,
    HumanMessage,
    AIMessage,
    ToolMessage,
    trim_messages,
)
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

# from openai import BaseModel
from pydantic.v1 import BaseModel, Field


## My messages in this format are accepted
def build_message_list(self) -> List:
    # Convert internal history to a list of SystemMessage, HumanMessage, AIMessage, ToolMessage
    messages = [
        SystemMessage(
            content=""" \n Do Not make Vague assumptions use
            appropriate tools provided whenever required to extract new or next Question."""
        )
    ]
    for entry in self.history:
        if entry["role"] == "human":
            messages.append(HumanMessage(content=entry["content"]))
        elif entry["role"] == "assistant":
            messages.append(AIMessage(content=entry["content"]))
        elif entry["role"] == "tool":
            messages.append(ToolMessage(content=entry["content"]))
    return messages


# this is the function where i should expect an AIMessage.
def invoke_tool_or_model(self, messages) -> dict:
    """Handles whether to invoke a tool or continue with the LLM."""
    last_message = messages[-1]
    if hasattr(last_message, "tool_calls") and last_message.tool_calls:
        return self.call_tools(last_message.tool_calls)
    else:
        prompt = self.format_trimmed_history(messages)
        response = self.llm.invoke(prompt)
        return response
```
### Error Message and Stack Trace (if applicable)
The JSON response triggers the error inside `tool_calls`, where the arguments are not in the format the validator accepts:
```
D:\Work\Company_Product_Brainstorming\Sample_projects\GPTInterviewer\Final\envinter\Lib\site-packages\pydantic\main.py:390: UserWarning: Pydantic serializer warnings:
Expected `str` but got `dict` with value `{'category': 'math'}` - serialized value may not be as expected
return self.__pydantic_serializer__.to_python(
2024-09-23 18:39:55.381 Uncaught app exception
Traceback (most recent call last):
File "D:\Work\Company_Product_Brainstorming\Sample_projects\GPTInterviewer\Final\envinter\Lib\site-packages\streamlit\runtime\scriptrunner\exec_code.py", line 88, in exec_func_with_error_handling
result = func()
^^^^^^
File "D:\Work\Company_Product_Brainstorming\Sample_projects\GPTInterviewer\Final\envinter\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 590, in code_to_exec
exec(code, module.__dict__)
File "D:\Work\Company_Product_Brainstorming\Sample_projects\GPTInterviewer\Final\ai_interviewer_streamlit\new_interviewer.py", line 256, in <module>
main()
File "D:\Work\Company_Product_Brainstorming\Sample_projects\GPTInterviewer\Final\ai_interviewer_streamlit\new_interviewer.py", line 249, in main
handle_chat(interviewer)
File "D:\Work\Company_Product_Brainstorming\Sample_projects\GPTInterviewer\Final\ai_interviewer_streamlit\new_interviewer.py", line 200, in handle_chat
response = interviewer.text_to_text(user_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Work\Company_Product_Brainstorming\Sample_projects\GPTInterviewer\Final\ai_interviewer_streamlit\new_interviewer.py", line 82, in text_to_text
response = self.invoke_tool_or_model(trimmed_messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Work\Company_Product_Brainstorming\Sample_projects\GPTInterviewer\Final\ai_interviewer_streamlit\new_interviewer.py", line 114, in invoke_tool_or_model
response = self.llm.invoke(prompt)
^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Work\Company_Product_Brainstorming\Sample_projects\GPTInterviewer\Final\envinter\Lib\site-packages\langchain_core\runnables\base.py", line 5343, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "D:\Work\Company_Product_Brainstorming\Sample_projects\GPTInterviewer\Final\envinter\Lib\site-packages\langchain_core\language_models\chat_models.py", line 284, in invoke
self.generate_prompt(
File "D:\Work\Company_Product_Brainstorming\Sample_projects\GPTInterviewer\Final\envinter\Lib\site-packages\langchain_core\language_models\chat_models.py", line 784, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Work\Company_Product_Brainstorming\Sample_projects\GPTInterviewer\Final\envinter\Lib\site-packages\langchain_core\language_models\chat_models.py", line 641, in generate
raise e
File "D:\Work\Company_Product_Brainstorming\Sample_projects\GPTInterviewer\Final\envinter\Lib\site-packages\langchain_core\language_models\chat_models.py", line 631, in generate
self._generate_with_cache(
File "D:\Work\Company_Product_Brainstorming\Sample_projects\GPTInterviewer\Final\envinter\Lib\site-packages\langchain_core\language_models\chat_models.py", line 853, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "D:\Work\Company_Product_Brainstorming\Sample_projects\GPTInterviewer\Final\envinter\Lib\site-packages\langchain_openai\chat_models\base.py", line 671, in _generate
return self._create_chat_result(response, generation_info)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Work\Company_Product_Brainstorming\Sample_projects\GPTInterviewer\Final\envinter\Lib\site-packages\langchain_openai\chat_models\base.py", line 708, in _create_chat_result
message = _convert_dict_to_message(res["message"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Work\Company_Product_Brainstorming\Sample_projects\GPTInterviewer\Final\envinter\Lib\site-packages\langchain_openai\chat_models\base.py", line 127, in _convert_dict_to_message
return AIMessage(
^^^^^^^^^^
File "D:\Work\Company_Product_Brainstorming\Sample_projects\GPTInterviewer\Final\envinter\Lib\site-packages\langchain_core\messages\ai.py", line 94, in __init__
super().__init__(content=content, **kwargs)
File "D:\Work\Company_Product_Brainstorming\Sample_projects\GPTInterviewer\Final\envinter\Lib\site-packages\langchain_core\messages\base.py", line 76, in __init__
super().__init__(content=content, **kwargs)
File "D:\Work\Company_Product_Brainstorming\Sample_projects\GPTInterviewer\Final\envinter\Lib\site-packages\langchain_core\load\serializable.py", line 110, in __init__
super().__init__(*args, **kwargs)
File "D:\Work\Company_Product_Brainstorming\Sample_projects\GPTInterviewer\Final\envinter\Lib\site-packages\pydantic\main.py", line 212, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for AIMessage
invalid_tool_calls.0.args
Input should be a valid string [type=string_type, input_value={'category': 'math'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.9/v/string_type
```
### Description
* Currently I'm trying to fetch tool arguments from the LangChain response, which I can then use to call the function and return the tool's answer to the LLM.
* I used akjindal53244/Llama-3.1-Storm-8B as my LLM model with koboldcpp as the backend server.
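The validation error at the bottom of the trace says the `args` of an invalid tool call must be a string, while the model returned the dict `{'category': 'math'}`. Independent of where the fix lands, the shape the validator accepts is a JSON-serialized string; a minimal stdlib sketch of that round trip:

```python
import json

# The validator rejects {'category': 'math'} because it expects `args`
# to arrive as a string. If tool-call payloads are constructed by hand
# (or repaired from a non-conforming backend), serialize dict args first:
raw_args = {"category": "math"}
args_str = json.dumps(raw_args)  # -> '{"category": "math"}'
assert isinstance(args_str, str)

# To actually call the tool, decode the string back into a dict:
decoded = json.loads(args_str)
```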
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.12.5 | packaged by conda-forge | (main, Aug 8 2024, 18:24:51) [MSC v.1940 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.5
> langchain: 0.2.16
> langchain_community: 0.2.16
> langsmith: 0.1.125
> langchain_openai: 0.2.0
> langchain_text_splitters: 0.2.2
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.5
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.46.0
> orjson: 3.10.7
> packaging: 23.2
> pydantic: 2.8.2
> PyYAML: 6.0.2
> requests: 2.32.3
> SQLAlchemy: 2.0.32
> tenacity: 8.5.0
> tiktoken: 0.7.0
> typing-extensions: 4.12.2
| 🤖:bug,investigate | low | Critical |
2,542,664,885 | deno | What about `deno cache --only-runtime`? | If I have a ./main.ts with:
```ts
import type { ChatCompletion, ChatCompletionCreateParamsNonStreaming, Message } from "npm:@types/openai";
```
It will `deno run` just fine, but when I attempt to generate a lockfile with `deno cache` the script will error:
```
$ deno cache ./main.ts
error: npm package '@types/openai' does not exist.
```
I ideally need a method to generate a lockfile for any script that can be executed successfully with `deno run`. Is there any way to do that?
Thanks! | suggestion | low | Critical |
2,542,673,503 | vscode | Clicking an OSC 8 hyperlink to a folder in the terminal will open the native file explorer instead of VS Code's explorer | Context: https://github.com/pnpm/pnpm/issues/8513
Relevant part of log:
```
�]8;;file:///Users/k/p/archive-webpage-browser-extension/node_modules/.pnpm_patches/prettier@3.3.3��[34m/Users/k/p/archive-webpage-browser-extension/node_modules/.pnpm_patches/prettier@3.3.3�[39m�]8;;�
```
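For reference, the `�` characters in the log above are the raw ESC and BEL bytes of an OSC 8 hyperlink. A minimal sketch of how such a link is assembled (the path is shortened for illustration; pnpm's real output also wraps the text in SGR color codes):

```python
# An OSC 8 hyperlink wraps the visible text between an opening sequence
# that carries the URI and an empty closing sequence. ESC (\x1b) and
# BEL (\x07) are the bytes that render as "�" in the captured log.
ESC, BEL = "\x1b", "\x07"

def osc8_link(uri, text):
    return f"{ESC}]8;;{uri}{BEL}{text}{ESC}]8;;{BEL}"

link = osc8_link(
    "file:///Users/k/project/node_modules/.pnpm_patches/prettier@3.3.3",
    "/Users/k/project/node_modules/.pnpm_patches/prettier@3.3.3",
)
```

The terminal's link handler decides what to do with the `file://` URI on click, which is where the "opens the native file explorer instead of VS Code's explorer" behavior comes in.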
Trace log:
```
2024-09-09 21:19:54.968 [trace] [RPC Request] PtyService#setTerminalLayoutInfo({"workspaceId":"e06f3c1d35667b267a8dfe016d2f8457","tabs":[{"isActive":true,"activePersistentProcessId":17,"terminals":[{"relativeSize":1,"terminal":17}]}]})
2024-09-09 21:19:54.969 [trace] [RPC Response] PtyService#setTerminalLayoutInfo undefined
2024-09-09 21:19:56.697 [trace] [RPC Request] PtyService#input(17, "\u001bOA")
2024-09-09 21:19:56.698 [trace] node-pty.IPty#write �OA
2024-09-09 21:19:56.698 [trace] [RPC Response] PtyService#input undefined
2024-09-09 21:19:56.699 [trace] node-pty.IPty#onData pnpm patch prettier
2024-09-09 21:19:56.706 [trace] [RPC Event] PtyService#_onProcessData.fire({"id":17,"event":"pnpm patch prettier"})
2024-09-09 21:19:56.707 [trace] PromptInputModel#_sync: |
2024-09-09 21:19:56.707 [trace] PromptInputModel#onDidChangeInput pnpm patch prettier|
2024-09-09 21:19:56.707 [trace] PromptInputModel#_sync: pnpm patch prettier|
2024-09-09 21:19:57.276 [trace] [RPC Request] PtyService#input(17, "\r")
2024-09-09 21:19:57.276 [trace] node-pty.IPty#write
2024-09-09 21:19:57.277 [trace] [RPC Response] PtyService#input undefined
2024-09-09 21:19:57.277 [trace] node-pty.IPty#onData �[?1l�>
2024-09-09 21:19:57.277 [trace] node-pty.IPty#onData �[?2004l
2024-09-09 21:19:57.280 [trace] node-pty.IPty#onData �]0;🚀 ~/p/archive-webpage-browser-extension pnpm patch prettier 🚀�
2024-09-09 21:19:57.282 [trace] [RPC Event] PtyService#_onProcessData.fire({"id":17,"event":"\u001b[?1l\u001b>\u001b[?2004l\r\r\n\u001b]0;🚀 ~/p/archive-webpage-browser-extension pnpm patch prettier 🚀\u0007"})
2024-09-09 21:19:57.282 [trace] PromptInputModel#_sync: pnpm patch prettier|
2024-09-09 21:19:57.283 [trace] PromptInputModel#_sync: pnpm patch prettier|
2024-09-09 21:19:57.283 [trace] node-pty.IPty#onData �]633;E;pnpm patch prettier;63e24406-e8af-4468-9b3c-16883b312790��]633;C�
2024-09-09 21:19:57.289 [trace] [RPC Event] PtyService#_onProcessData.fire({"id":17,"event":"\u001b]633;E;pnpm patch prettier;63e24406-e8af-4468-9b3c-16883b312790\u0007\u001b]633;C\u0007"})
2024-09-09 21:19:57.290 [trace] [RPC Event] PtyService#_onDidChangeProperty.fire({"id":17,"property":{"type":"title","value":"node"}})
2024-09-09 21:19:57.290 [trace] [RPC Event] PtyService#_onDidChangeProperty.fire({"id":17,"property":{"type":"shellType"}})
2024-09-09 21:19:57.290 [debug] CommandDetectionCapability#setCommandLine pnpm patch prettier true
2024-09-09 21:19:57.291 [debug] CommandDetectionCapability#handleCommandExecuted 0 18
2024-09-09 21:19:57.291 [trace] PromptInputModel#onDidFinishInput pnpm patch prettier
2024-09-09 21:19:57.291 [trace] PromptInputModel#onDidChangeInput pnpm patch prettier
2024-09-09 21:19:57.638 [trace] node-pty.IPty#onData Patch: You can now edit the package at:
�]8;;file:///Users/k/p/archive-webpage-browser-extension/node_modules/.pnpm_patches/prettier@3.3.3��[34m/Users/k/p/archive-webpage-browser-extension/node_modules/.pnpm_patches/prettier@3.3.3�[39m�]8;;�
To commit your changes, run:
�[32mpnpm patch-commit '/Users/k/p/archive-webpage-browser-extension/node_modules/.pnpm_patches/prettier@3.3.3'�[39m
2024-09-09 21:19:57.643 [trace] [RPC Event] PtyService#_onProcessData.fire({"id":17,"event":"Patch: You can now edit the package at:\r\n\r\n \u001b]8;;file:///Users/k/p/archive-webpage-browser-extension/node_modules/.pnpm_patches/prettier@3.3.3\u0007\u001b[34m/Users/k/p/archive-webpage-browser-extension/node_modules/.pnpm_patches/prettier@3.3.3\u001b[39m\u001b]8;;\u0007\r\n\r\nTo commit your changes, run:\r\n\r\n \u001b[32mpnpm patch-commit '/Users/k/p/archive-webpage-browser-extension/node_modules/.pnpm_patches/prettier@3.3.3'\u001b[39m\r\n\r\n"})
2024-09-09 21:19:57.646 [trace] node-pty.IPty#onData �[1m�[7m%�[27m�[1m�[0m
2024-09-09 21:19:57.647 [trace] node-pty.IPty#onData �]0;~/p/archive-webpage-browser-extension�
2024-09-09 21:19:57.647 [trace] node-pty.IPty#onData �]633;D;0�
2024-09-09 21:19:57.648 [trace] node-pty.IPty#onData �]633;P;Cwd=/Users/k/p/archive-webpage-browser-extension�
2024-09-09 21:19:57.652 [trace] [RPC Event] PtyService#_onProcessData.fire({"id":17,"event":"\u001b[1m\u001b[7m%\u001b[27m\u001b[1m\u001b[0m \r \r\u001b]0;~/p/archive-webpage-browser-extension\u0007\u001b]633;D;0\u0007\u001b]633;P;Cwd=/Users/k/p/archive-webpage-browser-extension\u0007"})
2024-09-09 21:19:57.654 [debug] CommandDetectionCapability#handleCommandFinished 0 undefined pnpm patch prettier [object Object]
2024-09-09 21:19:57.684 [trace] node-pty.IPty#onData
�[0m�[27m�[24m�[J�]633;A��[01;32m➜ �[36marchive-webpage-browser-extension�[00m �[01;34mgit:(�[31mmain�[34m)�[00m �]633;B��[K
2024-09-09 21:19:57.684 [trace] node-pty.IPty#onData �[?1h�=�[?2004h
2024-09-09 21:19:57.689 [trace] [RPC Event] PtyService#_onProcessData.fire({"id":17,"event":"\r\u001b[0m\u001b[27m\u001b[24m\u001b[J\u001b]633;A\u0007\u001b[01;32m➜ \u001b[36marchive-webpage-browser-extension\u001b[00m \u001b[01;34mgit:(\u001b[31mmain\u001b[34m)\u001b[00m \u001b]633;B\u0007\u001b[K\u001b[?1h\u001b=\u001b[?2004h"})
2024-09-09 21:19:57.691 [debug] CommandDetectionCapability#handlePromptStart 0 26
2024-09-09 21:19:57.691 [debug] CommandDetectionCapability#onCommandFinished [object Object]
2024-09-09 21:19:57.691 [trace] PromptInputModel#onDidStartInput |
2024-09-09 21:19:57.691 [trace] PromptInputModel#onDidChangeInput |
2024-09-09 21:19:57.691 [debug] CommandDetectionCapability#handleCommandStart 48 26
2024-09-09 21:19:57.691 [trace] PromptInputModel#_sync: |
2024-09-09 21:19:57.691 [trace] PromptInputModel#_sync: |
2024-09-09 21:19:57.697 [trace] [RPC Event] PtyService#_onDidChangeProperty.fire({"id":17,"property":{"type":"title","value":"zsh"}})
2024-09-09 21:19:57.697 [trace] [RPC Event] PtyService#_onDidChangeProperty.fire({"id":17,"property":{"type":"shellType","value":"zsh"}})
2024-09-09 21:19:58.201 [trace] [RPC Request] PtyService#updateTitle(17, "zsh", 1)
2024-09-09 21:19:58.201 [trace] [RPC Response] PtyService#updateTitle undefined
``` | bug,help wanted,terminal-links | low | Critical |
2,542,820,587 | react | [DevTools Bug]: No way to debug suspense events | ### Website or app
n/a
### Repro steps
The react dev-tools have not been updated with support for debugging suspense issues.
For examples:
- In the profiler, you can see that a suspense event happened and caused a re-render, but you cannot see which component actually triggered the suspense (i.e. called `use` or similar)
- There doesn't appear to be any kind of logging of suspense events that can be turned on (which would tell you which component suspended)
### How often does this bug happen?
Every time
### DevTools package (automated)
_No response_
### DevTools version (automated)
_No response_
### Error message (automated)
_No response_
### Error call stack (automated)
_No response_
### Error component stack (automated)
_No response_
### GitHub query string (automated)
_No response_ | Type: Bug,Status: Unconfirmed,Component: Developer Tools | medium | Critical |
2,542,861,925 | storybook | [Bug]: Storybook preview hooks can only be called inside decorators and story functions. | ### Describe the bug
When attempting to use a custom `render` function to create a Story for a controlled component, I am getting the following error.
```
Storybook preview hooks can only be called inside decorators and story functions.
```
This previously worked fine, so I am curious what changed.
My Story code is as follows. I am just trying to use a basic implementation of `useState`. The Story renders correctly, but as soon as I press the `Button`, this error is displayed.
From what I can tell, this is being caused by `@storybook/addon-themes`. You can see my setup in the reproduction.
```ts
// .storybook/preview.ts
import type { Preview, ReactRenderer } from '@storybook/react';
import { withThemeByDataAttribute } from '@storybook/addon-themes';
const preview: Preview = {
parameters: {
controls: {
matchers: {
color: /(background|color)$/i,
date: /Date$/i,
},
},
},
decorators: [
withThemeByDataAttribute<ReactRenderer>({
themes: {
light: 'light',
dark: 'dark',
auto: 'auto',
},
defaultTheme: 'light',
attributeName: 'data-color-mode',
}),
],
};
export default preview;
```
```tsx
import * as React from 'react';
import type { StoryObj } from '@storybook/react';
import { Button } from './Button'; // path assumed

// `meta` is the story file's default-exported meta (not shown here)
type Story = StoryObj<typeof meta>;
export const Controlled: Story = {
args: {
size: 'small',
label: 'Button',
},
render: function Render(args) {
const [pressed, setPressed] = React.useState(false);
return (
<>
<Button {...args} onClick={() => setPressed(!pressed)} />
<h1>{pressed.toString()}</h1>
</>
);
},
};
```
### Reproduction link
https://stackblitz.com/edit/github-rjewa6?file=.storybook%2Fpreview.ts
### Reproduction steps
1. Run the Storybook.
2. Navigate to the Button -> Controlled Story
3. Press the button
### System
```bash
Storybook Environment Info:
System:
OS: macOS 15.0
CPU: (14) arm64 Apple M3 Max
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.16.0 - ~/.nvm/versions/node/v20.16.0/bin/node
npm: 10.8.1 - ~/.nvm/versions/node/v20.16.0/bin/npm
pnpm: 9.6.0 - ~/Library/pnpm/.tools/pnpm/9.6.0/bin/pnpm <----- active
Browsers:
Chrome: 129.0.6668.58
Safari: 18.0
```
### Additional context
_No response_ | bug,sev:S2,addon: themes | medium | Critical |
2,542,925,712 | pytorch | SAC doesn't support nesting with different recompute plans | Using selective activation checkpoint (SAC) in a nested fashion came up because it might be useful when using float8 in conjunction with torchtitan:
(1) The torchtitan repo uses SAC to recompute all ops [except matmuls](https://github.com/pytorch/torchtitan/blob/main/torchtitan/parallelisms/parallelize_llama.py#L205)
(2) When torchtitan uses `Float8Linear` layers, however, this will cause any `torch.abs(torch.max())` calls that float8 uses during quantization to be recomputed in the backward.
One option is to tweak the outer SAC region linked above to also mark `abs/max` ops as always saved for backward (at least for max, if you are reducing to a single tensor then saving it for backward is cheap). But in general, it might be cleaner to express these two cases separately: user code that calls `aten.abs` (or `aten.max` in a situation where might not actually be better to save) may want to be treated differently from an inner `Float8Linear` layer that always knows its max() should be saved and not recomputed.
Another alternative would just be to make sure that the partitioner figures out that it should not recompute the `aten.max()` call from `Float8Linear`, since (a) it was tagged with `PREFER_RECOMPUTE` (giving the partitioner flexibility to ignore the user intent), and (b) it should hopefully be clear that saving it is better.
A third option (the reason for this issue) is to nest SAC: e.g., have the outer region express that matmuls must be saved for backward, and have a smaller region, local to `Float8Linear`, that just says "always recompute abs/max, leave all other ops alone". In particular, the outer SAC will mark `aten.max` with `PREFER_RECOMPUTE`, while the inner SAC will mark `aten.max` with `MUST_SAVE`, which should override the PREFER annotation.
I'm not sure that this works today though. Small repro:
```
import functools
import torch
from torch.utils.checkpoint import (
CheckpointPolicy,
create_selective_checkpoint_contexts,
)
def _save_sin(ctx, func, *args, **kwargs):
return CheckpointPolicy.MUST_SAVE if func in [torch.ops.aten.sin.default] else CheckpointPolicy.PREFER_RECOMPUTE
def _save_cos(ctx, func, *args, **kwargs):
return CheckpointPolicy.MUST_SAVE if func in [torch.ops.aten.cos.default] else CheckpointPolicy.PREFER_RECOMPUTE
def save_sin():
return create_selective_checkpoint_contexts(_save_sin)
def save_cos():
return create_selective_checkpoint_contexts(_save_cos)
def g(tmp):
return tmp.sin().cos().sin()
def f(x):
out1 = x.sin().cos().sin()
out2 = torch.utils.checkpoint.checkpoint(g, x+1, context_fn=save_cos, use_reentrant=False)
return out1, out2
@torch.compile(backend="aot_eager")
def f_checkpointed(x):
return torch.utils.checkpoint.checkpoint(f, x, context_fn=save_sin, use_reentrant=False)
#torch.cuda.memory._record_memory_history(max_entries=100)
x = torch.randn(16, 16, device='cuda', requires_grad=True)
out = f_checkpointed(x)
#torch.cuda.memory._dump_snapshot(f"mem_prof.pickle")
```
The outer `f()` is supposed to save its first `sin()` output, and the inner `g()` should save its first `cos()` output. When I run with compile, I get these forward/backward graphs:
```
===== Forward graph 0 =====
/home/hirsheybar/local/a/pytorch/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
def forward(self, primals_1: "f32[16, 16][16, 1]cuda:0"):
# File: /home/hirsheybar/local/a/pytorch/tmp3.py:27 in f, code: out1 = x.sin().cos().sin()
sin: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.sin.default(primals_1)
cos: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.cos.default(sin)
sin_1: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.sin.default(cos); cos = None
# File: /home/hirsheybar/local/a/pytorch/tmp3.py:28 in f, code: out2 = torch.utils.checkpoint.checkpoint(g, x+1, context_fn=save_cos, use_reentrant=False)
add: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.add.Tensor(primals_1, 1)
# File: /home/hirsheybar/local/a/pytorch/tmp3.py:24 in g, code: return tmp.sin().cos().sin()
sin_2: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.sin.default(add); add = None
cos_1: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.cos.default(sin_2)
sin_3: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.sin.default(cos_1); cos_1 = None
return (sin_1, sin_3, primals_1, sin, sin_2)
===== Backward graph 0 =====
<eval_with_key>.3 class GraphModule(torch.nn.Module):
def forward(self, primals_1: "f32[16, 16][16, 1]cuda:0", sin: "f32[16, 16][16, 1]cuda:0", sin_2: "f32[16, 16][16, 1]cuda:0", tangents_1: "f32[16, 16][16, 1]cuda:0", tangents_2: "f32[16, 16][16, 1]cuda:0"):
# File: /home/hirsheybar/local/a/pytorch/tmp3.py:24 in g, code: return tmp.sin().cos().sin()
cos_1: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.cos.default(sin_2)
cos_2: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.cos.default(cos_1); cos_1 = None
mul: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.mul.Tensor(tangents_2, cos_2); tangents_2 = cos_2 = None
sin_4: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.sin.default(sin_2); sin_2 = None
neg: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.neg.default(sin_4); sin_4 = None
mul_1: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.mul.Tensor(mul, neg); mul = neg = None
# File: /home/hirsheybar/local/a/pytorch/tmp3.py:28 in f, code: out2 = torch.utils.checkpoint.checkpoint(g, x+1, context_fn=save_cos, use_reentrant=False)
add: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.add.Tensor(primals_1, 1)
# File: /home/hirsheybar/local/a/pytorch/tmp3.py:24 in g, code: return tmp.sin().cos().sin()
cos_3: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.cos.default(add); add = None
mul_2: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.mul.Tensor(mul_1, cos_3); mul_1 = cos_3 = None
# File: /home/hirsheybar/local/a/pytorch/tmp3.py:27 in f, code: out1 = x.sin().cos().sin()
cos: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.cos.default(sin)
cos_4: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.cos.default(cos); cos = None
mul_3: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.mul.Tensor(tangents_1, cos_4); tangents_1 = cos_4 = None
sin_5: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.sin.default(sin); sin = None
neg_1: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.neg.default(sin_5); sin_5 = None
mul_4: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.mul.Tensor(mul_3, neg_1); mul_3 = neg_1 = None
cos_5: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.cos.default(primals_1); primals_1 = None
mul_5: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.mul.Tensor(mul_4, cos_5); mul_4 = cos_5 = None
# File: /home/hirsheybar/local/a/pytorch/tmp3.py:27 in f, code: out1 = x.sin().cos().sin()
add_1: "f32[16, 16][16, 1]cuda:0" = torch.ops.aten.add.Tensor(mul_2, mul_5); mul_2 = mul_5 = None
return (add_1,)
```
Here it looks like we properly saved the `sin()` outputs for the backward, but we are not saving the `cos()` output from `g()` (and are recomputing it instead).
Today, SAC is implemented with a set of `TorchDispatchModes` ([link](https://github.com/pytorch/pytorch/blob/main/torch/utils/checkpoint.py#L1270)). When running with nested regions of SAC, we will end up with multiple instances of these modes on the mode stack, each with their own separate recompute logic. If we want to support this, we would need some way of:
(1) allowing all of these modes to run, and get a chance to specify their recompute preferences (we seem to already [do this](https://github.com/pytorch/pytorch/blob/0e19522122b0d1aa36ac4eceb53d1d5d2cf1caf9/torch/utils/checkpoint.py#L1292))
(2) aggregating the results in the outer-most mode to make a final call about what should be recomputed or saved. One annoyance is that we will need to figure out a way to share this state properly across the modes, e.g. in some side-car state.
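As a sketch of point (2), one possible aggregation rule — modelled here in plain Python without torch, with a hypothetical `resolve` helper and a stand-in for `CheckpointPolicy` — is that any `MUST_*` verdict from a nested SAC region overrides a `PREFER_*` verdict from another:

```python
from enum import IntEnum

# Stand-in for torch.utils.checkpoint.CheckpointPolicy; ordering is chosen so
# that stronger (MUST_*) verdicts compare higher than weaker (PREFER_*) ones.
class Policy(IntEnum):
    PREFER_RECOMPUTE = 0
    PREFER_SAVE = 1
    MUST_RECOMPUTE = 2
    MUST_SAVE = 3

def resolve(verdicts):
    """Hypothetical aggregation across nested SAC modes: the strongest verdict
    wins; two conflicting MUST_* verdicts for the same op are a user error."""
    must = [v for v in verdicts if v >= Policy.MUST_RECOMPUTE]
    if len(set(must)) > 1:
        raise ValueError("conflicting MUST_* policies from nested SAC regions")
    return max(verdicts)

# Outer SAC says "prefer recompute", inner SAC says "must save": save wins,
# matching the desired behavior for aten.max in the Float8Linear example.
assert resolve([Policy.PREFER_RECOMPUTE, Policy.MUST_SAVE]) == Policy.MUST_SAVE
```

This is only one candidate rule; the real implementation would also need the shared side-car state mentioned above so every mode sees the same final decision.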
One meta question: is this something we want to support? (I imagine this might be useful to the float8/torchtitan case, cc @soulitzer @ezyang @albanD @gqchen @pearu @nikitaved @Varal7 @xmfan @vkuzo, although they might have some other workarounds in mind). | module: checkpoint,module: autograd,triaged,needs design | low | Minor |
2,542,978,567 | transformers | Object detection training/fine-tuning for Owl-vit/Owlv2 | ### Feature request
Currently the Owl-vit models support inference and CLIP-style contrastive pre-training, but don't provide a way to train (or fine-tune) the detection part of the model. According to [the paper](https://arxiv.org/pdf/2205.06230), detection training is similar to Detr.
### Motivation
It would be really awesome to be able to train or fine-tune one of these already-existing open-vocabulary object detection models.
### Your contribution
I may be able to help some with this, not sure at present | Good Second Issue,Feature request,Vision | low | Minor |
2,543,039,065 | go | runtime: significant heap profiler memory usage increase in Go 1.23 | ### Go version
go1.23.1
### Output of `go env` in your module/workspace:
```shell
n/a
```
### What did you do?
We upgraded our Go services to Go 1.23.1. All of our services use continuous profiling and have the heap profiler enabled. Go 1.23 increased the default call stack depth for the heap profiler (and others) from 32 frames to 128 frames.
### What did you see happen?
We saw a significant increase in memory usage for one of our services, in particular the `/memory/classes/profiling/buckets:bytes` runtime metric:
<img width="838" alt="Screenshot 2024-09-23 at 10 37 58" src="https://github.com/user-attachments/assets/f6f6eb96-32b5-4a68-8e05-30feea7ec110">
The maximum went from ~50MiB to almost 4GiB, an 80x increase. We also saw a significant increase in the time to serialize the heap profile, from <1 second to over 20 seconds.
We set the environment variable `GODEBUG=profstackdepth=32` to get the old limit, and the profiling bucket memory usage went back down.
### What did you expect to see?
We were surprised at first to see such a significant memory usage increase. However, the affected program is doing just about the worst-case thing for the heap profiler. It parses complex, deeply-nested XML. This results in a massive number of unique, deep stack traces due to the mutual recursion in the XML parser. And the heap profiler never frees any stack trace it collects, so the cumulative size of the buckets becomes significant as more and more unique stack traces are observed.
See this [gist](https://gist.github.com/nsrip-dd/39fa3bbd20439e07a1abf4de709741fd) for a (kind of kludgy) example program which sees a 100x increase in bucket size from Go 1.22 to Go 1.23.
I'm mainly filing this issue to document this behavior. Manually setting `GODEBUG=profstackdepth=32` mitigates the issue. I don't think anything necessarily needs to change in the runtime right now, unless this turns out to be a widespread problem.
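For copy-paste convenience, the mitigation is a one-line environment change on the service invocation (`./your-service` is a placeholder for the affected binary):

```shell
# Restore the pre-Go-1.23 stack depth limit of 32 frames for the heap/block/
# mutex profilers (Go 1.23 raised the default to 128 via profstackdepth).
GODEBUG=profstackdepth=32 ./your-service
```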
cc @felixge | NeedsInvestigation,compiler/runtime | medium | Critical |
2,543,046,382 | rust | `os::unix::process::Command::exec` sometimes allocates, violating async signal safety | I'm trying to build a small linux container runtime as part of another project. I'd like to do the moral equivalent of the following (extracted out and untested):
```rust
fn spawn_in_container(cmd: std::process::Command) -> anyhow::Result<u32> {
    use anyhow::Context;
    let mut args = clone3::Clone3::default();
    args.exit_signal(libc::SIGCHLD as _)
        .flag_newuser()
        .flag_newns()
        .flag_newpid();
    match unsafe { args.call() }.context("clone3")? {
        0 => unsafe { child_after_fork(cmd) },
        pid => Ok(pid as u32),
    }
}
// SAFETY: blah blah blah we can't allocate or anything else
unsafe fn child_after_fork(mut cmd: std::process::Command) -> ! {
    use std::os::unix::process::CommandExt;
    // ... various container setup
    // If successful, this never returns.
    let _e = cmd.exec();
    std::process::abort();
}
```
[`do_exec`](https://github.com/rust-lang/rust/blob/c22a4215a0f6fb676d3774d3763d9da1462414f5/library/std/src/sys/pal/unix/process/process_unix.rs#L288) in `process_unix.rs` makes a big deal about the (un)safety of this operation, so I thought that it would be safe to use [`Command::exec`](https://doc.rust-lang.org/std/os/unix/process/trait.CommandExt.html#tymethod.exec). Unfortunately, I just caught a deadlock:
```
#0 0x000072ca4efb0c0b in __lll_lock_wait_private () from target:/usr/lib/libc.so.6
#1 0x000072ca4efc5138 in malloc () from target:/usr/lib/libc.so.6
#2 0x00006458e3b79d7f in alloc::alloc::alloc () at library/alloc/src/alloc.rs:100
#3 alloc::alloc::Global::alloc_impl () at library/alloc/src/alloc.rs:183
#4 alloc::alloc::{impl#1}::allocate () at library/alloc/src/alloc.rs:243
#5 alloc::raw_vec::RawVec::try_allocate_in<u8, alloc::alloc::Global> () at library/alloc/src/raw_vec.rs:230
#6 alloc::raw_vec::RawVec::with_capacity_in<u8, alloc::alloc::Global> () at library/alloc/src/raw_vec.rs:158
#7 alloc::vec::Vec::with_capacity_in<u8, alloc::alloc::Global> () at library/alloc/src/vec/mod.rs:699
#8 alloc::slice::hack::{impl#1}::to_vec<u8, alloc::alloc::Global> () at library/alloc/src/slice.rs:162
#9 alloc::slice::hack::to_vec<u8, alloc::alloc::Global> () at library/alloc/src/slice.rs:111
#10 alloc::slice::{impl#0}::to_vec_in<u8, alloc::alloc::Global> () at library/alloc/src/slice.rs:478
#11 alloc::vec::{impl#11}::clone<u8, alloc::alloc::Global> () at library/alloc/src/vec/mod.rs:2843
#12 std::sys::os_str::bytes::{impl#4}::clone () at library/std/src/sys/os_str/bytes.rs:73
#13 std::ffi::os_str::{impl#10}::clone () at library/std/src/ffi/os_str.rs:641
#14 std::sys_common::process::CommandEnv::capture () at library/std/src/sys_common/process.rs:45
#15 std::sys_common::process::CommandEnv::capture_if_changed () at library/std/src/sys_common/process.rs:58
#16 std::sys::pal::unix::process::process_common::Command::capture_env () at library/std/src/sys/pal/unix/process/process_common.rs:363
#17 0x00006458e3b71913 in std::sys::pal::unix::process::process_common::Command::exec () at library/std/src/sys/pal/unix/process/process_unix.rs:237
#18 std::os::unix::process::{impl#0}::exec () at library/std/src/os/unix/process.rs:227
```
Something in `capture_env` is allocating, which violates the rules around what you're allowed to do between `fork` or `clone` and `exec`.
As far as I can tell, this isn't documented one way or the other. So maybe this is a documentation bug, or I missed the documentation. Still, the amount of surface area that has the potential to allocate seems very small here - maybe the allocation would be possible to avoid? That would let me and others use the stdlib `Command` for this use-case, which would be pretty nice.
### Meta
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-unknown-linux-gnu
release: 1.81.0
LLVM version: 18.1.7
```
| T-libs,C-discussion | low | Critical |
2,543,060,066 | godot | v4.4.dev2 Changes in the .godot\exported folder break the project: export to web stops working | ### Tested versions
- Reproducible in: v4.4.dev2.official [97ef3c837]
- Not reproducible in: v4.4.dev1.official [28a72fa43]
### System information
Godot v4.4.dev2 - Windows 10.0.19045 - OpenGL 3 (Compatibility) - GeForce GT 740M - Intel(R) Core(TM) i5-3317U CPU @ 1.70GHz (4 Threads)
### Issue description
At some point, the game stopped running on the web. Instead of a scene, there was a blank screen. But scripts can work. The error "tmp_js_export.js:9 WebGL: INVALID_ENUM: disable: invalid capability" in the console doesn't seem to affect this in any way - it shows up on work projects too.
I deleted everything I could from the project, but it didn't help.
But after deleting the .godot\exported folder, the problem is solved.
Also, if you try to roll back the version to v4.4.dev1, the web works.
Game in editor:

Game in web:

### Steps to reproduce
Run the game in remote debugging mode in the browser. Or export as web and run in the browser.
To make the error disappear, you need to delete the folder .godot\exported
### Minimal reproduction project (MRP)
[bug.zip](https://github.com/user-attachments/files/17100399/bug.zip)
| platform:web,needs testing,topic:export | low | Critical |
2,543,131,188 | transformers | Qwen2-VL: Multi-GPU training | ### System Info
- `transformers` version: 4.45.0.dev0
- Platform: Linux-4.18.0-477.10.1.el8_8.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.5
- Huggingface_hub version: 0.24.0
- Safetensors version: 0.4.3
- Accelerate version: 0.34.2
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.2.1+rocm5.7 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: AMD Instinct MI250X
### Who can help?
@muellerzr @ArthurZucker @gante
This issue concerns both the Qwen2-VL model and perhaps the trainer, so I'm not sure who is best suited to answer :)
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Replicating the setup is a bit tough, so this is more of a preliminary discussion issue to see if there is an obvious problem that surfaces.
1. Multi-GPU setup + Huggingface trainer
2. Train Qwen2-VL model with dynamic image resolution
3. The processor creates BatchEncodings with pixel_values, input_ids, attention_mask and image_grid_thw.
4. Run a model forward pass with the model in data parallel mode of the trainer.
We observe that, compared to mono-GPU setups, the rope values are misaligned with the hidden_states size.
Typically, in line 1109 (Qwen2VisionTransformerPretrainedModel forward pass):
```python
def forward(self, hidden_states: torch.Tensor, grid_thw: torch.Tensor) -> torch.Tensor:
hidden_states = self.patch_embed(hidden_states)
rotary_pos_emb = self.rot_pos_emb(grid_thw)
```
we can see that rotary_pos_emb and hidden_states have slightly different sizes in dimension 0.
ex: torch.Size([7820, 40]) torch.Size([7736, 1280])
Upon further inspection, we see rotary_pos_emb has the same dimension as what we would get in mono-GPU runs (as expected, since it only depends on the grid_thw argument). However, hidden_states (which corresponds to the pixel values) has a different size.
This makes training crash:
```bash
File "/lus/home/CT10/cad15443/mfaysse/colpali/venv/lib/python3.11/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 395, in forward
q = apply_rotary_pos_emb_vision(q.unsqueeze(0), rotary_pos_emb).squeeze(0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lus/home/CT10/cad15443/mfaysse/colpali/venv/lib/python3.11/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 254, in apply_rotary_pos_emb_vision
output = (tensor * cos) + (rotate_half(tensor) * sin)
~~~~~~~^~~~~
RuntimeError: The size of tensor a (7736) must match the size of tensor b (7808) at non-singleton dimension 1
```
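A framework-free sketch of the invariant that breaks (names and shapes are illustrative, not the actual transformers internals): the first dimension of the rotary embeddings is fully determined by `image_grid_thw`, and must equal the number of patch rows in `pixel_values`:

```python
def vision_tokens_from_grid(grid_thw):
    """Number of vision tokens implied by (t, h, w) grids -- this is what
    rot_pos_emb's first dimension is derived from."""
    return sum(t * h * w for t, h, w in grid_thw)

# Hypothetical grid mirroring the mismatch above: the grids imply 7820
# tokens, but the gathered pixel_values only have 7736 rows on this rank.
grid_thw = [(1, 68, 115)]          # 1 * 68 * 115 = 7820
pixel_rows = 7736
assert vision_tokens_from_grid(grid_thw) == 7820
assert vision_tokens_from_grid(grid_thw) != pixel_rows  # the multi-GPU bug
```

This suggests the data-parallel split is sharding `pixel_values` without sharding `image_grid_thw` consistently.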
### Expected behavior
[edited] see below for more details being investigated
Thanks ! | Distributed Training / Models,trainer,Feature request,bug,Vision,Multimodal | low | Critical |
2,543,135,786 | ui | [bug]: https://ui.shadcn.com/r/colors/[sky | sky etc.].json route does not exist in https://ui.shadcn.com/r | ### Describe the bug
npx shadcn@latest add form
✔ Checking registry.
✔ Installing dependencies.
⠧ Updating files.
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
The component at https://ui.shadcn.com/r/colors/sky.json was not found.
It may not exist at the registry. Please make sure it is a valid component.
### Affected component/components
all
### How to reproduce
Run `npx shadcn@latest add form`; the full error output is in the description above.
### Codesandbox/StackBlitz link
https://codesandbox.io/p/sandbox/vigorous-currying-8653mn | Just because it's required
### Logs
_No response_
### System Info
```bash
Linux/Manjaro
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,543,189,016 | go | build: amd64 builders don't support SHA extensions | The amd64 builders don't support the [SHA x86-64 extensions](https://en.wikipedia.org/wiki/Intel_SHA_extensions), so our crypto/sha256 assembly is untested, in violation of the [Assembly Policy](https://go.dev/wiki/AssemblyPolicy).
/cc @golang/security @golang/release | NeedsFix | low | Minor |
2,543,194,052 | rust | rustc-LLVM ERROR: section size does not fit in a uint32_t | Apparently I'm cursed with linker/llvm errors, but building the example examples/ssr_axum in https://github.com/benwis/thaw_llvm_error with the latest stable rust produces this error
```
Compiling thaw_utils v0.1.0-beta3 (/celCluster/projects/thaw/thaw_utils)
Compiling thaw_components v0.2.0-beta3 (/celCluster/projects/thaw/thaw_components)
Compiling thaw v0.4.0-beta3 (/celCluster/projects/thaw/thaw)
Compiling demo v0.1.0 (/celCluster/projects/thaw/demo)
rustc-LLVM ERROR: section size does not fit in a uint32_t
error: could not compile `demo` (lib)
Error: Failed to build ssr_axum
```
The changes undone by this commit might have something to do with it: https://github.com/leptos-rs/leptos/pull/3011/files/6206073c5e50aac57e99e27b4993645d4778a8a8..0375df5431121752579dd50e8aed4393662d8cdc
| A-LLVM,C-bug | low | Critical |
2,543,204,023 | go | build: arm64 builders don't support SHA-512 extensions | The arm64 builders don't support the [SHA-512 Armv8 extensions](https://developer.arm.com/documentation/109697/0100/Feature-descriptions/The-Armv8-2-architecture-extension?lang=en#md447-the-armv82-architecture-extension__FEAT_SHA512), so our crypto/sha512 assembly is untested, in violation of the [Assembly Policy](https://go.dev/wiki/AssemblyPolicy).
/cc @golang/security @golang/release | NeedsFix | low | Minor |
2,543,225,569 | opencv | Fatal error in cv::dnn::function readNetFromONNX() | ### System Information
OpenCV version: 4.6.0 (error persists in 4.10.0)
Operating System / Platform: Ubuntu 24
Compiler & compiler version: GCC 13.2.0
Default settings in CLion
### Detailed description
I have a working application that performs detection using ssd_mobilenet_v2, powered solely by OpenCV 4.6.0. An attempt to switch to ssd_mobilenet_v3, using the ONNX format, results in this error:
```
[ERROR:0@0.104] global ./modules/dnn/src/onnx/onnx_importer.cpp (1018) handleNode DNN/ONNX: ERROR during processing node with 1 inputs and 1 outputs: [ReduceMax]:(onnx_node!/transform/ReduceMax) from domain='ai.onnx'
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.6.0) ./modules/dnn/src/onnx/onnx_importer.cpp:1040: error: (-2:Unspecified error) in function 'handleNode'
> Node [ReduceMax@ai.onnx]:(onnx_node!/transform/ReduceMax) parse error: OpenCV(4.6.0) ./modules/dnn/src/layers/reduce_layer.cpp:327: error: (-215:Assertion failed) inputs.size() > 0 in function 'getMemoryShapes'
```
In OpenCV 4.10.0 built from source, this error is replaced by a different one, but in the same function:
```
[ERROR:0@0.411] global onnx_importer.cpp:1036 handleNode DNN/ONNX: ERROR during processing node with 6 inputs and 1 outputs: [Concat]:(onnx_node!/transform/Concat_2) from domain='ai.onnx'
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.10.0-dev) /home/kuver/Downloads/opencv-4.x/modules/dnn/src/onnx/onnx_importer.cpp:1058: error: (-2:Unspecified error) in function 'handleNode'
> Node [Concat@ai.onnx]:(onnx_node!/transform/Concat_2) parse error: OpenCV(4.10.0-dev) /home/kuver/Downloads/opencv-4.x/modules/dnn/src/layers/concat_layer.cpp:104: error: (-215:Assertion failed) curShape.size() == outputs[0].size() in function 'getMemoryShapes'
```
### Steps to reproduce
Clone https://github.com/Lesaje/sam/tree/bug and specify some video on line 21 of `src/Detection/Detection.h` (`std::string video_file;`); the actual content of the video doesn't matter. To verify that `readNetFromTensorflow()` works fine, change line 15 of the `src/Detection/Model/SSDModel.cpp` constructor to `loadModelFromTf();`.
I've also tried [these](https://drive.google.com/drive/folders/1F8KYbW_DJjxCGAjqhm5HDVDFMiKK7mto) ONNX files; some produce the same error and some a different one, but in the same spot, when trying to load the model: `net = cv::dnn::readNetFromONNX(model_path);`, line 104 of `src/Detection/Model/SSDModel.cpp`.
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [x] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: dnn (onnx) | low | Critical |
2,543,230,732 | pytorch | Allow Inductor to Compose with FakeTensorMode to Estimate Memory Usage | ### 🚀 The feature, motivation and pitch
Some users would like to compile and run with fake tensor to estimate memory usage. We would need to instantiate tensors with a constructor that composes with TorchDispatchMode, and likely make some other changes related to autotuning / skipping invocation of triton kernels.
See repro below:
```
import sys
from typing import Any, Callable, List, Optional, Tuple

import torch
from torch._subclasses.fake_tensor import FakeTensorMode
from torch.fx.experimental.symbolic_shapes import ShapeEnv


class MyModule(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = torch.nn.Linear(100, 10)

    def forward(self, x):
        # return torch.nn.functional.relu(self.lin(x))
        return self.lin(x)


def _assert_and_set_pt2_config(config_fqn: str, value: Any) -> None:
    config_path_parts = config_fqn.split(".")[1:]
    config_obj = torch
    for i, attr in enumerate(config_path_parts):
        if i == len(config_path_parts) - 1:
            setattr(config_obj, attr, value)
            return
        else:
            config_obj = getattr(config_obj, attr)


def main(argv: List[str]) -> None:
    fake_mode = FakeTensorMode(allow_non_fake_inputs=True)
    _assert_and_set_pt2_config("torch._dynamo.config.suppress_errors", False)
    _assert_and_set_pt2_config(
        "torch._functorch.config.activation_memory_budget_solver",
        "dp",
    )
    _assert_and_set_pt2_config(
        "torch._functorch.config.activation_memory_budget_runtime_estimator",
        "flops",
    )
    _assert_and_set_pt2_config(
        "torch._functorch.config.activation_memory_budget",
        0.5,
    )

    with fake_mode:
        module = MyModule()
        module.compile(
            backend="inductor",
            dynamic=None,
            options={
                "triton.cudagraphs": False,
                "force_shape_pad": False,
            },
        )

    with fake_mode:
        train_input = torch.randn(5, 100)
        ret = module(train_input)
        print(ret.numel(), ret.element_size())


def invoke_main() -> None:
    main(sys.argv[1:])


if __name__ == "__main__":
    invoke_main()  # pragma: no cover
```
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @zou3519 @bdhirsh | triaged,oncall: pt2,module: fakeTensor,module: inductor,module: pt2-dispatcher | low | Critical |
2,543,236,275 | godot | 3D editor Cannot Select Imported Meshes (Make Local Option) | ### Tested versions
Reproducible in all Godot 3.x and 4.x versions (long-standing bug)
### System information
Windows 10 Pro x64, Godot 4.3 - 3.0, render mode doesn't matter
### Issue description
Ever since Godot 3.0 there has been a problem with selecting imported meshes in the 3D editor: clicking them does nothing, so you have to use box select (and hit the mesh's origin) in order to edit it, or pick it from the scene tree.
### Steps to reproduce
## Example: reproducing it in any Godot version
https://github.com/user-attachments/assets/e6dd61cd-de4c-44aa-b45b-7fb77411b94b
Check the Bongo Cat's mouse: I'm clicking, but nothing gets selected except the interior of the car.
The only way to select such meshes is to use box select or the scene tree, which is not ideal for complex scenes.
### Minimal reproduction project (MRP)
Not needed; drag any imported mesh into the scene, right-click it, select Make Local, and you're done. | bug,topic:editor,topic:3d | low | Critical |
2,543,247,496 | opencv | cv2.resizeWindow doesn't upscale the displayed image anymore. | ### System Information
OpenCV python version: >=4.10.0.82
Operating System / Platform: Ubuntu 20.04 or Ubuntu 24.04
Python version: 3.9.20
### Detailed description
### Expected behaviour
Calling cv2.resizeWindow on a cv2.WINDOW_NORMAL namedWindow used to upscale the image to the new size if the window size was larger than the image size.
### Actual behaviour
Since version `4.10.0.82` (at least the first PyPI version where this issue occurs), the window gets resized, but the image stays at its original size if you resize the window to be bigger than the image (downscaling still works as expected).
### Steps to reproduce
```sh
python3 -m pip install "opencv-python==4.10.0.84"
```
Run the attached example:
```sh
tar xvf MREResizeWindow.tar.gz && cd MREresizeWindow
python3 mre_resize_window.py
```
[MREResizeWindow.tar.gz](https://github.com/user-attachments/files/17101471/MREResizeWindow.tar.gz)
### Expected (ran on 4.9.0.80):

### Actual (ran on 4.10.0.84):

### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: highgui-gui | low | Minor |
2,543,294,378 | pytorch | torch._dynamo.exc.InternalTorchDynamoError when tracing through torch.ops.prim.NumToTensor | ### 🐛 Describe the bug
Calling `torch.export` on `torch.ops.prim.NumToTensor` raises an internal Dynamo error.
```
import torch
import torch.nn as nn
from torch.nn import functional as F


class PrimIntToTensorModule(torch.nn.Module):
    constant: int

    def __init__(self, constant):
        super().__init__()
        self.constant = constant

    def forward(self):
        return torch.ops.prim.NumToTensor(self.constant)


constant = 5
model = PrimIntToTensorModule(constant)
ep = torch.export.export(model, ())
print(ep)
```
### Error logs
```
  File "/home/anieto/Groq/test.py", line 21, in <module>
    ep = torch.export.export(model, ())
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/export/__init__.py", line 449, in export
    return export__RC__(
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_export/__init__.py", line 258, in export__RC__
    return _export(
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_export/__init__.py", line 567, in wrapper
    return fn(*args, **kwargs)
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_export/__init__.py", line 604, in _export
    gm_torch_level = _export_to_torch_ir(
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_export/__init__.py", line 514, in _export_to_torch_ir
    gm_torch_level, _ = torch._dynamo.export(
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 1342, in inner
    result_traced = opt_f(*args, **kwargs)
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 489, in _fn
    return fn(*args, **kwargs)
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 655, in catch_errors
    return callback(frame, cache_entry, hooks, frame_state)
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 383, in _convert_frame_assert
    compiled_product = _compile(
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 664, in _compile
    raise InternalTorchDynamoError(str(e)).with_traceback(
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 645, in _compile
    guarded_code = compile_inner(code, one_graph, hooks, transform)
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 244, in time_wrapper
    r = func(*args, **kwargs)
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 562, in compile_inner
    out_code = transform_code_object(code, transform)
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1033, in transform_code_object
    transformations(instructions, code_options)
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 151, in _fn
    return fn(*args, **kwargs)
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 527, in transform
    tracer.run()
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2123, in run
    super().run()
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 818, in run
    and self.step()
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 781, in step
    getattr(self, inst.opname)(inst)
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 470, in wrapper
    return inner_fn(self, inst)
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1213, in CALL_FUNCTION
    self.call_function(fn, args, {})
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 652, in call_function
    self.push(fn.call_function(self, args, kwargs))
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_dynamo/variables/torch.py", line 599, in call_function
    tensor_variable = wrap_fx_proxy(
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 1283, in wrap_fx_proxy
    return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
  File "/home/anieto/.local/lib/python3.10/site-packages/torch/_dynamo/variables/builder.py", line 1399, in wrap_fx_proxy_cls
    raise InternalTorchDynamoError(
torch._dynamo.exc.InternalTorchDynamoError: `example_value` needs to be a `FakeTensor`wrapped by this instance of Dynamo. Found: 5
```
### Minified repro
_No response_
### Versions
Collecting environment information...
PyTorch version: 2.2.0.dev20231121+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-1023-gcp-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU @ 2.20GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 0
BogoMIPS: 4399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 55 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown
Versions of relevant libraries:
[pip3] mypy==1.11.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.1
[pip3] onnx==1.15.0
[pip3] torch==2.2.0.dev20231121+cpu
[pip3] torchaudio==2.2.0.dev20231121+cpu
[pip3] torchvision==0.17.0.dev20231121+cpu
[conda] Could not collect
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: pt2,export-triaged,oncall: export | low | Critical |
2,543,296,217 | ollama | Support for jinaai/jina-embeddings-v3 embedding model | jina-embeddings-v3 is a multilingual multi-task text embedding model designed for a variety of NLP applications. Based on the [Jina-XLM-RoBERTa architecture](https://huggingface.co/jinaai/xlm-roberta-flash-implementation), this model supports Rotary Position Embeddings to handle long input sequences up to 8192 tokens. Additionally, it features 5 LoRA adapters to generate task-specific embeddings efficiently.
Key Features:
- Extended Sequence Length: Supports up to 8192 tokens with RoPE.
- Task-Specific Embedding: Customize embeddings through the task argument with the following options:
  - `retrieval.query`: Used for query embeddings in asymmetric retrieval tasks
  - `retrieval.passage`: Used for passage embeddings in asymmetric retrieval tasks
  - `separation`: Used for embeddings in clustering and re-ranking applications
  - `classification`: Used for embeddings in classification tasks
  - `text-matching`: Used for embeddings in tasks that quantify similarity between two texts, such as STS or symmetric retrieval tasks
- Matryoshka Embeddings: Supports flexible embedding sizes (32, 64, 128, 256, 512, 768, 1024), allowing for truncating embeddings to fit your application.
| model request | medium | Critical |
2,543,333,446 | vscode | Long chat attachment filenames cause chat input to overflow |
Type: <b>Bug</b>
1. Attach a file with a very long name to chat
**Bug**
The widget expands so that you can't see the submit buttons anymore:

VS Code version: Code - Insiders 1.94.0-insider (Universal) (1926933184de3f77ac7176e9fc302c54bd9634b0, 2024-09-23T05:12:01.964Z)
OS version: Darwin arm64 23.6.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M2 Max (12 x 2400)|
|GPU Status|2d_canvas: unavailable_software<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: disabled_software<br>multiple_raster_threads: enabled_on<br>opengl: disabled_off<br>rasterization: disabled_software<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: disabled_software<br>video_encode: disabled_software<br>webgl: unavailable_software<br>webgl2: unavailable_software<br>webgpu: unavailable_software<br>webnn: disabled_off|
|Load (avg)|4, 4, 4|
|Memory (System)|64.00GB (1.13GB free)|
|Process Argv|--crash-reporter-id 0fffb5da-9cd7-46fd-9e7f-a1564e8c5fda|
|Screen Reader|no|
|VM|0%|
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
vsaa593cf:30376535
py29gd2263:31024238
c4g48928:30535728
a9j8j154:30646983
962ge761:30841072
pythongtdpath:30726887
welcomedialog:30812478
pythonnoceb:30776497
asynctok:30898717
dsvsc014:30777825
dsvsc015:30821418
pythonmypyd1:30859725
h48ei257:31000450
pythontbext0:30879054
accentitlementst:30870582
dsvsc016:30879898
dsvsc017:30880771
dsvsc018:30880772
cppperfnew:30980852
pythonait:30973460
724cj586:31013169
a69g1124:31018687
dvdeprecation:31040973
dwnewjupytercf:31046870
impr_priority:31057980
nativerepl1:31134653
refactort:31084545
pythonrstrctxt:31093868
flighttreat:31119334
wkspc-onlycs-t:31132770
nativeloc1:31118317
wkspc-ranged-t:31125599
jh802675:31132134
e80f6927:31120813
ei213698:31121563
i21gd607:31141543
notype1:31143044
b50ed353:31138333
showbadge:31139796
f8igb616:31140137
```
</details>
<!-- generated by issue reporter --> | bug,panel-chat | low | Critical |
2,543,339,658 | go | proposal: testing/fstest: add MapFS.CopyFrom(fs.FS) | ### Proposal Details
Since it was concluded that `fs.FS` is necessarily read-only, the only alternative is for every writable filesystem to implement its own form of [`os.CopyFS`](https://pkg.go.dev/os#CopyFS). This proposal is about doing so for [`MapFS`](https://pkg.go.dev/testing/fstest#MapFS).
For details, see:
* https://github.com/golang/go/issues/45757#issuecomment-1640530305 | Proposal | low | Major |
2,543,372,466 | godot | GDScript test suite fails with MinGW-LLVM build | ### Tested versions
v4.4.dev.custom_build [42a330e6e]
### System information
Godot v4.4.dev (42a330e6e) - Windows 10.0.22621 - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 Ti (NVIDIA; 32.0.15.6109) - AMD Ryzen 7 3700X 8-Core Processor (16 threads)
### Issue description
Toolchain used: https://github.com/mstorsjo/llvm-mingw/releases (both 20240917 with LLVM 19.1.0 final and llvm-mingw 20240619 with LLVM 18.1.8)
Output:
```
[doctest] doctest version is "2.4.11"
[doctest] run with "--help" for options
Could not load project settings.
ERROR: Could not open specified test directory.
at: (modules\gdscript\tests\gdscript_test_runner.cpp:334)
===============================================================================
./modules/gdscript/tests/gdscript_test_runner_suite.h:43:
TEST SUITE: [Modules][GDScript]
TEST CASE: Script compilation and runtime
modules\gdscript\tests\gdscript_test_runner.cpp:190: FATAL ERROR: An error occurred while making the tests.
./modules/gdscript/tests/gdscript_test_runner_suite.h:49: FATAL ERROR: REQUIRE( fail_count == 0 ) is NOT correct!
values: REQUIRE( -1 == 0 )
logged: Make sure `*.out` files have expected results.
All GDScript tests should pass.
ERROR: Failed to create file "res://.editorconfig".
at: EditorPaths (editor\editor_paths.cpp:271)
ERROR: Failed to get attributes for: res://.editorconfig
at: (drivers\windows\file_access_windows.cpp:473)
Could not load project settings.
===============================================================================
./modules/gdscript/tests/test_completion.h:223:
TEST SUITE: [Modules][GDScript][Completion]
TEST CASE: [Editor] Check suggestion list
./modules/gdscript/tests/test_completion.h:87: FATAL ERROR: Invalid test directory.
WARNING: Property not found: gui/theme/lcd_subpixel_layout
at: get_setting_with_override (core\config\project_settings.cpp:375)
===============================================================================
./modules/gdscript/tests/test_lsp.h:391:
TEST SUITE: [Modules][GDScript][LSP]
TEST CASE: [workspace][resolve_symbol]
./modules/gdscript/tests/test_lsp.h:90: FATAL ERROR: REQUIRE( err == OK ) is NOT correct!
values: REQUIRE( 31 == 0 )
logged: Could not open specified root directory
================================================================
CrashHandlerException: Program crashed with signal 11
Engine version: Godot Engine v4.4.dev.custom_build (de106b9cf3557cfc3dcccad5e62d46d845e32730)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[1] _gnu_exception_handler (../crt/crt_handler.c:0)
[2] GDScriptTests::initialize(String const&) (./modules/gdscript/tests/test_lsp.h:91)
-- END OF BACKTRACE --
================================================================
```
### Steps to reproduce
Build Godot using MinGW-LLVM (`use_mingw=yes` and `use_llvm=yes`) with tests enabled.
Run it with `--test`.
### Minimal reproduction project (MRP)
- | bug,platform:windows,needs testing,topic:tests | low | Critical |
2,543,390,919 | ollama | https://ollama.com/install.sh creates contrib.list which just creates tons of warnings | ### What is the issue?
After running `https://ollama.com/install.sh` I now have a `/etc/apt/sources.list.d/contrib.list` which I never asked for, and every `apt-get update` command now emits tons of warnings:
```
W: Target Packages (contrib/binary-amd64/Packages) is configured multiple times in /etc/apt/sources.list:4 and /etc/apt/sources.list.d/contrib.list:4
W: Target Packages (contrib/binary-i386/Packages) is configured multiple times in /etc/apt/sources.list:4 and /etc/apt/sources.list.d/contrib.list:4
W: Target Packages (contrib/binary-all/Packages) is configured multiple times in /etc/apt/sources.list:4 and /etc/apt/sources.list.d/contrib.list:4
W: Target Translations (contrib/i18n/Translation-en_IE) is configured multiple times in /etc/apt/sources.list:4 and /etc/apt/sources.list.d/contrib.list:4
(etc. for a couple 200 lines in my case)
```
Of course, I'm going to fix the issue by removing `/etc/apt/sources.list.d/contrib.list`, but I believe it shouldn't be created in the first place. And if it has to be, please clean up after your script by removing it yourself. A 2024 version of "Be kind, rewind", of sorts ;-)
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.11 | bug,linux,install | low | Minor |
2,543,400,759 | pytorch | Setting a `complex` tensor to `linalg.norm()` returns a `float` tensor | ### 🐛 Describe the bug
Passing an `int` tensor to [linalg.norm()](https://pytorch.org/docs/stable/generated/torch.linalg.norm.html) produces the error shown below:
```python
import torch
from torch import linalg
my_tensor = torch.tensor([8, -3, 0, 1])
linalg.norm(input=my_tensor) # Error
```
> RuntimeError: linalg.vector_norm: Expected a floating point or complex tensor as input. Got Long
But passing a `complex` tensor to `linalg.norm()` returns a `float` tensor, as shown below:
```python
import torch
from torch import linalg
my_tensor = torch.tensor([8.+0.j, -3.+0.j, 0.+0.j, 1.+0.j])
linalg.norm(input=my_tensor)
# tensor(8.6023)
linalg.norm(input=my_tensor).dtype
# torch.float32
```
So I set `dtype=torch.complex64` in `linalg.norm()`, but it still returns a `float` tensor, as shown below:
```python
import torch
from torch import linalg
my_tensor = torch.tensor([8.+0.j, -3.+0.j, 0.+0.j, 1.+0.j])
# ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓
linalg.norm(input=my_tensor, dtype=torch.complex64)
# tensor(8.6023)
linalg.norm(input=my_tensor, dtype=torch.complex64).dtype
# torch.float32
```
### Versions
```python
import torch
torch.__version__ # '2.3.0'
```
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano | triaged,module: linear algebra | low | Critical |
2,543,444,705 | pytorch | `2` and `-2` for `ord` argument of `linalg.norm()` should be explained more clearly | ### 📚 The doc issue
[The doc](https://pytorch.org/docs/stable/generated/torch.linalg.norm.html) of `linalg.norm()` explains the supported norms for the `ord` argument, but `2` and `-2` are not explained clearly; it just says `largest singular value` and `smallest singular value` respectively, as shown below:
|`ord`|norm for matrix|norm for vector|
|-|-|-|
|...|...|...|
|`2`|largest singular value|as below|
|`-2`|smallest singular value|as below|
|...|...|...|
### Suggest a potential alternative/fix
So, the words `SVD(Singular Value Decomposition)` should be added to them as shown below:
|`ord`|norm for matrix|norm for vector|
|-|-|-|
|...|...|...|
|`2`|The largest singular value of SVD(Singular Value Decomposition)|as below|
|`-2`|The smallest singular value of SVD|as below|
|...|...|...|
cc @svekars @brycebortree @sekyondaMeta @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano | module: docs,triaged,module: linear algebra | low | Minor |
2,543,454,920 | go | cmd/go: doc: doesn't show embedded struct's methods | ### Proposal Details
Hi!
I was recently using a new GO module in my project.
There was code like this:
```go
type (
A struct {}
B struct { *A }
)
func (*A) Foo() {}
```
Type B, however, was a large struct with many methods and exported fields.
I had a code example that used method `Foo` and I just wanted to know more about it.
I used `go doc B.Foo` but it said `doc: no method or field B.Foo in package`.
I suppose that, because a call like `(&B{}).Foo()` is 100% possible, the above `go doc` call should return the documentation for `Foo` | help wanted,NeedsInvestigation,GoCommand | low | Major |
2,543,464,966 | flutter | [video_player] Add support for transparency | It looks like https://pub.dev/packages/video_player doesn't document any support for [alpha channels](https://pixelbakery.com/recipes/video-image-formats), but it would be nice if it could accurately render videos with transparency. | c: new feature,a: video,p: video_player,team-ecosystem,P3,triaged-ecosystem | low | Minor |
2,543,492,800 | transformers | Add support for OmDet-Turbo multi-gpu inference with DataParallel. | ### Feature request
OmDet-Turbo will be added to Transformers soon; however, it won't support using DataParallel for multi-GPU inference, at least initially.
### Motivation
If there is a large demand to support multi-gpu inference with DataParallel.
### Your contribution
A PR will be created if there is demand for it. | New model,Distributed Training / Models,Feature request | low | Minor |
2,543,526,795 | flutter | [go_router] improve readability of go_router prior to guard implementation | ### Use case
After having a look into [the guard proposal](http://flutter.dev/go/go-router-redirect) and what it'd take to implement that; I believe it would be beneficial to improve this package readability prior to implementation in the following aspects:
1. It is hard to know if an `Uri uri` or `String path` or `String loc` parameter is in fact a pattern or simply an already "compiled" path. You may have to check the wider context.
2. There are some function nesting going on, sometimes 3 level deep, which is hard to read in the redirects
3. The logic for redirection is mainly in configuration.dart, but is still a bit spread in other files
### Proposal
1. Create a new `RoutePattern` class that would serve as a wrapper for the functions in `path_utils` and also indicate that a parameter is in fact a route pattern.
2. Concentrate the redirection logic in a redirection.dart file and flatten the function nesting.
I believe the guard proposal would be easier to implement after those changes. | c: new feature,package,c: proposal,P3,p: go_router,team-go_router,triaged-go_router | low | Minor |
2,543,597,989 | go | x/build/cmd/gomote: don't duplicate logic present in golangbuild in the repro subcommand | Soon, `gomote repro` is going to assume some logic in `golangbuild` as part of its test command output, specifically for the no-network builders. This is unfortunate, since we're duplicating this subtle logic in multiple places.
Let's strive to avoid that in the future. In particular, test execution in `golangbuild` has some complexity around test execution. The two biggest ones are disabling the network on some builders and copying nested submodules out of context to test them (#34352). It would make sense to turn `golangbuild` into its own reproducer, like we do for the environment. This could work but needs thought.
For now, this issue tracks this particular consequence of the complexity of test execution seeping into other parts of the codebase: duplicating the no-network logic for printing the test command in the `gomote repro` command. | Builders,NeedsInvestigation | low | Minor |
2,543,633,315 | yt-dlp | Broken ivi support | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Rus
### Provide a description that is worded well enough to be understood
Cookies don't work: videos without a subscription download only once in a while, and videos with a subscription don't download at all.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--cookies', 'ivi.ru_cookies.txt', 'https://www.ivi.ru/watch/vasha-chest-2024/538722']
[debug] Encodings: locale cp1251, fs utf-8, pref cp1251, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg N-117043-g8707c8660d-20240915 (setts), ffprobe N-117043-g8707c8660d-20240915
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1830 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.08.06 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.08.06 from yt-dlp/yt-dlp)
[ivi] Extracting URL: https://www.ivi.ru/watch/vasha-chest-2024/538722
[ivi] 538722: Downloading timestamp JSON
[ivi] 538722: Downloading video JSON
[ivi] 538722: Downloading video JSON
ERROR: [ivi] 538722: Unable to download video 538722: Не смогли определить версию по переданным site=s183 и app_version=None (translation: "Could not determine the version from the provided site=s183 and app_version=None")
File "yt_dlp\extractor\common.py", line 740, in extract
File "yt_dlp\extractor\ivi.py", line 133, in _real_extract
```
| site-bug,triage | low | Critical |
2,543,706,203 | flutter | Add `AnimationTheme` to specify default curve(s) and duration(s) | ### Use case
There are a ton of super useful [implicitly animated widgets](https://api.flutter.dev/flutter/widgets/ImplicitlyAnimatedWidget-class.html). They require an explicit duration argument and have a static default argument for curve (`Curves.linear`).
The most obvious way to handle these arguments across your app is to create globals and/or create your own inherited widget to hold them. This works okay, but it's annoying to have to specify the relevant curve(s) and duration(s) in every relevant place. It's even more annoying if you:
1. Use different durations and/or curves for different scenarios
2. Want to dynamically override the animation duration and/or curve according to user settings or situation (e.g. tuning a game to feel faster and snappier when a user is on a streak)
3. Are on a large team with specific animation design guidelines and don't want to rely on developers remembering to pipe the correct values into the relevant widgets
This is an especially good candidate for inclusion in the framework itself, instead of relying on the community to build packages or roll their own solutions, because the core value-add (having sane defaults and optional arguments on all relevant built-ins) can only be achieved by changing the built-in widgets.
### Proposal
1. Create an `AnimationTheme` inherited widget and `AnimationThemeData` class that hold a default `Curve` and `Duration`
2. Add an `AnimationTheme.of(context)` static method that retrieves the animation theme and listens to any changes in the theme
3. Add `AnimationThemeData` to `ThemeData` and include default values
4. Have all implicitly animated widgets inherit default values (with chained inheritance resolution if `inherit=true`) from the nearest `AnimationTheme`/`Theme`
5. Make the `duration` and `curve` arguments optional and nullable with no default values and use them as overrides for the theme values when non-null
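The fallback order described above (explicit widget argument, then the nearest `AnimationTheme`, then the `ThemeData` default) can be sketched generically. Python is used purely for illustration here; `resolve` and its argument names are invented for this sketch, not proposed API:

```python
# Hypothetical lookup order for an implicitly animated widget's `duration`
# (or `curve`): explicit argument > nearest AnimationTheme > ThemeData default.
def resolve(explicit=None, nearest_theme=None, theme_data_default=None):
    for candidate in (explicit, nearest_theme, theme_data_default):
        if candidate is not None:
            return candidate
    raise ValueError("ThemeData should always supply a default")

print(resolve(None, None, "300ms"))  # 300ms, falls through to the ThemeData default
print(resolve("100ms", "200ms", "300ms"))  # 100ms, explicit argument wins
```

This is the same pattern `TextStyle`/`DefaultTextStyle` already use, which is part of why the proposal composes well with existing theming.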
Open questions:
1. Should there be other arguments such as `curveFast`, `durationFast`, etc., equivalent to `bodySmall`, `bodyMedium`, and so on? Devs could then opt in to alternate theme-held animations
2. Are there other widgets that should reference the theme-held values? Maybe `Navigator` and its default transitions? What about `AnimationController`?
3. Should we lerp curves and durations in some clever way (like some theme colors etc do) when their theme values change, or should we defer to the individual widgets to handle changes as they see fit?
On all of the above I feel the best approach is to launch an MVP and then see what feature requests devs make, as all these extra features are easy to add later if needed but hard to roll back once shipped.
2,543,709,534 | go | x/website: go.dev tends not to show up on Google search results, but tip.golang.org does | ### Go version
n/a
### Output of `go env` in your module/workspace:
```shell
n/a
```
### What did you do?
I performed Google searches for various documents that I know are hosted on go.dev. Example search queries:
* `golang doc comments`
* `golang 1.23 release notes`
### What did you see happen?
The copy of the document on tip.golang.org was the first Google search result each time.
Here are the two examples:


(I noticed this on my personal Google account, but I took those two screenshots from a Chrome incognito window.)
### What did you expect to see?
I expect the canonical version of these documents to be the first search result, or at least near the top.
* `golang doc comments`:
- https://tip.golang.org/doc/comment is the first result
- https://go.dev/doc/comment isn't anywhere in the first two pages of search results
* `golang 1.23 release notes`:
- https://tip.golang.org/doc/go1.23 is the first result
- https://go.dev/doc/go1.23 isn't anywhere in the first two pages of search results
- Weirdly, the second search result *is* a go.dev URL, the "Go 1.23 is released" blog post at https://go.dev/blog/go1.23.
I guess for most projects I probably wouldn't file an SEO ticket, but you folks are mostly paid by Google, so I figure you ought to be able to sort it out :) | NeedsInvestigation,website | low | Minor |
2,543,761,548 | TypeScript | TypeScript LSP crashes when a project with .ts videos is opened | Type: <b>Bug</b>
TS Server fatal error: Cannot create a string longer than 0x1fffffe8 characters
**TypeScript Version:** 5.5.4
**Steps to reproduce crash**
1. Open a project that contains a video file with a `.ts` extension (i.e. an MPEG transport-stream video, not a TypeScript source file)
2. Open a typescript source file
**TS Server Log**
[tsserver.log](https://github.com/user-attachments/files/16913079/tsserver.log)
VS Code version: Code 1.93.0 (4849ca9bdf9666755eb463db297b69e5385090e3, 2024-09-04T13:02:38.431Z)
OS version: Darwin x64 23.2.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|13th Gen Intel(R) Core(TM) i9-13900K (32 x 3000)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|4, 3, 2|
|Memory (System)|64.00GB (3.93GB free)|
|Process Argv|--crash-reporter-id 77a646e2-87f9-4a1b-994c-3ed1a2768762|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (57)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-color|ans|0.4.5
exemplary|ant|0.0.1
asciidoctor-vscode|asc|3.3.1
biome|bio|2.3.0
vscode-tailwindcss|bra|0.12.10
vsc-jetbrains-icons-enhanced|Bre|2.2.0
js-auto-backticks|cha|1.2.0
dart-code|Dar|3.96.0
flutter|Dar|3.96.0
macos-modern-theme|dav|2.3.19
vscode-eslint|dba|3.0.10
EditorConfig|Edi|0.16.4
vsc-material-theme|Equ|34.5.2
vsc-material-theme-icons|equ|3.8.8
prettier-vscode|esb|11.0.0
copilot|Git|1.229.0
copilot-chat|Git|0.20.0
vscode-pull-request-github|Git|0.96.0
todo-tree|Gru|0.0.226
discord-vscode|icr|5.8.0
elixir-ls|Jak|0.23.1
svg|joc|1.5.4
vscord|Leo|5.2.13
MagicPython|mag|1.1.0
moon-console|moo|0.13.0
vscode-docker|ms-|1.29.2
debugpy|ms-|2024.10.0
python|ms-|2024.14.0
vscode-pylance|ms-|2024.8.2
jupyter|ms-|2024.8.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.19
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-containers|ms-|0.384.0
remote-ssh|ms-|0.114.1
remote-ssh-edit|ms-|0.86.0
cmake-tools|ms-|1.19.51
cpptools|ms-|1.21.6
cpptools-extension-pack|ms-|1.3.0
remote-explorer|ms-|0.4.3
remote-server|ms-|1.5.2
bun-vscode|ove|0.0.12
phoenix|pho|0.1.2
platformio-ide|pla|3.3.3
inline-sql-syntax|quf|2.16.0
geo-data-viewer|Ran|2.6.0
vscode-yaml|red|1.15.0
rust-analyzer|rus|0.4.2100
vscode-shadcn-svelte|Sel|0.1.1
solid-snippets|sol|0.1.4
vscode-nushell-lang|The|1.9.0
overpassql-syntax|tqd|2.1.0
type-doc-vscode|Tre|0.0.35
cmake|twx|0.0.17
pretty-ts-errors|Yoa|0.6.0
intellij-ify|zew|1.0.2
(8 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythongtdpath:30769146
welcomedialogc:30910334
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
accentitlementsc:30995553
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
jg8ic977:31013176
a69g1124:31058053
dvdeprecation:31068756
dwnewjupyter:31046869
newcmakeconfigv2:31071590
impr_priority:31102340
refactort:31108082
pythonrstrctxt:31112756
flighttreat:31119336
wkspc-onlycs-t:31132770
wkspc-ranged-t:31125599
ei213698:31121563
```
</details>
<!-- generated by issue reporter --> | Needs More Info | low | Critical |
2,543,788,305 | godot | Color wrapper for alpha values does not work | ### Tested versions
Reproducible in 4.1.1 and 4.3
### System information
Godot v4.3.stable - Windows 10.0.22631 - GLES3 (Compatibility) - NVIDIA GeForce RTX 4070 Ti (NVIDIA; 32.0.15.6109) - AMD Ryzen 9 7950X 16-Core Processor (32 Threads)
### Issue description
On the docs for Color, there are wrappers for RGBA to use 0-255 instead of 0-1 values.
https://docs.godotengine.org/en/stable/classes/class_color.html#class-color-property-a
However, the wrapper for alpha (a) value does not work.
The docs should explain this or the engine should give a warning if values outside the expected range are used.
(Alternatively, the wrapper should behave as expected.)
Instead of using the wrapper, Color8 can also be used to strictly specify the 0-255 range.
### Steps to reproduce
Open the MRP below.
Run the root scene and observe the color behavior.
The lines are generated from left to right, from i=0 to i=num_lines.
We expect the first two lines to have the same color, but they do not:
line.default_color = Color(1, 1, 1, 51.0/255.0)
line.default_color = Color(255, 255, 255, 51)
The third and fourth lines show the expected behavior, demonstrated using Color8 (both lines should be the same color).
line.default_color = Color(1, 1, 1, 51.0/255.0)
line.default_color = Color8(255, 255, 255, 51)
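For reference, the conversion `Color8` performs can be sketched as follows (Python used purely for illustration); the key point is that 51/255 = 0.2, so the `Color8` call and the `51.0/255.0` construction should produce the same alpha:

```python
# Sketch of what Color8 does: 0-255 integer channels are mapped onto the
# 0.0-1.0 floats that Color stores internally.
def color8(r, g, b, a=255):
    return tuple(c / 255.0 for c in (r, g, b, a))

print(color8(255, 255, 255, 51))  # (1.0, 1.0, 1.0, 0.2)
```

The plain `Color(255, 255, 255, 51)` constructor performs no such scaling, which is why the first two lines differ.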
### Minimal reproduction project (MRP)
[color_test.zip](https://github.com/user-attachments/files/17104957/color_test.zip)
| topic:core | low | Minor |
2,543,814,639 | pytorch | The small shape change of input tensor leads to a significant increase in GPU memory usage in Conv3D | ### 🐛 Describe the bug
The following code defines a 3d convolution layer and we run inference under AMP. For the input tensor with the shape of [1, 128, 248, 248, 248], the peak memory usage from the `nvidia-smi` command is 19171 MiB. However, when we slightly increase the shape of the input tensor to [1, 128, 256, 256, 256], the code will cause a cuda out-of-memory issue. The error message is
`torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 108.00 GiB. GPU`.
Such a small shape change leads to a significant increase in the GPU memory usage of Conv3D. Going from [1, 128, 248, 248, 248] to [1, 128, 256, 256, 256] increases the number of elements in the tensor by only about 10%. Are there any additional memory overheads in the Conv3D implementation when we increase the tensor's spatial shape?
```
import torch
rank = 0
device = torch.device(f"cuda:{rank}")
x = torch.zeros(1, 128, 248, 248, 248).to(device).half()
# x = torch.zeros(1, 128, 256, 256, 256).to(device).half()
conv = torch.nn.Conv3d(128, 128, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1)).to(device)
with torch.no_grad(), torch.cuda.amp.autocast():
dummy = conv(x)
```
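The roughly 10% figure can be checked with quick arithmetic; the sketch below (Python, illustration only) counts only the input/output activations at fp16 (2 bytes per element) and ignores any cuDNN workspace:

```python
# Back-of-the-envelope activation sizes for the two input shapes.
def tensor_bytes(shape, bytes_per_elem=2):  # fp16 = 2 bytes per element
    n = 1
    for d in shape:
        n *= d
    return n * bytes_per_elem

small = tensor_bytes((1, 128, 248, 248, 248))  # about 3.64 GiB per tensor
large = tensor_bytes((1, 128, 256, 256, 256))  # exactly 4.00 GiB per tensor
print(round(large / small, 3))  # 1.1, i.e. the ~10% element-count increase
```

Since the activations themselves grow by well under 1 GiB, the 108 GiB allocation presumably comes from the convolution algorithm's workspace rather than the tensors; one unverified guess is that cuDNN selects a different algorithm at these power-of-two sizes. Enabling cuDNN API logging (e.g. the `CUDNN_LOGINFO_DBG=1` environment variable) could show which algorithm is chosen for each shape.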
### Versions
```
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100 80GB PCIe
GPU 1: NVIDIA A100 80GB PCIe
GPU 2: NVIDIA A100 80GB PCIe
GPU 3: NVIDIA A100 80GB PCIe
GPU 4: NVIDIA A100 80GB PCIe
GPU 5: NVIDIA A100 80GB PCIe
GPU 6: NVIDIA A100 80GB PCIe
GPU 7: NVIDIA A100 80GB PCIe
Nvidia driver version: 545.29.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7H12 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 2
Stepping: 0
Frequency boost: enabled
CPU max MHz: 2600.0000
CPU min MHz: 1500.0000
BogoMIPS: 5199.83
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 64 MiB (128 instances)
L3 cache: 512 MiB (32 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-63
NUMA node1 CPU(s): 64-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT disabled
Vulnerability Spec rstack overflow: Mitigation; SMT disabled
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.1.0
[pip3] numpy==1.24.1
[pip3] pytorch-ignite==0.5.1
[pip3] torch==2.3.1
[pip3] torchvision==0.18.1
[pip3] triton==2.3.1
[conda] Could not collect
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,module: memory usage,module: convolution,triaged | low | Critical |
2,543,842,570 | pytorch | dataclasses.replace not supported by dynamo | ### 🐛 Describe the bug
The `dataclasses.replace` function appears to be unimplemented in dynamo. In this particular case it is used from the xformers (0.0.27.post2) package in `xformers/ops/fmha/cutlass.py:259`. I'm using the pytorch 2.5 nightly (2.4 fails in a different spot).
It can be reproduced with the following steps:
1. Save to `bug.py`
```
import torch
from vllm import LLM, SamplingParams
# Sample prompts.
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
# Create an LLM.
llm = LLM(model="bigcode/tiny_starcoder_py", enforce_eager=True, dtype=torch.float32)
# Generate texts from the prompts. The output is a list of RequestOutput objects
# that contain the prompt, generated text, and other information.
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
2. Download and build VLLM version/hash: 0.6.1.post2, `b05f5c923`
3. Run the following command.
```
VLLM_TEST_DYNAMO_GRAPH_CAPTURE=1 VLLM_TEST_DYNAMO_FULLGRAPH_CAPTURE=1 python3 bug.py
```
Stacktrace:
```
E torch._dynamo.exc.Unsupported: Error in model execution (input dumped to /tmp/err_execute_model_input_20240924-020006.pkl): 'skip function replace in file /usr/lib/python3.10/dataclasses.py'
E
E from user code:
E File "/home/bnell/nm-vllm-new/vllm/model_executor/models/gpt_bigcode.py", line 284, in forward
E hidden_states = self.transformer(input_ids, positions, kv_caches,
E File "/home/bnell/nm-vllm-new/vllm/model_executor/models/gpt_bigcode.py", line 229, in forward
E hidden_states = layer(hidden_states, kv_caches[i], attn_metadata)
E File "/home/bnell/nm-vllm-new/vllm/model_executor/models/gpt_bigcode.py", line 173, in forward
E attn_output = self.attn(
E File "/home/bnell/nm-vllm-new/vllm/model_executor/models/gpt_bigcode.py", line 110, in forward
E attn_output = self.attn(q, k, v, kv_cache, attn_metadata)
E File "/home/bnell/nm-vllm-new/vllm/attention/layer.py", line 98, in forward
E return self.impl.forward(query,
E File "/home/bnell/nm-vllm-new/vllm/attention/backends/xformers.py", line 595, in forward
E out = self._run_memory_efficient_xformers_forward(
E File "/home/bnell/nm-vllm-new/vllm/attention/backends/xformers.py", line 739, in _run_memory_efficient_xformers_forward
E out = xops.memory_efficient_attention_forward(
E File "/home/bnell/pt24/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 304, in memory_efficient_attention_forward
E return _memory_efficient_attention_forward(
E File "/home/bnell/pt24/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 418, in _memory_efficient_attention_forward
E out, *_ = op.apply(inp, needs_gradient=False)
E File "/home/bnell/pt24/lib/python3.10/site-packages/xformers/ops/fmha/cutlass.py", line 259, in apply
E replace(inp, query=query, key=key, value=value, attn_bias=bias),
E
E Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
E
E
E You can suppress this exception and fall back to eager by setting:
E import torch._dynamo
E torch._dynamo.config.suppress_errors = True
```
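Until dynamo supports `dataclasses.replace`, one possible user-side workaround (a sketch under assumptions, not a drop-in xformers patch; the `Inp` class here is a hypothetical stand-in for xformers' input dataclass) is to rebuild the instance explicitly from its fields, which avoids the skipped stdlib function:

```python
import dataclasses

@dataclasses.dataclass
class Inp:  # hypothetical stand-in for the real xformers input dataclass
    query: str
    key: str
    value: str
    attn_bias: object = None

def manual_replace(obj, **changes):
    # Equivalent of dataclasses.replace for simple dataclasses: rebuild the
    # instance from its current field values plus the overrides.
    fields = {f.name: getattr(obj, f.name) for f in dataclasses.fields(obj)}
    fields.update(changes)
    return type(obj)(**fields)

inp = Inp(query="q", key="k", value="v")
out = manual_replace(inp, query="q2")
print(out.query, out.key)  # q2 k
```

Note that the real `dataclasses.replace` also handles `init=False` fields and re-runs `__post_init__`, which this sketch ignores; the proper fix remains supporting the function in dynamo itself.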
### Versions
```
Collecting environment information...
PyTorch version: 2.5.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-35-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 555.42.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx \
fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_ts\
c cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4\
_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l\
2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase\
tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx51\
2cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect a\
vx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq\
avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serial\
ize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence\
; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flashinfer==0.0.9+cu121torch2.3
[pip3] mypy==1.9.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.14.1
[pip3] onnxruntime==1.18.1
[pip3] pytorch-triton==3.1.0+5fe38ffd73
[pip3] torch==2.5.0+cu124
[pip3] torchaudio==2.5.0.dev20240919+cu121
[pip3] torchvision==0.20.0.dev20240919+cu121
[pip3] triton==3.0.0
[conda] Could not collect
```
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec @zou3519 | good first issue,triaged,oncall: pt2,module: dynamo,vllm-compile | low | Critical |
2,543,847,295 | next.js | Using a client-side promise on initial render hangs server stream | ### Link to the code that reproduces this issue
https://github.com/mordechaim/promise-stream
### To Reproduce
1. Start the application with `npm run dev`
2. Click "hard navigation" link
### Current vs. Expected behavior
I use `use()` to resolve the promise in a client component. If the page is loaded via a full (hard) navigation, the suspended component never "wakes up": the browser's loading indicator keeps spinning and the initial response body never completes.
When building the application with `next build` it hangs as well, with the following error message:
```
> next build
▲ Next.js 15.0.0-canary.163
Creating an optimized production build ...
✓ Compiled successfully
✓ Linting and checking validity of types
✓ Collecting page data
Generating static pages (5/6) [= ]Failed to build /suspend/page: /suspend (attempt 1 of 3) because it took more than 60 seconds. Retrying again shortly.
```
The behavior is not present if any of the following is true:
- The promise is created on the server and passed to the client in unresolved state
- The promise resolves before the initial render completes
- The page is a soft navigation, namely, the promise wasn't pre-rendered on the server
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 10 Home
Available memory (MB): 32674
Available CPU cores: 8
Binaries:
Node: 20.5.0
npm: N/A
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.0-canary.163 // Latest available version is detected (15.0.0-canary.163).
eslint-config-next: N/A
react: 19.0.0-rc-5d19e1c8-20240923
react-dom: 19.0.0-rc-5d19e1c8-20240923
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Lazy Loading
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local)
### Additional context
_No response_ | bug,Lazy Loading | low | Critical |
2,543,866,011 | pytorch | [torch.library] add convenience API for autocast | internal x-post: https://fb.workplace.com/groups/1405155842844877/permalink/9142331259127258/
Probably add some torch.library.register_autocast API with some convenience options ("upcast to float32", "downcast to float16")
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5 @anjali411 @ezyang @chauhang @penguinwu @bdhirsh | triaged,module: amp (automated mixed precision),module: library,oncall: pt2,module: pt2-dispatcher | low | Minor |
2,543,870,024 | TypeScript | TypeScript language server fails to recognize new files and needs restart | Type: <b>Bug</b>
Since the last update(s) or so, the TypeScript language server fails to recognize or do any code completion on newly created (or copy-pasted) files, no matter if they are TS or TSX.
With TS files it fails to do code completion for any other code from my project (cannot find anything from my project to import when I do CTRL+SPACE).
For TSX files, it fails to recognize React and all I get is syntax errors for JSX code.
I have to manually restart the TS language server in order to fix this. That or wait ~5s.
Context:
- I am on the latest TS version 5.5.4
- I am working on a typical NextJS (14.2.5) project with shadcn-ui components.
tsconfig is the nextj's default:
```json
{
"compilerOptions": {
"lib": ["dom", "dom.iterable", "esnext"],
"allowJs": true,
"skipLibCheck": true,
"strict": true,
"noEmit": true,
"esModuleInterop": true,
"module": "esnext",
"target": "ES2020",
"moduleResolution": "bundler",
"resolveJsonModule": true,
"isolatedModules": true,
"jsx": "preserve",
"incremental": true,
"plugins": [
{
"name": "next"
}
],
"paths": {
"@/*": ["./*"]
}
},
"include": ["next-env.d.ts", "**/*.ts", "**/*.tsx", ".next/types/**/*.ts"],
"exclude": ["node_modules", ".next"]
}
```
VS Code version: Code 1.92.1 (Universal) (eaa41d57266683296de7d118f574d0c2652e1fc4, 2024-08-07T20:16:39.455Z)
OS version: Darwin arm64 23.5.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M1 Max (10 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|2, 2, 2|
|Memory (System)|32.00GB (1.05GB free)|
|Process Argv|--crash-reporter-id 593ea142-22bd-42ca-a19a-94f980be787b|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (37)</summary>
Extension|Author (truncated)|Version
---|---|---
rust-bundle|1Yi|1.0.0
html-class-suggestions|And|1.2.1
biome|bio|2.3.0
vscode-tailwindcss|bra|0.12.6
npm-intellisense|chr|1.4.5
path-intellisense|chr|2.9.0
vscode-css-modules|cli|0.5.1
vscode-notes|dio|1.2.1
rust-syntax|dus|0.6.1
prettier-vscode|esb|10.4.0
html-slim-scss-css-class-completion|gen|1.7.8
codespaces|Git|1.17.2
copilot|Git|1.221.0
copilot-chat|Git|0.18.1
vscode-github-actions|git|0.26.3
vscode-scss|mrm|0.10.0
black-formatter|ms-|2024.2.0
debugpy|ms-|2024.10.0
python|ms-|2024.12.2
vscode-pylance|ms-|2024.8.1
jupyter|ms-|2024.7.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.19
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
live-server|ms-|0.4.14
vscode-speech|ms-|0.10.0
material-icon-theme|PKi|5.9.0
vscode-css-peek|pra|4.4.1
code-snapshot|rob|0.2.1
rust-analyzer|rus|0.3.2062
tauri-vscode|tau|0.2.6
luna-paint|Tyr|0.16.0
vscode-mdx|uni|1.8.9
vscode-wakatime|Wak|24.6.0
pretty-ts-errors|Yoa|0.6.0
vscode-className-completion|zwk|0.0.18
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vstes627:30244334
vscoreces:30445986
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythongtdpath:30769146
welcomedialog:30910333
pythonnoceb:30805159
asynctok:30898717
pythonregdiag2:30936856
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
accentitlementsc:30995553
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
da93g388:31013173
pythoncenvpt:31062603
a69g1124:31058053
dvdeprecation:31068756
dwnewjupytercf:31046870
2f103344:31071589
impr_priority:31102340
refactort:31108082
pythonrstrctxt:31112756
wkspc-onlycs-t:31111718
wkspc-ranged-t:31111713
```
</details>
<!-- generated by issue reporter --> | Needs More Info | low | Critical |
2,543,933,717 | node | `test/pummel/test-timers.js` is flaky | ### Test
`test/pummel/test-timers.js`
### Platform
Linux x64
### Console output
```console
=== release test-timers ===
Path: pummel/test-timers
--- stderr ---
diff: 999
node:internal/assert/utils:281
throw err;
^
AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:
assert.ok(1000 <= diff && diff < 1000 + WINDOW)
at Timeout.<anonymous> (/home/runner/work/_temp/node-v23.0.0-nightly2024-09-2371eb7381f9/test/pummel/test-timers.js:39:12)
at Timeout._onTimeout (/home/runner/work/_temp/node-v23.0.0-nightly2024-09-2371eb7381f9/test/common/index.js:493:15)
at listOnTimeout (node:internal/timers:614:17)
at process.processTimers (node:internal/timers:549:7) {
generatedMessage: true,
code: 'ERR_ASSERTION',
actual: false,
expected: true,
operator: '=='
}
Node.js v23.0.0-pre
Command: out/Release/node --test-reporter=spec /home/runner/work/_temp/node-v23.0.0-nightly2024-09-2371eb7381f9/test/pummel/test-timers.js
```
### Build links
- https://github.com/nodejs/node/actions/runs/11001920947/job/30547830943?pr=54987#step:10:5638
### Additional information
_No response_ | flaky-test,linux | low | Critical |
2,543,966,141 | vscode | NB Muli Cursor -- Undo/Redo operation failure | Re: https://github.com/microsoft/vscode/issues/141673
---
Undo operation only applied to 1 of 2 models. Fails rarely ❄️
- two cells, both with only `print("hello world")`
- cursor on the first `hello`
- trigger `cmd+d` twice, selecting both
- backspace/delete (trigger deleteLeft)
- undo
- 🐛 only second cell is restored | bug,notebook-cell-editor | low | Critical |
2,543,967,938 | svelte | Svelte does not support import maps | ### Describe the bug
If you try to use an [import map](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/script/type/importmap) in Svelte, the component does not compile and the compiler throws an error.
### Reproduction
[REPL link](https://svelte-5-preview.vercel.app/#H4sIAAAAAAAACl2Q22rDMBBEf0UsLU4g-NJHxTHtd1R9cKxNrNS6IK1zwfjfi7AcSF8kNGdmNewEJzVgAP49gWk1Aocv52AH9HDxEa44EMIOgh19F5U6dF45aoRR2llPbOqsdmrAmZ281SxbIkVSfbYXZkBii8wO7C1QS7jJsm0iySkjk-jVFeUmaZsltY3Wunj-nDqwWPIgYCmiWyegEUbQFA9BKwgC-CqRAGMlcudthyESAT2RC7woRuN-z3lndZHoZ5lXVV6VxdHbW0CfX4KAZdAcr_lfKXoM2AhDeKfWY8smYRi7KUk9Z1VZvu_ju0d17omzj7J0970wy5AUrZ_ZozKSX9thxMO0LGFmTV2svKmP_sWfrOsuX82wA22lOimUwMmPOP_Mf7gX0bn6AQAA)
### Logs
_No response_
### System Info
```shell
System info is not relevant here.
The error is in all versions of Svelte.
```
### Severity
annoyance | feature request,needs discussion | low | Critical |
2,543,999,795 | PowerToys | Keyboard Manager individual assignments temporary disable buttons/checkmarks | ### Description of the new feature / enhancement
Besides the remove buttons for individual assignments, it would be useful to add a pause/deactivate button to temporarily disable an assignment without removing it.
Even better would be automatic assignment switching depending on which app/program is in the foreground (or at least active).
### Scenario when this would be used?
When you only use assignments for certain scenarios, you'd want to disable them for general use without disabling the entire Manager functionality, because some assignments should stay permanent while others are only temporary.
Deleting them makes it very tedious to set them up again.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,544,034,305 | pytorch | [dynamo] enable TorchDispatchMode for eager part when graph breaks | ### 🚀 The feature, motivation and pitch
In eager mode, we use `TorchDispatchMode` to count flops and estimate runtime. It works well for AutoFSDP, and we want to extend it to torch.compile with graph breaks.
For torch.compile with graph breaks, we want to enable `TorchDispatchMode` for the eager parts in between compiled regions. In the following example, `relu_graph_break` is the eager part. Currently I have to manually call `with printing_mode:` to turn on the `TorchDispatchMode` at the graph break boundary.
**Question: is it possible to have some dynamo hooks so I can enable/disable `TorchDispatchMode` at graph break boundaries?**
```
import torch
import torch.nn as nn
from torch.utils._python_dispatch import TorchDispatchMode


class PrintingMode(TorchDispatchMode):
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        print(f"{func.__module__}.{func.__name__}")
        return func(*args, **(kwargs or {}))


class MLP(nn.Module):
    def __init__(self, printing_mode: TorchDispatchMode):
        super().__init__()
        self.in_proj = nn.Linear(4, 4, bias=False, device="cuda")
        self.relu = nn.ReLU()
        self.out_proj = nn.Linear(4, 4, bias=False, device="cuda")
        self.printing_mode = printing_mode

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.in_proj(x)
        z = self.relu_graph_break(z)
        z = self.out_proj(z)
        return z

    @torch.compiler.disable
    def relu_graph_break(self, x):
        with self.printing_mode:
            return self.relu(x)


if __name__ == "__main__":
    printing_mode = PrintingMode()
    model = MLP(printing_mode)
    inp = torch.rand(4, 4, device="cuda")
    loss = torch.compile(model)(inp).sum()
```
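Until such a hook exists, the boilerplate can at least be factored into a decorator. Below is a torch-free sketch of the pattern (`eager_with_mode` is a hypothetical helper, not a dynamo API; a plain counting context manager stands in for the dispatch mode):

```python
import functools


def eager_with_mode(mode):
    """Wrap an eager (graph-break) function so `mode` -- any reentrant
    context manager, e.g. a TorchDispatchMode instance -- is active for
    its duration. Hypothetical helper, not a dynamo API."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            with mode:
                return fn(*args, **kwargs)
        return wrapper
    return decorator


# Demo with a plain counting context manager standing in for the mode.
class CountingMode:
    def __init__(self):
        self.entries = 0

    def __enter__(self):
        self.entries += 1
        return self

    def __exit__(self, *exc):
        return False


mode = CountingMode()


@eager_with_mode(mode)
def relu_graph_break(x):
    return max(x, 0)


print(relu_graph_break(-3), mode.entries)  # 0 1
```

In real code this decorator would sit under `@torch.compiler.disable`, so the wrapped function stays out of the compiled graph while the mode is active inside it.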
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec | triaged,oncall: pt2,module: dynamo | low | Major |
2,544,096,153 | TypeScript | `ReadonlySet` and `ReadonlyMap` are lacking `Symbol.toStringTag` | ### 🔎 Search Terms
`ReadonlySet`, `ReadonlyMap`, `Symbol.toStringTag`
### 🕗 Version & Regression Information
This is the behavior in every version I tried, and I reviewed the FAQ for entries about `ReadonlySet`, `ReadonlyMap`, `Symbol.toStringTag`
### ⏯ Playground Link
https://www.typescriptlang.org/play/?target=99#code/C4TwDgpgBAsgrsAhsAlgOwOY0WGFgAWA9gCYByiAttALxQCiAHgMYA2cJEAPANYQhEAZrBxc4aHmiIB3NABoo4yTLQA+BXwHCAShEQkiaViGxgxEqbIVLLa1QChQkWAmToMAZXx5CpCtSg6JjYObk0hKC9gc2VZdShwnT0DIxAomNtVB3sAehyoAAFgAGcAWghGSGZgcoAnWqJa+1Z8KEQALhckVExTH2JyKlpIkEoAIyJWADpgIg9gWvcAFUQMXPyisoqqmoh6xubWsc74bvco-r8hwJHxyZm5heXV+yA
### 💻 Code
```ts
type MutatingMapMethodName = Exclude<keyof Map<unknown, unknown>, keyof ReadonlyMap<unknown, unknown>>
type MutatingSetMethodName = Exclude<keyof Set<unknown>, keyof ReadonlySet<unknown>>
// @ts-expect-error
let a: MutatingMapMethodName = Symbol.toStringTag
// @ts-expect-error
let b: MutatingSetMethodName = Symbol.toStringTag
```
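For what it's worth, the member does exist at runtime behind the readonly types; only the declarations omit it. A quick check (casting through the mutable types to satisfy the current declarations):

```typescript
const m: ReadonlyMap<string, number> = new Map([["a", 1]]);
const s: ReadonlySet<number> = new Set([1, 2]);

// Map.prototype and Set.prototype both define Symbol.toStringTag per the
// ECMAScript spec, so the values are there even when typed as readonly.
console.log((m as Map<string, number>)[Symbol.toStringTag]); // "Map"
console.log((s as Set<number>)[Symbol.toStringTag]);         // "Set"
```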
### 🙁 Actual behavior
Both `@ts-expect-error`s give `Unused '@ts-expect-error' directive.(2578)`
### 🙂 Expected behavior
Both `@ts-expect-error`s are used
### Additional information about the issue
_No response_ | Bug,Help Wanted | low | Critical |
2,544,101,022 | tauri | [bug] V2 pnpm tauri android dev just hangs and does nothing - Ubuntu 24.4.0 | ### Describe the bug
`pnpm tauri android dev` just hangs with no output or activity; adding `--verbose` doesn't help.
### Reproduction
pnpm create tauri-app --rc
cd tauri-app
pnpm i
pnpm tauri android init
pnpm tauri dev -- builds and launches
pnpm tauri android dev: hangs
pnpm tauri android dev --open: opens android studio and will run the app after I open the emulator
### Expected behavior
I would expect something to start compiling and then for the emulator to launch
### Full `tauri info` output
```text
[✔] Environment
- OS: Ubuntu 24.4.0 x86_64 (X64)
✔ webkit2gtk-4.1: 2.44.3
✔ rsvg2: 2.58.0
✔ rustc: 1.81.0 (eeb90cda1 2024-09-04)
✔ cargo: 1.81.0 (2dbb1af80 2024-08-20)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-unknown-linux-gnu (default)
- node: 22.8.0
- pnpm: 9.11.0
- npm: 10.8.2
[-] Packages
- tauri 🦀: 2.0.0-rc.15
- tauri-build 🦀: 2.0.0-rc.12
- wry 🦀: 0.43.1
- tao 🦀: 0.30.2
- @tauri-apps/api : 2.0.0-rc.5
- @tauri-apps/cli : 2.0.0-rc.16
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.0-rc.3
- @tauri-apps/plugin-shell : 2.0.0-rc.1
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: Vue.js
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
.zshrc exports
export JAVA_HOME=/snap/android-studio/current/jbr
export ANDROID_HOME="$HOME/Android/Sdk"
export NDK_HOME="$ANDROID_HOME/ndk/$(ls -1 $ANDROID_HOME/ndk)"
Android studio:
CompileCommand: exclude com/intellij/openapi/vfs/impl/FilePartNodeRoot.trieDescend bool exclude = true
Android Studio Koala | 2024.1.1
Build #AI-241.15989.150.2411.11948838
Note that 'pnpm tauri android dev --open' starts compiling and then opens android studio.
| type: bug,platform: Linux,status: needs triage,platform: Android | medium | Critical |
2,544,106,271 | pytorch | torch._dynamo.exc.Unsupported: Unexpected type in sourceless builder torch.Tensor when running Mamba models in vLLM | ### 🐛 Describe the bug
The Mamba models in vLLM contain a user-defined class, `MambaCacheParams`, which dynamo seems unable to process. The error can be reproduced by running the following steps:
```
export VLLM_TEST_DYNAMO_GRAPH_CAPTURE=1
export VLLM_TEST_DYNAMO_FULLGRAPH_CAPTURE=1
pytest -s tests/models/decoder_only/language/test_jamba.py::test_batching[5-half-ai21labs/Jamba-tiny-random]
```
I'm using the 2.5 nightly version of pytorch and the 0.6.1.post2 (b05f5c923) version of vLLM.
Note: this also fails with pytorch 2.4 due to `itertools.zip_longest` being unsupported.
Stacktrace:
```
E torch._dynamo.exc.Unsupported: Unexpected type in sourceless builder torch.Tensor
E
E from user code:
E File "/home/bnell/nm-vllm-new/vllm/model_executor/models/jamba.py", line 635, in forward
E hidden_states = self.model(input_ids, positions, kv_caches,
E File "/home/bnell/nm-vllm-new/vllm/model_executor/models/jamba.py", line 531, in forward
E hidden_states, residual = layer(
E File "/home/bnell/nm-vllm-new/vllm/model_executor/models/jamba.py", line 352, in forward
E hidden_states = self.mamba(hidden_states, attn_metadata, conv_state,
E File "/home/bnell/nm-vllm-new/vllm/model_executor/models/jamba.py", line 235, in forward
E cache = MambaCacheParams(True,
E
E Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
E
E
E You can suppress this exception and fall back to eager by setting:
E import torch._dynamo
E torch._dynamo.config.suppress_errors = True
```
### Versions
Collecting environment information...
PyTorch version: 2.5.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-35-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 555.42.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flashinfer==0.0.9+cu121torch2.3
[pip3] mypy==1.9.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.14.1
[pip3] onnxruntime==1.18.1
[pip3] pytorch-triton==3.1.0+5fe38ffd73
[pip3] torch==2.5.0+cu124
[pip3] torchaudio==2.5.0.dev20240919+cu121
[pip3] torchvision==0.20.0.dev20240919+cu121
[pip3] triton==3.0.0
[conda] Could not collect
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec @zou3519 | triaged,oncall: pt2,module: dynamo,vllm-compile | low | Critical |
2,544,114,274 | pytorch | torch._dynamo.exc.Unsupported: 'immutable_list' object does not support mutation when running MiniCPM-Llama model in vLLM | ### 🐛 Describe the bug
Running one of the minicpm-llama model tests results in a dynamo error on the builtin `iadd` function.
```
export VLLM_TEST_DYNAMO_GRAPH_CAPTURE=1
export VLLM_TEST_DYNAMO_FULLGRAPH_CAPTURE=1
pytest -s tests/models/decoder_only/vision_language/test_minicpmv.py::test_models[5-128-half-size_factors0-openbmb/MiniCPM-Llama3-V-2_5]
```
I'm using the 2.5 nightly version of pytorch and the 0.6.1.post2 (b05f5c923) version of vLLM.
Stacktrace:
```
E torch._dynamo.exc.Unsupported: Error in model execution (input dumped to /tmp/err_execute_model_input_20240924-020835.pkl): Failed running call_function <built-in function iadd>(*([], FakeTensor(..., device='cuda:0', size=(1, 3, 14, 14336))), **{}):
E 'immutable_list' object does not support mutation. If you are attempting to modify the kwargs or args of a torch.fx.Node object,
E instead create a new copy of it and assign the copy to the node:
E new_args = ... # copy and mutate args
E node.args = new_args
E
E
E from user code:
E File "/home/bnell/nm-vllm-new/vllm/model_executor/models/minicpmv.py", line 474, in forward
E image_inputs = self._parse_and_validate_inputs(input_ids, **kwargs)
E File "/home/bnell/nm-vllm-new/vllm/model_executor/models/minicpmv.py", line 446, in _parse_and_validate_inputs
E pixel_values_flat += pixel_n
E
E Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
E
E
E You can suppress this exception and fall back to eager by setting:
E import torch._dynamo
E torch._dynamo.config.suppress_errors = True
```
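The exception text itself recommends the copy-and-reassign pattern. A standalone sketch of that pattern on plain `torch.fx` (not the vLLM code), rewriting `x + 1` into `x + 2`:

```python
import operator

import torch
import torch.fx


def f(x):
    return x + 1


gm = torch.fx.symbolic_trace(f)
for node in gm.graph.nodes:
    if node.op == "call_function" and node.target is operator.add:
        # Never mutate node.args (an immutable_list) in place; build a
        # new tuple and assign it back through the Node.args setter.
        node.args = (node.args[0], 2)
gm.recompile()
print(int(gm(torch.tensor(0))))  # 2
```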
### Versions
```
Collecting environment information...
PyTorch version: 2.5.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-35-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 555.42.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flashinfer==0.0.9+cu121torch2.3
[pip3] mypy==1.9.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.14.1
[pip3] onnxruntime==1.18.1
[pip3] pytorch-triton==3.1.0+5fe38ffd73
[pip3] torch==2.5.0+cu124
[pip3] torchaudio==2.5.0.dev20240919+cu121
[pip3] torchvision==0.20.0.dev20240919+cu121
[pip3] triton==3.0.0
[conda] Could not collect
```
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec @zou3519 | triaged,oncall: pt2,module: dynamo,vllm-compile | low | Critical |
2,544,124,133 | pytorch | torch._dynamo.exc.Unsupported: ObservedKeyError exception running Gguf llama model in vLLM | ### 🐛 Describe the bug
Running the `TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF` model with gguf quantization leads to an `ObservedKeyError` in dynamo.
Reproduction steps:
Save to `bug.py`
```
import torch
import vllm
from huggingface_hub import hf_hub_download
from vllm import LLM, SamplingParams

# Sample prompts.
prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

model = hf_hub_download("TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF",
                        filename="tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf")
llm = LLM(model=model, enforce_eager=True)

outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
```
export VLLM_TEST_DYNAMO_GRAPH_CAPTURE=1
export VLLM_TEST_DYNAMO_FULLGRAPH_CAPTURE=1
python3 bug.py
```
I'm using the 2.5 nightly version of pytorch and the 0.6.1.post2 (b05f5c923) version of vLLM.
Note: this was masked by a different issue in pytorch 2.4
Partial Stacktrace:
```
../pt24/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py:111: in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
../pt24/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:836: in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
../pt24/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:3011: in inline_call
return cls.inline_call_(parent, func, args, kwargs)
../pt24/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:3139: in inline_call_
tracer.run()
../pt24/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:983: in run
while self.step():
../pt24/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:898: in step
self.exception_handler(e)
../pt24/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:1496: in exception_handler
raise raised_exception
../pt24/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:895: in step
self.dispatch_table[inst.opcode](self, inst)
../pt24/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:582: in wrapper
return inner_fn(self, inst)
../pt24/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py:301: in impl
self.push(fn_var.call_function(self, self.popn(nargs), {}))
../pt24/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py:967: in call_function
return handler(tx, args, kwargs)
../pt24/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py:848: in builtin_dispatch
rv = fn(tx, args, kwargs)
../pt24/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py:766: in call_self_handler
result = self_handler(tx, *args, **kwargs)
../pt24/lib/python3.10/site-packages/torch/_dynamo/variables/builtin.py:1472: in call_getitem
return args[0].call_method(tx, "__getitem__", args[1:], kwargs)
../pt24/lib/python3.10/site-packages/torch/_dynamo/variables/dicts.py:261: in call_method
return self.getitem_const_raise_exception_if_absent(tx, args[0])
../pt24/lib/python3.10/site-packages/torch/_dynamo/variables/dicts.py:224: in getitem_const_raise_exception_if_absent
raise_observed_exception(KeyError, tx, self)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
e = <class 'KeyError'>, tx = <torch._dynamo.symbolic_convert.InliningInstructionTranslator object at 0x76f3d82b5390>
vt = ConstDictVariable()
def raise_observed_exception(e, tx, vt):
from .variables import BuiltinVariable
# CPython here raises an exception. Since there is no python code, we have to manually setup the exception
# stack and raise the exception.
exception_vt = BuiltinVariable(e).call_function(vt, [], {})
tx.exn_vt_stack.append(exception_vt)
> raise observed_exception_map[e]
E torch._dynamo.exc.ObservedKeyError
../pt24/lib/python3.10/site-packages/torch/_dynamo/exc.py:234: ObservedKeyError
```
### Versions
```
Collecting environment information...
PyTorch version: 2.5.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-35-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 555.42.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8462Y+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
Stepping: 8
CPU max MHz: 4100.0000
CPU min MHz: 800.0000
BogoMIPS: 5600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 128 MiB (64 instances)
L3 cache: 120 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flashinfer==0.0.9+cu121torch2.3
[pip3] mypy==1.9.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.14.1
[pip3] onnxruntime==1.18.1
[pip3] pytorch-triton==3.1.0+5fe38ffd73
[pip3] torch==2.5.0+cu124
[pip3] torchaudio==2.5.0.dev20240919+cu121
[pip3] torchvision==0.20.0.dev20240919+cu121
[pip3] triton==3.0.0
[conda] Could not collect
```
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec @zou3519 | triaged,oncall: pt2,module: dynamo,vllm-compile | low | Critical |
2,544,144,053 | go | proposal: path/filepath: Deprecate Walk | ### Proposal Details
filepath.WalkDir was added in Go 1.16. It's past time to go ahead and deprecate filepath.Walk, which is less efficient. | Proposal | low | Major |
2,544,147,561 | godot | Custom cursors don't scale the same as ui elements, when monitors are scaled | ### Tested versions
- Reproducible in: 4.0.stable, 4.3.stable, 4.4.dev2
### System information
Godot v4.3.stable (77dcf97d8) - Windows 10.0.19045 - Vulkan (Forward+) - dedicated AMD Radeon RX 6650 XT (Advanced Micro Devices, Inc.; 32.0.11021.1011) - AMD Ryzen 5 2600X Six-Core Processor (12 Threads)
### Issue description
When using monitors with different scaling, custom mouse cursor images aren't scaled.
UI elements work fine and will correctly update when changing a monitor's scaling percent.
### Steps to reproduce
1. Set a custom cursor, `Input.set_custom_mouse_cursor(CUSTOM_CURSOR)`
2. Run project
3. Move game window between monitors of different scales.
3b. You can also change the scale of the monitor the game window is currently on.
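Until this is fixed, one possible (untested) workaround sketch is to re-set the cursor with a pre-scaled image whenever the window changes screens. Note that `DisplayServer.screen_get_scale()` only returns a meaningful value on some platforms, and `CUSTOM_CURSOR` is assumed to be a `Texture2D`:

```gdscript
func _update_cursor_for_scale() -> void:
	var scale := DisplayServer.screen_get_scale(get_window().current_screen)
	var img: Image = CUSTOM_CURSOR.get_image()
	img.resize(int(img.get_width() * scale), int(img.get_height() * scale))
	Input.set_custom_mouse_cursor(ImageTexture.create_from_image(img))
```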

### Minimal reproduction project (MRP)
[Custom Cursor Scaling.zip](https://github.com/user-attachments/files/17107289/Custom.Cursor.Scaling.zip)
Cursor size should match the color rect.
Expected (100% scale):

Actual (150% scale):

| platform:windows,needs testing,topic:input | low | Major |
2,544,166,871 | vscode | On transforming the text case, everything is merged into a single line | Type: <b>Bug</b>
I have this code in camel case.

When I do `ctrl + shift + P` and type `transform`, I get suggestions to convert it to `snake case`.

On transforming it works perfectly.

### BUT BUT BUT
On transforming back to pascal case, everything is merged into the same line; also, keywords should be ignored, but they get transformed too.

## Please look into it . Thanks
VS Code version: Code 1.93.1 (38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40, 2024-09-11T17:20:05.685Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|12th Gen Intel(R) Core(TM) i5-12500H (16 x 3110)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.69GB (4.54GB free)|
|Process Argv|--crash-reporter-id 5767e645-bcaf-493d-905a-ce91fe543ffb|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (41)</summary>
Extension|Author (truncated)|Version
---|---|---
better-comments|aar|3.0.2
vscode-tailwindcss|bra|0.12.10
simple-react-snippets|bur|1.2.8
vscode-notes|dio|1.2.1
competitive-programming-helper|Div|2024.7.1722430096
bracket-pair-toggler|dzh|0.0.3
prettier-vscode|esb|11.0.0
auto-close-tag|for|0.5.15
code-runner|for|0.12.2
vscode-javac|geo|0.2.46
mongodb-vscode|mon|1.8.1
python|ms-|2024.14.0
vscode-pylance|ms-|2024.9.2
jupyter|ms-|2024.8.1
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.19
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
cpptools|ms-|1.21.6
cpptools-extension-pack|ms-|1.3.0
live-server|ms-|0.4.15
prisma|Pri|5.19.1
java|red|1.34.0
vscode-microprofile|red|0.12.0
vscode-quarkus|red|1.18.1
LiveServer|rit|5.7.9
es7-react-js-snippets|rod|1.9.3
sonarlint-vscode|Son|4.10.0
ayu|tea|1.0.5
pdf|tom|1.2.2
intellicode-api-usage-examples|Vis|0.2.8
vscodeintellicode|Vis|1.3.1
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.0
vscode-java-dependency|vsc|0.24.0
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.42.0
vscode-maven|vsc|0.44.0
vscode-icons|vsc|12.9.0
Java-extension-pack|wal|1.0.0
markdown-pdf|yza|1.5.0
(4 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythongtdpath:30769146
welcomedialog:30910333
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
accentitlementst:30995554
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
724cj586:31013169
a69g1124:31058053
dvdeprecation:31068756
dwnewjupyter:31046869
newcmakeconfigv2:31071590
impr_priority:31102340
nativerepl1:31139838
refactort:31108082
pythonrstrctxt:31112756
flightc:31134773
wkspc-onlycs-t:31132770
nativeloc1:31134641
wkspc-ranged-c:31125598
cf971741:31144450
fje88620:31121564
iacca2:31144504
```
</details>
<!-- generated by issue reporter --> | help wanted | low | Critical |
2,544,214,507 | godot | VoxelGI doesn't render if a LightmapGI's data is loaded | ### Tested versions
- Reproducible in v4.3.stable.official [77dcf97d8]
### System information
- Windows 10.0.22621 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3050 Laptop GPU (NVIDIA; 32.0.15.5599) - 12th Gen Intel(R) Core(TM) i5-12500H (16 Threads)
### Issue description
## Expected Behavior
If a LightmapGI node and a VoxelGI node exist, making one visible and one invisible should render just the GI style of the visible node. The node lowest in the scene tree order has rendering priority.
## Observed Behavior
If a LightmapGI node and a VoxelGI node exist, and the LightmapGI has light data loaded, under no circumstances will the VoxelGI node affect the lighting. Turning the LightmapGI node invisible will make Godot act as if **neither** exists, even if the VoxelGI is visible and has baked data.
If a LightmapGI node and a VoxelGI node exist, BUT LightmapGI has no light data loaded, the VoxelGI correctly affects the lighting if it's visible.
## Use-case and Possible Workaround
I want to have both a VoxelGI and a LightmapGI set up and ready for different target platforms / graphical settings, choosing which one is visible on scene load, and otherwise easily previewing both in the editor by toggling their visibility, including test runs.
However, as the existence of a loaded LightmapGI invalidates VoxelGI completely, to swap between the two looks in the editor I either need to keep clearing the light data field or remove the lightmap node completely, which means I'd have to add it back later, which is not quick. In the actual game I'd just use scripts to control this on load, but the editor is the key use case here.
Issue can be seen recorded here:
https://youtu.be/EgHb4anSAqk
### Steps to reproduce
- Make a scene with meshes that can receive lightmaps.
- Add a VoxelGI node, set it up and bake. Lighting should change to the voxel's.
- Add a LightmapGI node and bake. Lighting should change to the lightmap's.
---
- Make the LightmapGI node invisible. Lighting should act as if VoxelGI is also invisible, displaying just the engine lighting.
---
- Clear the "Light Data" field of LightmapGI. Lighting will work correctly, displaying the voxel's lighting.
### Minimal reproduction project (MRP)
[voxelandlightmap.zip](https://github.com/user-attachments/files/17131247/voxelandlightmap.zip)
| bug,topic:rendering,documentation,topic:3d | low | Minor |
2,544,396,254 | godot | Crash when spawning PackedScenes containing GPUParticles3D with Z-Billboard Enabled in Compatibility | ### Tested versions
Reproducible in Compatibility mode using both C# and GDScript In:
* v4.2.2.stable.mono.official [15073afe3]
* v4.3.stable.mono.official [77dcf97d8]
* v4.4.dev2.mono.official [97ef3c837]
Not Reproducible in:
* Forward+ in any versions listed above
### System information
Godot v4.3.stable.mono - Windows 10.0.19045 - GLES3 (Compatibility) - NVIDIA GeForce RTX 2070 (NVIDIA; 31.0.15.3742) - Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz (16 Threads)
### Issue description
Possibly related to the issue below, but it seems that resolution only solved the problem for Forward+ and not for Compatibility mode:
https://github.com/godotengine/godot/issues/78498
When in Compatibility mode, spawning a PackedScene containing a GPUParticles3D with Z-Billboard set causes a crash with no errors, even if `--verbose` is used.
Disabling Z-Billboard does indeed prevent this from happening, and this seems to only happen in Compatibility mode, but an error on crash would be valuable instead of silently stopping.
This does not apply to the root scene being loaded, it seems to be specific to loading it from a PackedScene.
I'm spawning the scenes with this
```gdscript
class_name GridSpawner
extends Node
@export var sceneToSpawn: PackedScene
@export var gridSize: Vector2i = Vector2i(16, 16)
@export var gridCellSize: Vector3 = Vector3(10, 0, 10)

func _ready():
	for y in gridSize.y:
		for x in gridSize.x:
			print("spawning room %s %s" % [x, y])
			var spawnedRoom = sceneToSpawn.instantiate() as Node3D
			add_child(spawnedRoom)
			spawnedRoom.global_position = Vector3(x, 0, y) * gridCellSize
```
Here is a video of trying different tests with Z-Billboard enabled and disabled, as well as a static test with no packed scene loading:
https://github.com/user-attachments/assets/ec8fe4d9-110b-40b9-8c55-5ced73e426fa
### Steps to reproduce
Create a new project in Compatibility mode
Set up a script that spawns packed scenes
Set up a scene to spawn that contains various things including a GPUParticles3D with Z-Billboard enabled under the Drawing option
Try to spawn that scene some number of times
Over some number of spawns, the game crashes with no error
### Minimal reproduction project (MRP)
Minimal Reproduction
GridSpawnerTest.tscn is the main scene that has the issue. For completeness, StaticObjectTest.tscn is also included and seems to load fine with 256 particles, so it seems to specifically be an issue with spawning from PackedScenes.
I made this with 4.2.2 but this was the same code and objects I used for testing 4.3 and 4.4 dev 2
[SpawnBug_4.2.2.zip](https://github.com/user-attachments/files/17108701/SpawnBug_4.2.2.zip)
| bug,confirmed,needs testing,topic:particles | low | Critical |
2,544,402,092 | PowerToys | Print several separate PDFs | ### Description of the new feature / enhancement
We want to print several separate PDFs at once.
### Scenario when this would be used?
When I'm downloading several PDFs, for example to fill out pension or social security forms.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,544,404,251 | ollama | error looking up nvidia GPU memory - intermittent "cuda driver library failed to get device context 800" | ### What is the issue?
I've been running Ollama using the official Docker image, and everything was working fine initially. However, after a while (sometimes a dozen hours, sometimes a few days), Ollama logs showed the following error. Could you please advise on how to resolve this?
log
```
cuda driver library failed to get device context 800time=2024-09-24T00:41:06.577Z level=WARN source=gpu.go:400 msg="error looking up nvidia GPU memory"
time=2024-09-24T00:41:06.823Z level=WARN source=sched.go:647 msg="gpu VRAM usage didn't recover within timeout" seconds=5.504949612 model=/root/.ollama/models/blobs/sha256-60b185bbd0004312d5d4e3343d177b9cc049c1422629b9b96878a75f7bcf7fd3
```
### OS
Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.10 | bug,nvidia,needs more info,docker | low | Critical |
2,544,457,881 | ui | [bug]: CLI fails to install components after manual setup due to missing components.json | ### Describe the bug
It seems that when adding a component via the CLI, a [conditional check](https://github.com/shadcn-ui/ui/blob/078dfe66072c4ca780bbc99d4ad4b13b1f44fe7e/packages/shadcn/src/preflights/preflight-add.ts#L26-L33) is performed to ensure that the `components.json` file is present. However, when following the [manual installation guide](https://ui.shadcn.com/docs/installation/manual), this file is never created, causing the CLI to initiate its init flow. This likely results in a failure, as no valid configuration is detected unless the project contains a config file that matches the following pattern: `**/{next,vite,astro}.config.*|gatsby-config.*|composer.json` ([see code here](https://github.com/shadcn-ui/ui/blob/078dfe66072c4ca780bbc99d4ad4b13b1f44fe7e/packages/shadcn/src/preflights/preflight-init.ts#L58-L74)).
This issue was previously reported in #4885 with an Electron project, but it likely affects any project that doesn't match the expected configuration file pattern when using the manual setup.
This issue initially occurred while I was implementing a shared UI package in a Turborepo project, but I was able to reproduce it more easily on an RSBuild + React setup.
Could this error be resolved by adjusting the manual installation instructions to ensure the `components.json` file is created? I managed to solve this error by just manually creating a `components.json` file and tweaking some configs.
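For reference, the file I created looked roughly like this — a minimal sketch only; the Tailwind paths, style, and aliases below are assumptions for my setup, not canonical values:

```json
{
  "$schema": "https://ui.shadcn.com/schema.json",
  "style": "default",
  "rsc": false,
  "tsx": true,
  "tailwind": {
    "config": "tailwind.config.ts",
    "css": "src/index.css",
    "baseColor": "slate",
    "cssVariables": true
  },
  "aliases": {
    "components": "@/components",
    "utils": "@/lib/utils"
  }
}
```

With this file present, the CLI skipped its init flow and `add button` worked for me.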
### Affected component/components
CLI
### How to reproduce
1. Fork the linked Codesandbox project.
2. Run `pnpm dlx shadcn@latest add button`.
3. When prompted to create a `components.json` file, select "Yes".
### Codesandbox/StackBlitz link
https://codesandbox.io/p/devbox/2k3vm9
### Logs
```bash
✔ You need to create a component.json file to add components. Proceed? … yes
We could not detect a supported framework at /project/workspace.
Visit https://ui.shadcn.com/docs/installation/manual to manually configure your project.
Once configured, you can use the cli to add components.
```
### System Info
```bash
Any
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,544,523,264 | godot | the multiplayer.send_auth() returns error ERR_INVALID_PARAMETER | ### Tested versions
reproducible in Godot Engine v4.3.stable.official.77dcf97d8
### System information
windows 11 - godot 4.3 - OpenGL API 3.3.0 NVIDIA 560.81 - Compatibility - Using Device: NVIDIA - NVIDIA GeForce RTX 3070 Ti Laptop GPU
### Issue description
In the example code I provided, at these lines:
```
var auth_packet: PackedByteArray = "test".to_utf8_buffer()
var error = multiplayer.send_auth(id, auth_packet)
```
`id` is the server's peer ID (`int` with value `1`) and `auth_packet` is a `PackedByteArray`, yet the value of `error` is `31`, which means `ERR_INVALID_PARAMETER`.
According to the [documentation](https://docs.godotengine.org/en/4.3/classes/class_scenemultiplayer.html#class-scenemultiplayer-method-send-auth), the **send_auth()** input parameters are an **int** and a **PackedByteArray**. I'm passing correct values to it, but it still returns **ERR_INVALID_PARAMETER**.
I think this is a bug.
### Steps to reproduce
```
extends Node3D

var port = 8083
var ip = '127.0.0.1'

func createServer() -> void:
	var server_peer := ENetMultiplayerPeer.new()
	server_peer.create_server(port)
	multiplayer.multiplayer_peer = server_peer
	multiplayer.auth_callback = Callable(self, "_auth_request_handler")
	multiplayer.peer_authenticating.connect(_on_peer_authenticating)

func _auth_request_handler(peer_id: int, auth_data: PackedByteArray) -> void:
	print('_auth_request_handler: ', peer_id)

func _on_peer_authenticating(peer_id: int) -> void:
	print("Peer is authenticating: ", peer_id)

func joinServer() -> void:
	var client_peer := ENetMultiplayerPeer.new()
	client_peer.create_client(ip, port)
	multiplayer.multiplayer_peer = client_peer
	multiplayer.peer_connected.connect(inClientSide_clientConnected)

func inClientSide_clientConnected(id: int) -> void:
	if id == 1:
		var auth_packet: PackedByteArray = "test".to_utf8_buffer()
		# Send the authentication data to the server
		var error = multiplayer.send_auth(id, auth_packet)
		print(id, ' ', error)
```
### Minimal reproduction project (MRP)
[test2.zip](https://github.com/user-attachments/files/17109645/test2.zip)
| documentation,topic:multiplayer | low | Critical |
2,544,593,682 | vscode | Screen reader is not announcing control name present in top menu navigation and left navigation on hovering with mouse in mouse tracking on mode:A11y_Visual Studio Code Client_Home screen_ScreenReader | ## GitHub Tags:
#A11yTCS; #A11ySev4; #Visual Studio Code Client; #BM_Visual Studio Code Client_Win32_JULY2024; #DesktopApp; #FTP; #A11yUsable; #A11yUsablehigh; #NVDA; #Screen reader; #Win32; #A11yeDAD;
## Environment and OS details:
Application Name: Visual Studio Code Client
OS: Windows 11 version 23H2, OS build 22631.4169.
Screen reader: NVDA: version 2024.2
## Reproduction Steps:
1. Turn on NVDA.
2. Open the Visual Studio Code Insiders editor.
3. Turn on mouse tracking mode with "Insert + M".
4. Hover over the controls present in the top menu navigation and left navigation.
5. Observe that NVDA does not announce the control name.
## Actual Result:
The screen reader does not announce the control names present in the top menu navigation and left navigation when hovering with the mouse in mouse tracking mode.
## Expected Result:
The screen reader should announce the control names present in the top menu navigation and left navigation when hovering with the mouse in mouse tracking mode.
## User Impact:
Screen reader users will not know the control name on mouse hover, because the screen reader does not announce the control names present in the top menu navigation and left navigation when hovering with the mouse in mouse tracking mode.
## Attachments:
NVDA attachment
https://github.com/user-attachments/assets/0c9cef7a-f921-481a-98f8-f1920d02c9d1
JAWS attachment
https://github.com/user-attachments/assets/60093b13-360d-44c2-bdeb-dd62c8869e58
| bug,accessibility | low | Minor |
2,544,597,931 | vscode | Investigate using MenuWorkbenchButtonBar for additionalActions button bar | Follow-up for https://github.com/microsoft/vscode/pull/229344/files.
/cc @alexr00 @jrieken | debt,comments | low | Minor |
2,544,599,477 | node | `node:child_process.fork` does not generate cpu-prof when process is killed | ### Version
v20.17.0
### Platform
```text
Darwin Aris-MacBook-Air.local 23.6.0 Darwin Kernel Version 23.6.0: Mon Jul 29 21:16:46 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T8112 arm64
```
### Subsystem
_No response_
### What steps will reproduce the bug?
```js
// repro.mjs
import { fork } from "node:child_process";
import { Worker } from "node:worker_threads";
import { existsSync, writeFileSync } from "node:fs";
import assert from "node:assert";
writeFileSync( "./example.mjs", `
console.log("Hello world");
// Keep alive
setInterval(() => {}, 1_000);
`, "utf8");
const subprocess = fork("./example.mjs", { execArgv: ["--cpu-prof", "--cpu-prof-dir=forks-profile"] });
const onExit = new Promise((r) => subprocess.on("exit", r));
await new Promise((r) => setTimeout(r, 1000));
subprocess.kill();
await onExit;
const thread = new Worker("./example.mjs", { execArgv: ["--cpu-prof", "--cpu-prof-dir=threads-profile"] });
await new Promise((r) => setTimeout(r, 1000));
await thread.terminate();
assert(existsSync("./threads-profile"), "Threads profile missing");
assert(existsSync("./forks-profile"), "Forks profile missing");
```
```sh
$ node repro.mjs
Hello world
Hello world
node:internal/modules/run_main:129
triggerUncaughtException(
^
AssertionError [ERR_ASSERTION]: Forks profile missing
at file:///x/repros/scripts/repro.mjs:26:1
at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
generatedMessage: false,
code: 'ERR_ASSERTION',
actual: false,
expected: true,
operator: '=='
}
Node.js v20.17.0
```
### How often does it reproduce? Is there a required condition?
Always
### What is the expected behavior? Why is that the expected behavior?
When a child process is killed with `.kill()`, it does not generate the CPU profile that the `--cpu-prof` argument instructs it to produce. I would expect the profile to be generated.
This is inconsistent with `node:worker_threads` where terminating a `Worker` with `.terminate()` does still generate the profile. It also makes it difficult to debug slow child processes as you cannot get profile info without waiting for graceful exit.
### What do you see instead?
Child process is killed and CPU profile is not written.
### Additional information
_No response_ | child_process | low | Critical |
2,544,618,353 | vscode | Diff editor: `1 files` should be `1 file` | 
| polish,multi-diff-editor | low | Minor |
2,544,637,046 | opencv | Why can't there be parallel inference when the batch size is greater than 1 in C++ OpenCV CUDA DNN? | OpenCV = 4.9
Operating System / Platform = Windows 64 Bit
Compiler = Visual Studio 2022
CUDA = 11.6
cuDNN = 8.6.0
Driver Version = 536.45
GPU = RTX 4050 6 GB
Detailed description:
I used the C++ version of OpenCV for model inference with a simple convolutional network using the GPU. In release mode, when the batch size is 1, the inference time is 40 ms, but when the batch size is 4, the time is approximately 160 ms. The expectation is that the inference time for the model is 40 ms, whether the batch size is 1 or 4. Why is there no parallel inference? In debug mode, the following error is output:
[ INFO:0@0.535] global registry_parallel.impl.hpp:96 cv::parallel::ParallelBackendRegistry::ParallelBackendRegistry core(parallel): Enabled backends(3, sorted by priority): ONETBB(1000); TBB(990); OPENMP(980)
[ INFO:0@0.535] global plugin_loader.impl.hpp:67 cv::plugin::impl::DynamicLib::libraryLoad load D:\code\ISImgDetect\demo\opencv_core_parallel_onetbb490_64d.dll => FAILED
[ INFO:0@0.536] global plugin_loader.impl.hpp:67 cv::plugin::impl::DynamicLib::libraryLoad load opencv_core_parallel_onetbb490_64d.dll => FAILED
[ INFO:0@0.536] global plugin_loader.impl.hpp:67 cv::plugin::impl::DynamicLib::libraryLoad load D:\code\ISImgDetect\demo\opencv_core_parallel_tbb490_64d.dll => FAILED
[ INFO:0@0.537] global plugin_loader.impl.hpp:67 cv::plugin::impl::DynamicLib::libraryLoad load opencv_core_parallel_tbb490_64d.dll => FAILED
[ INFO:0@0.537] global plugin_loader.impl.hpp:67 cv::plugin::impl::DynamicLib::libraryLoad load D:\code\ISImgDetect\demo\opencv_core_parallel_openmp490_64d.dll => FAILED
[ INFO:0@0.538] global plugin_loader.impl.hpp:67 cv::plugin::impl::DynamicLib::libraryLoad load opencv_core_parallel_openmp490_64d.dll => FAILED
[ INFO:0@2.086] global op_cuda.cpp:80 cv::dnn::dnn4_v20231225::Net::Impl::initCUDABackend CUDA backend will fallback to the CPU implementation for the layer "_input" of type NetInputLayer
The layer "_input" of type NetInputLayer cannot be accelerated on the GPU, so the CPU is used instead.
Why can't the model perform parallel inference? When the batch size is 1, the GPU usage is 24%, and when the batch size is 4, the GPU usage is 28%.
How can this problem be solved?
2,544,641,771 | vscode | Should cell input lose focus when hidden lines are expanded? | Testing #228393
- Focus the cell
- When you expand a hidden region, the focused cell loses focus and the whole notebook is focused.
- I am wondering if it would make sense to keep the focus on the cell since we interact with the cell? | bug,papercut :drop_of_blood:,notebook-diff | low | Minor |
2,544,659,506 | vscode | Idea: Indicate if file is opened in tab | Testing #229342
I think for common file names, it could be very helpful if the suggest list indicated whether an item is currently opened in an editor.
2,544,713,048 | angular | docs: light mode doesn't show the correct colors for terminal text | ### Describe the problem that you experienced
https://github.com/user-attachments/assets/7d20d725-d68c-4cb2-83b9-06d1973e6471
### Enter the URL of the topic with the problem
_No response_
### Describe what you were looking for in the documentation
_No response_
### Describe the actions that led you to experience the problem
_No response_
### Describe what you want to experience that would fix the problem
_No response_
### Add a screenshot if that helps illustrate the problem
_No response_
### If this problem caused an exception or error, please paste it here
_No response_
### If the problem is browser-specific, please specify the device, OS, browser, and version
_No response_
### Provide any additional information here in as much as detail as you can
_No response_ | area: docs-infra | low | Critical |
2,544,728,077 | ant-design | Upload component: multi-selecting and uploading 400 images in picture-card (photo wall) mode hangs the browser with no response | ### Reproduction link
[https://codepen.io/pen?&editors=001&prefill_data_id=9da31576-63b9-44b5-9649-1435b8a837e8](https://codepen.io/pen?&editors=001&prefill_data_id=9da31576-63b9-44b5-9649-1435b8a837e8)
### Steps to reproduce
From the official docs, choose the picture-card (photo wall) demo, open it in CodePen, switch it to `multiple` mode, and upload 400 images.
### What is expected?
The browser should not freeze.
### What is actually happening?
The browser freezes completely.
| Environment | Info |
| --- | --- |
| antd | 5.20.1 |
| React | react 18 |
| System | win10 |
| Browser | Chrome 128.0.6613.138 (Official Build) (64-bit) |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive,unconfirmed | low | Major |
2,544,729,842 | PowerToys | Remapping shortcuts does not work unless I reboot Powertoys | ### Microsoft PowerToys version
0.84.1
### Installation method
PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
- Have shortcuts remapped (both shortcuts with a target application specified and ones without seem to be affected)
- Restart the computer
- The shortcuts do not work unless I quit the application and start it again
### ✔️ Expected Behavior
The shortcuts should always work as soon as PowerToys is running. It worked well with 0.83.0.
### ❌ Actual Behavior
The shortcuts don't work unless I restart Powertoys
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,544,736,123 | vscode | `Turn on Remote Tunnel Access` account view could show suggestions for accounts to authenticate with | Testing #229420
I am not sure if this is a bug or intended.
- Open the account menu from the account icon
- Click on `Turn on Remote Tunnel Access`
- Click on `Turn on for this session`
- The first entry shows the Microsoft account I am already signed in with
- I am already signed in with my GitHub account on VS Code
- Perhaps the GitHub account can be shown in the menu as a suggested account to sign in with?
https://github.com/user-attachments/assets/f810da0e-06fa-48bd-a7ae-82bc6f1833c5
| feature-request,remote-tunnel | low | Critical |
2,544,744,996 | vscode | SCM Graph - Picked wrong repo not from active editor | Testing #229375
In a multi-root workspace where `vscode` is the first repo, I have 3 editors opened from other repos, and even though the active editor is from another repo, `vscode` was picked:

Here `/Users/bpasero/Development/Microsoft/vscode-node-speech/SECURITY.md` is from a repo `vscode-node-speech`. | bug,scm | low | Minor |
2,544,749,383 | vscode | SCM Graph - Make repository picker a dropdown with a "More..." entry | Testing #229375
I would put the top 5 repos in a dropdown and only show quick pick when you click "More..."

| ux,scm,under-discussion | low | Minor |
2,544,754,856 | vscode | SCM Graph - Picked repository is not remembered after restart | Testing #229375
When you pick something other than "Auto", after a window reload I think I am back to "Auto". Maybe this needs a better indicator whether I am set to "Auto" vs. a specific repo? | bug,scm,under-discussion | low | Minor |
2,544,765,442 | flutter | Can't scroll flutter web inside iframe when using iPhone mirroring | ### Steps to reproduce
When I try to scroll a Flutter web app inside an iframe while using iPhone Mirroring, it doesn't work. For example, https://codepen.io/Liao-Han-the-encoder/full/ZEgEqaE. We can't scroll the inner content as expected. But it works in desktop browsers like Chrome and Safari.
1. Create a Flutter web app with a long list;
2. Embed it into an iframe and open it in iOS Safari.
3. Use iPhone Mirroring to scroll it.
https://codepen.io/Liao-Han-the-encoder/full/ZEgEqaE
You can directly open this link.
https://github.com/user-attachments/assets/393a89c8-4a4d-41ff-ac85-b3c0ca5ac7fc
### Expected results
Scroll correctly.
### Actual results
Can't scroll the inner content.
### Code sample
<details open><summary>Code sample</summary>
```html
<iframe src="https://flutter.github.io/samples/web/material_3_demo/" frameborder="0" width="400" height="1000"></iframe>
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.1, on macOS 15.0 24A335 darwin-x64, locale en-US)
• Flutter version 3.24.1 on channel stable at /Users/derekliao/dev/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 5874a72aa4 (5 weeks ago), 2024-08-20 16:46:00 -0500
• Engine revision c9b9d5780d
• Dart version 3.5.1
• DevTools version 2.37.2
• Pub download mirror https://pub.flutter-io.cn
• Flutter download mirror https://storage.flutter-io.cn
[!] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/derekliao/Library/Android/sdk
✗ cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"`
See https://developer.android.com/studio/command-line for more details.
✗ Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
See https://flutter.dev/to/macos-android-setup for more details.
[!] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
✗ CocoaPods not installed.
CocoaPods is a package manager for iOS or macOS platform code.
Without CocoaPods, plugins will not work on iOS or macOS.
For more info, see https://flutter.dev/to/platform-plugins
For installation instructions, see https://guides.cocoapods.org/using/getting-started.html#installation
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] IntelliJ IDEA Ultimate Edition (version 2024.2.2)
• IntelliJ at /Applications/IntelliJ IDEA.app
• Flutter plugin version 81.1.3
• Dart plugin version 242.22855.32
[✓] VS Code (version 1.93.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension can be installed from:
🔨 https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[✓] Connected device (3 available)
• Eric’s iPhone (mobile) • 00008120-000E38E426EBC01E • ios • iOS 18.1 22B5045g
• macOS (desktop) • macos • darwin-x64 • macOS 15.0 24A335 darwin-x64
• Chrome (web) • chrome • web-javascript • Google Chrome 129.0.6668.58
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 2 categories.
```
</details>
| platform-ios,f: scrolling,platform-web,has reproducible steps,P3,team-ios,triaged-ios,found in release: 3.24,found in release: 3.27 | low | Major |
2,544,768,259 | vscode | SCM Graph - Render code backticks | Testing #229359
I think it would be nice to render code backticks in the graph view:

| feature-request,scm | low | Minor |
2,544,779,317 | react-native | Click events do not take effect in animation views (Some Android devices, Huawei) | ### Description
On some Android devices (Huawei), the buttons in the animation view cannot respond to click events normally; you need to click many times before a tap occasionally registers.
### Steps to reproduce
1. Install the application with `yarn android`
2. Click '显示弹框' ("show dialog")
3. Click '关闭' ("close"); it takes many clicks before the dialog closes
### React Native Version
0.74.1
### Affected Platforms
Runtime - Android
### Areas
Fabric - The New Renderer
### Output of `npx react-native info`
```text
System:
OS: macOS 13.4
CPU: (10) arm64 Apple M2 Pro
Memory: 90.55 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 18.0.0
path: /usr/local/bin/node
Yarn:
version: 1.22.19
path: /usr/local/bin/yarn
npm:
version: 8.6.0
path: /usr/local/bin/npm
Watchman:
version: 2024.01.22.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.12.1
path: /Users/01400926/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 22.4
- iOS 16.4
- macOS 13.3
- tvOS 16.4
- watchOS 9.4
Android SDK: Not Found
IDEs:
Android Studio: 2022.3 AI-223.8836.35.2231.10671973
Xcode:
version: 14.3/14E222b
path: /usr/bin/xcodebuild
Languages:
Java:
version: 20.0.2
path: /usr/bin/javac
Ruby:
version: 3.2.2
path: /Users/01400926/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.2.0
wanted: 18.2.0
react-native:
installed: 0.74.1
wanted: 0.74.1
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: Not found
newArchEnabled: false
```
### Stacktrace or Logs
```text
not
```
### Reproducer
https://github.com/peaktangf/rnnotresponsedemo
### Screenshots and Videos
_No response_
https://github.com/user-attachments/assets/5d7ec6d6-2cbd-4dbd-926d-7005db2dbf38
| Issue: Author Provided Repro,Platform: Android,Newer Patch Available,Type: New Architecture | low | Major |
2,544,811,578 | vscode | Chat file attachment - disambiguate when file name matches | Testing #229436
Attached two files from the same workspace that have the same name. At the moment the only way to disambiguate is to hover over. We should probably include parts of the path in the label in order to disambiguate between the files.

| bug,panel-chat | low | Minor |
2,544,859,837 | godot | "DisplayServer.window_set_mouse_passthrough" causes flickering around polygon border | ### Tested versions
- Reproducible in: 4.3.stable, 4.2.stable, 4.4.dev2 [[97ef3c8]](https://github.com/godotengine/godot/commit/97ef3c837263099faf02d8ebafd6c77c94d2aaba) with Compatibility renderer
### System information
Godot v4.3.stable - Windows 10.0.19045 - GLES3 (Compatibility) - GeForce GTX 1060 - Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz (12 Threads)
### Issue description
https://github.com/user-attachments/assets/2f9acf0a-c61e-41a9-808d-00f779bfc0fa
When using window_set_mouse_passthrough and updating the clickable polygon area (e.g. when the player drags a desktop pet around on their screen), it causes clipping around the borders of the polygon area.
Similar issue to [#80098](https://github.com/godotengine/godot/issues/80098#issue-1830205875), except I get clipping around the borders of the polygon instead of whole screen flickering.
### Steps to reproduce
Run the minimal reproduction project and drag the test object around at a relatively fast pace.
The clipping does not happen if ```DisplayServer.window_set_mouse_passthrough(passthrough_polygon)``` (line 25 of Scenes/pet.gd) is commented out.
### Minimal reproduction project (MRP)
[test_window_set_mouse_passthrough_clipping.zip](https://github.com/user-attachments/files/17111759/test_window_set_mouse_passthrough_clipping.zip)
| bug,topic:rendering,topic:porting | low | Minor |
2,544,860,824 | storybook | [Bug]: @storybook/angular unsupported --stats-json flag | ### Describe the bug
When Chromatic build storybook it passes additional arguments such as --stats-json.
The `--webpackStatsJson` parameter has been renamed into `--stats-json`: https://github.com/chromaui/chromatic-cli/issues/1030
Therefore since this morning all my CI fails.
Example:
```
> nx run storybook-host-angular:build-storybook --output-dir=/tmp/chromatic--2639-uZkil9x91h0W --stats-json=/tmp/chromatic--2639-uZkil9x91h0W
NX 'stats-json' is not found in schema
NX Running target build-storybook for project storybook-host-angular failed
```
### Reproduction link
https://stackblitz.com/edit/github-a5abiy?file=package.json
### Reproduction steps
1. Go to the above link
2. Run the command `yarn storybook -- --stats-json=./tmp`
### System
```bash
Storybook Environment Info:
System:
OS: macOS 15.0
CPU: (10) arm64 Apple M1 Pro
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.17.0 - ~/.volta/tools/image/node/20.17.0/bin/node
npm: 10.8.2 - ~/.volta/tools/image/node/20.17.0/bin/npm <----- active
Browsers:
Chrome: 129.0.6668.59
Edge: 129.0.2792.52
Safari: 18.0
npmPackages:
@storybook/addon-essentials: 8.3.2 => 8.3.2
@storybook/addon-interactions: 8.3.2 => 8.3.2
@storybook/angular: 8.3.2 => 8.3.2
@storybook/core-server: 8.3.2 => 8.3.2
@storybook/nextjs: 8.3.2 => 8.3.2
@storybook/react-vite: 8.3.2 => 8.3.2
@storybook/test: 8.3.2 => 8.3.2
@storybook/test-runner: 0.18.2 => 0.18.2
storybook: 8.3.2 => 8.3.2
```
### Additional context
_No response_ | bug,needs triage | low | Critical |
2,544,865,209 | godot | Resource.local_to_scene in an array or dictionary does not work for child nodes that have been instantiated in PackedScene or at runtime | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated AMD Radeon RX 7900 GRE (Advanced Micro Devices, Inc.; 32.0.11037.4004) - AMD Ryzen 7 7800X3D 8-Core Processor (16 Threads)
### Issue description
Although #71578 and #87268 fix some similar issues, I'm still experiencing others.
1. Resources on the parent node are independent, as expected, whether stored in a variable or in an array or dictionary, and whether the node was instantiated in the PackedScene or at runtime.
2. Resources on child nodes stored in a variable are independent, while those stored in an array or dictionary are not, whether the node was instantiated in the PackedScene or at runtime.
3. During testing I also found that, in addition to the second issue, the `get_local_scene` method of a resource already instantiated in the PackedScene points to the node instantiated at runtime, which confused me.
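The expected semantics can be sketched with a minimal Python model (purely illustrative — this is not engine code, and `Res`/`instantiate` are hypothetical names): every resource marked local-to-scene should be duplicated per instance, including resources nested inside arrays and dictionaries, while non-local resources stay shared.

```python
# Hypothetical model of local-to-scene duplication semantics.
# Names (Res, instantiate) are illustrative, not the Godot API.

class Res:
    def __init__(self, local_to_scene: bool = False):
        self.local_to_scene = local_to_scene

def instantiate(template: dict) -> dict:
    """Copy a node's exported properties, duplicating any resource
    marked local_to_scene -- including ones nested in arrays/dicts,
    which is the behavior the report expects but does not observe."""
    def dup(value):
        if isinstance(value, Res):
            # Local resources get a fresh copy; shared ones are reused.
            return Res(True) if value.local_to_scene else value
        if isinstance(value, list):
            return [dup(v) for v in value]
        if isinstance(value, dict):
            return {k: dup(v) for k, v in value.items()}
        return value
    return {k: dup(v) for k, v in template.items()}
```

Under this model, two instances of the same template never share a local-to-scene resource, even one stored inside an array — which is exactly where the reported behavior diverges.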
<details><summary>Output</summary>
<p>
```
Godot Engine v4.3.stable.official.77dcf97d8 - https://godotengine.org
Vulkan 1.3.287 - Forward+ - Using Device #0: AMD - AMD Radeon RX 7900 GRE
Comparison of parent nodes already in the packedscene
Node:<Node#28806481158> -9223372008031517431
Node:<Node#28806481158> -9223372007914076908
Node:<Node#28806481158> -9223372007897299696
--------------------------------------------------------
Node2:<Node#28974253335> -9223372007863745269
Node2:<Node#28974253335> -9223372007746304758
Node2:<Node#28974253335> -9223372007729527521
Comparison of child nodes already in the packedscene
Node:<Node#28806481158> -9223372007964408546
InstantiatedNode2:<Node#29326574884> -9223372007427537624
InstantiatedNode2:<Node#29326574884> -9223372007410760407
--------------------------------------------------------
Node2:<Node#28974253335> -9223372007796636382
InstantiatedNode2:<Node#29326574884> -9223372007427537624
InstantiatedNode2:<Node#29326574884> -9223372007410760407
Comparison of instantiated parent nodes
InstantiatedNode:<Node#29192357147> -9223372007645641444
InstantiatedNode2:<Node#29326574884> -9223372007494646493
InstantiatedNode2:<Node#29326574884> -9223372007477869293
--------------------------------------------------------
InstantiatedNode2:<Node#29326574884> -9223372007511423727
InstantiatedNode2:<Node#29326574884> -9223372007494646493
InstantiatedNode2:<Node#29326574884> -9223372007477869293
Comparison of instantiated child nodes
InstantiatedNode:<Node#29192357147> -9223372007578532576
InstantiatedNode2:<Node#29326574884> -9223372007427537624
InstantiatedNode2:<Node#29326574884> -9223372007410760407
--------------------------------------------------------
InstantiatedNode2:<Node#29326574884> -9223372007444314841
InstantiatedNode2:<Node#29326574884> -9223372007427537624
InstantiatedNode2:<Node#29326574884> -9223372007410760407
--- Debugging process stopped ---
```
</p>
</details>
### Steps to reproduce
1. Run project (F5).
2. Observe the console output.
### Minimal reproduction project (MRP)
[resource-local-to-scene-test.zip](https://github.com/user-attachments/files/17110959/resource-local-to-scene-test.zip) | topic:core,needs testing | low | Critical |
2,544,906,290 | vscode | Surface find icon in explorer view toolbar | Testing #229408

I think this feature can now be made much more discoverable, given how it works. Maybe the "Refresh" icon could go away if we wanted to reduce clutter, but we could first check how often these actions are used daily.
2,544,922,902 | deno | Bug: `.bin` npm commands not found with byonm | In a pnpm workspace setup, entries from the `.bin` folder can live either in the workspace member or in the root folder:
- `node_modules/.bin`
- `packages/member/node_modules/.bin`
Not all binaries are linked in the workspace member; Node traverses upward and searches every `node_modules/.bin` directory for them. Deno doesn't seem to do that, so when a binary isn't in the member's folder, it fails.
## Steps to reproduce
1. Clone https://github.com/vitest-dev/vitest
2. Run `pnpm i`
3. Run `cd packages/ui`
4. Run `deno task dev`
Output:
```sh
$ deno task dev
Task dev rollup -c --watch --watch.include 'node/**'
rollup: command not found
```
Version: Deno 2.0.0-rc.4+1e261c9
| bug,node compat | low | Critical |