id | repo | title | body | labels | priority | severity
|---|---|---|---|---|---|---|
2,531,856,676 | deno | bug: jupyter kernel permissions are ignored | Version: Deno 1.46.3
```bash
deno 1.46.3 (stable, release, aarch64-apple-darwin)
v8 12.9.202.5-rusty
typescript 5.5.2
```
The deno kernel doesn't use Deno's permissions API.

| feat | low | Critical |
2,531,864,487 | rust | Fix cross-edition fragment specifier span behavior | In the lang call on 2024-09-04, we [agreed](https://github.com/rust-lang/rust/pull/129755#issuecomment-2329806350) that the span of the token used to fill in the fragment specifier should be used for deciding the behavior.
That is, if we have code like this in a Rust 2021 crate:
```rust
#[macro_export]
macro_rules! make_matcher {
    ($name:ident, $fragment_type:ident, $d:tt) => {
        #[macro_export]
        macro_rules! $name {
            ($d _:$fragment_type) => { true };
            (const { 0 }) => { false };
        }
    };
}
make_matcher!(is_expr_from_2021, expr, $);
```
And code like this in a Rust 2024 crate:
```rust
make_matcher!(is_expr_from_2024, expr, $);
```
We would expect that `is_expr_from_2024` would exhibit the Rust 2024 behavior.
We'd also like to fix this for `pat`, pending of course a crater run.
cc #129755
cc @eholk @vincenzopalazzo @compiler-errors
Tracking:
- https://github.com/rust-lang/rust/issues/123742 | A-macros,T-lang,T-compiler,C-bug | low | Critical |
2,531,865,427 | pytorch | triangular ops cuda kernel int overflow | ### 🐛 Describe the bug
# minimal repro
```python
import torch

def triangulate_check(d):
    tri_func = [torch.tril, torch.triu]
    for f in tri_func:
        on_cpu = f(torch.ones([d, d], device='cpu', dtype=torch.bool))
        on_gpu = f(torch.ones([d, d], device='cuda', dtype=torch.bool))
        is_identical = torch.all(on_cpu == on_gpu.to('cpu'))
        print(f"{f.__name__}: d={hex(d)} identical={is_identical}")

triangulate_check(0x10000)
triangulate_check(0x10000 + 1)
```
```
tril: d=0x10000 identical=True
triu: d=0x10000 identical=True
tril: d=0x10001 identical=False
triu: d=0x10001 identical=False
```
The bug was first introduced in the initial CUDA kernel implementation for the triangular ops (`tril` and `triu`, i.e. lower and upper):
https://github.com/pytorch/pytorch/commit/b4c3268b23c30cb14b1a249e9566e0bd54c9bcd8#diff-f09d9c010be6a94210ece2f019facc2566c5b14367209a958e8bab0d06408b81
The CPU implementation is not affected.
# root cause
https://github.com/pytorch/pytorch/blob/ea10c072f3e5a0afea9ef308eb64a73e5915c811/aten/src/ATen/native/cuda/TriangularOps.cu#L47
referring to CUDA docs:
https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html?highlight=blockIdx#blockidx
https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html?highlight=blockdim#blockdim
`blockIdx` and `blockDim` are of type `uint3` (`dim3` is based on `uint3`), so the index computation is performed in 32-bit unsigned arithmetic
-> unsafe multiplication leading to integer overflow, which is enough to render the whole linear index wrong.
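To make the wraparound concrete, here is a small pure-Python sketch (not the kernel itself) emulating the 32-bit unsigned index arithmetic at the first failing size from the repro:

```python
UINT32 = 2**32
d = 0x10001              # first failing size from the repro above
elements = d * d         # total elements: 4,295,098,369, which exceeds 2**32

# Emulating the uint32 arithmetic of blockIdx.x * blockDim.x + threadIdx.x:
# a linear index past 2**32 - 1 wraps around to a small value.
linear_idx = elements - 1          # index of the last element
wrapped = linear_idx % UINT32      # what uint32 arithmetic would produce
print(elements > UINT32, wrapped)  # True 131072
```

Any matrix with more than 2**32 elements (d > 0x10000) therefore produces wrapped, wrong indices, matching the d=0x10001 failure above.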
# fix
safe arithmetic:
```cpp
int64_t linear_idx = ((uint64_t)blockIdx.x * (uint64_t)blockDim.x + (uint64_t)threadIdx.x) * (uint64_t)elements_per_thread;
```
It would be better to introduce safe arithmetic helper functions in general.
# Credits
@HofitBata
@grados
### Versions
Collecting environment information...
PyTorch version: 2.1.0a0+32f93b1
Is debug build: False
CUDA used to build PyTorch: 12.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.4
Libc version: glibc-2.35
Any torch version basically
cc @ptrblck @msaroufim @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @ezyang @gchanan @zou3519 @kadeng | module: cuda,triaged,module: 64-bit,module: linear algebra | low | Critical |
2,531,910,609 | flutter | `Linux tool_integration_tests_4_4` taking ~40 minutes | The longer any one test takes, the longer presubmits and postsubmits take. | team-tool | low | Major |
2,531,917,734 | PowerToys | Advanced Paste- as CSV (from excel cells copy) | ### Description of the new feature / enhancement
While it is possible to use save-as within Excel to create a CSV version of cells, if you have a small number of cells, being able to copy them and use Advanced Paste in CSV format would be amazing and a real time saver.
### Scenario when this would be used?
When using excel, I often need to import to a different application as CSV. It's a real pain when I only want to paste a few cells of data.
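As a rough sketch of what such a feature would do (a hypothetical helper, not PowerToys code): Excel places copied cells on the clipboard as tab-separated text, so the conversion itself is straightforward:

```python
import csv
import io

def tsv_to_csv(clipboard_text: str) -> str:
    # Excel puts copied cells on the clipboard as tab-separated text;
    # re-emit them as CSV, quoting fields that contain commas.
    rows = csv.reader(io.StringIO(clipboard_text), delimiter="\t")
    out = io.StringIO()
    csv.writer(out, lineterminator="\n").writerows(rows)
    return out.getvalue()

print(tsv_to_csv("Name\tNotes\nAda\thello, world"))
# Name,Notes
# Ada,"hello, world"
```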
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,532,008,135 | godot | Empty custom RichTextEffect is misplacing some ligature glyphs | ### Tested versions
- Reproducible in v4.3.stable.mono.official [77dcf97d8]
### System information
Godot v4.3.stable.mono - Windows 10.0.19045 - GLES3 (Compatibility) - NVIDIA GeForce GTX 1070 (NVIDIA; 31.0.15.3713) - AMD Ryzen 7 1700 Eight-Core Processor (16 Threads)
### Issue description
When applying a custom RichTextEffect with an empty `_ProcessCustomFX` method, the rendered text should be exactly the same as text that isn't wrapped in any BBCode block. Instead, even with empty effects, some ligature glyphs are misplaced, making the resulting text look very bad - [effectone] (C#), [effectthree] (GDScript)

I looked at the source code of rich_text_label.cpp and this is the line that's messing up some of the glyph positions.

I'm not really sure how it should work, but I noticed that the vector is not added for some built-in effects, like rainbow (which correctly displays the text).
My workaround is to just set the offset to an empty vector in the process function, which seems to fix the issue - [effecttwo] (C#), [effectfour] (GDScript)
### Steps to reproduce
1. Create scene with RichTextLabel
2. Use a font with ligatures
3. Create custom RichTextEffect without any logic, just returning true from the process function
4. Attach custom RichTextEffect to RichTextLabel and wrap the text in custom bb code
5. Some of the ligature glyphs will be displaced
### Minimal reproduction project (MRP)
[richtexteffect.zip](https://github.com/user-attachments/files/17034291/richtexteffect.zip) | bug,topic:gui | low | Minor |
2,532,033,315 | pytorch | [torch.export] Detect internal constraints | ### 🐛 Describe the bug
I don't know if this is a valid bug report or more of a feature request, but is it technically possible for dynamo to detect the model's padding instead of enforcing input guards?
This minimal repro was created to debug the same issue on a different, bigger model.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.export as exp
from torch.export import Dim, ShapesCollection

class MinimalModel(nn.Module):
    def __init__(self):
        super(MinimalModel, self).__init__()
        # A simple convolution layer expecting 1 input channel
        self.conv = nn.Conv2d(1, 8, kernel_size=3, stride=1, padding=1)

    def forward(self, inputs):
        B, C, H, W = inputs.shape
        # Calculate necessary padding to make dimensions multiples of 32
        pad_H = (32 - H % 32) % 32
        pad_W = (32 - W % 32) % 32
        if pad_H > 0 or pad_W > 0:
            pad_top = pad_H // 2
            pad_bottom = pad_H - pad_top
            pad_left = pad_W // 2
            pad_right = pad_W - pad_left
            # Apply reflection padding
            inputs = F.pad(inputs, (pad_left, pad_right, pad_top, pad_bottom), mode='reflect')
        # Apply convolution
        outputs = self.conv(inputs)
        return outputs

# Initialize the minimal model
model = MinimalModel().eval()
# Example input (1-channel input with arbitrary size)
inputs = torch.randn(1, 1, 224, 224)  # 1-channel input with height and width of 224
# Device setup
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
inputs = inputs.to(device)
# Define dynamic dimensions for export
height_dim = Dim("height_dim", min=224, max=1024)
width_dim = Dim("width_dim", min=224, max=1024)
# Use ShapesCollection to define dynamic height and width
dynamic_shapes = ShapesCollection()
dynamic_shapes[inputs] = (Dim.STATIC, Dim.STATIC, height_dim, width_dim)
# Attempt export
try:
    exported_model = exp.export(
        model,
        (inputs,),
        dynamic_shapes=dynamic_shapes
    )
    exp.save(exported_model, "exported_model.pt")
    print("Model exported successfully.")
except torch._dynamo.exc.UserError as e:
    print(f"Failed to export: {e}")
```
```
E0917 20:00:03.648000 545 site-packages/torch/_guards.py:283] [0/0] Error while creating guard:
E0917 20:00:03.648000 545 site-packages/torch/_guards.py:283] [0/0] Name: ''
E0917 20:00:03.648000 545 site-packages/torch/_guards.py:283] [0/0] Source: shape_env
E0917 20:00:03.648000 545 site-packages/torch/_guards.py:283] [0/0] Create Function: SHAPE_ENV
E0917 20:00:03.648000 545 site-packages/torch/_guards.py:283] [0/0] Guard Types: None
E0917 20:00:03.648000 545 site-packages/torch/_guards.py:283] [0/0] Code List: None
E0917 20:00:03.648000 545 site-packages/torch/_guards.py:283] [0/0] Object Weakref: None
E0917 20:00:03.648000 545 site-packages/torch/_guards.py:283] [0/0] Guarded Class Weakref: None
E0917 20:00:03.648000 545 site-packages/torch/_guards.py:283] [0/0] Traceback (most recent call last):
E0917 20:00:03.648000 545 site-packages/torch/_guards.py:283] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/_guards.py", line 281, in create
E0917 20:00:03.648000 545 site-packages/torch/_guards.py:283] [0/0] return self.create_fn(builder, self)
E0917 20:00:03.648000 545 site-packages/torch/_guards.py:283] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0917 20:00:03.648000 545 site-packages/torch/_guards.py:283] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/guards.py", line 1844, in SHAPE_ENV
E0917 20:00:03.648000 545 site-packages/torch/_guards.py:283] [0/0] guards = output_graph.shape_env.produce_guards(
E0917 20:00:03.648000 545 site-packages/torch/_guards.py:283] [0/0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E0917 20:00:03.648000 545 site-packages/torch/_guards.py:283] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py", line 4194, in produce_guards
E0917 20:00:03.648000 545 site-packages/torch/_guards.py:283] [0/0] raise ConstraintViolationError(
E0917 20:00:03.648000 545 site-packages/torch/_guards.py:283] [0/0] torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated (height_dim, width_dim)! For more information, run with TORCH_LOGS="+dynamic".
E0917 20:00:03.648000 545 site-packages/torch/_guards.py:283] [0/0] - Not all values of height_dim = L['inputs'].size()[2] in the specified range 224 <= height_dim <= 1024 satisfy the generated guard Mod(32 - Mod(L['inputs'].size()[2], 32), 32) <= 0.
E0917 20:00:03.648000 545 site-packages/torch/_guards.py:283] [0/0] - Not all values of width_dim = L['inputs'].size()[3] in the specified range 224 <= width_dim <= 1024 satisfy the generated guard Mod(32 - Mod(L['inputs'].size()[3], 32), 32) <= 0.
E0917 20:00:03.652000 545 site-packages/torch/_guards.py:285] [0/0] Created at:
E0917 20:00:03.652000 545 site-packages/torch/_guards.py:285] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/convert_frame.py", line 615, in transform
E0917 20:00:03.652000 545 site-packages/torch/_guards.py:285] [0/0] tracer = InstructionTranslator(
E0917 20:00:03.652000 545 site-packages/torch/_guards.py:285] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/symbolic_convert.py", line 2670, in __init__
E0917 20:00:03.652000 545 site-packages/torch/_guards.py:285] [0/0] output=OutputGraph(
E0917 20:00:03.652000 545 site-packages/torch/_guards.py:285] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 317, in __init__
E0917 20:00:03.652000 545 site-packages/torch/_guards.py:285] [0/0] self.init_ambient_guards()
E0917 20:00:03.652000 545 site-packages/torch/_guards.py:285] [0/0] File "/opt/conda/lib/python3.11/site-packages/torch/_dynamo/output_graph.py", line 463, in init_ambient_guards
E0917 20:00:03.652000 545 site-packages/torch/_guards.py:285] [0/0] self.guards.add(ShapeEnvSource().make_guard(GuardBuilder.SHAPE_ENV))
Failed to export: Constraints violated (height_dim, width_dim)! For more information, run with TORCH_LOGS="+dynamic".
- Not all values of height_dim = L['inputs'].size()[2] in the specified range 224 <= height_dim <= 1024 satisfy the generated guard Mod(32 - Mod(L['inputs'].size()[2], 32), 32) <= 0.
- Not all values of width_dim = L['inputs'].size()[3] in the specified range 224 <= width_dim <= 1024 satisfy the generated guard Mod(32 - Mod(L['inputs'].size()[3], 32), 32) <= 0.
Suggested fixes:
_height_dim = Dim('_height_dim', min=7, max=32)
_width_dim = Dim('_width_dim', min=7, max=32)
height_dim = 32*_height_dim
width_dim = 32*_width_dim
```
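For what it's worth, a quick pure-Python check (no torch needed) of why the guard fails on the full range: the generated guard `Mod(32 - Mod(H, 32), 32) <= 0` holds only when no padding is needed, i.e. when the dimension is already a multiple of 32, which is exactly what the suggested `32*_height_dim` fix encodes:

```python
def pad_amount(h: int) -> int:
    # Same expression as pad_H in the model's forward()
    return (32 - h % 32) % 32

# The guard requires pad_amount(h) == 0 for every value in the range;
# that holds for multiples of 32 (the suggested fix), but not for the
# full range 224..1024.
assert all(pad_amount(32 * k) == 0 for k in range(7, 33))  # 224..1024 in steps of 32
assert any(pad_amount(h) != 0 for h in range(224, 1025))   # e.g. h = 225
print(pad_amount(224), pad_amount(225))  # 0 31
```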
### Versions
nightly
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: pt2,export-triaged,oncall: export | low | Critical |
2,532,033,492 | pytorch | Segmentation fault on Apple M1 caused by libomp on v2.4.1 | ### 🐛 I'm using libtorch in a C++ project; it worked for a while, then I got this after running the binary.
The bizarre thing is that it worked for a while, even on GPU; I trained a model and everything worked. Now it fails on a simple `torch::relu(tensor)` ...
Any idea? Is it M1-related? Would an older version work?
Here is the logs after `./cnn_bin`:
```
LibTorch version: 2.4.1
Tensor created successfully
AddressSanitizer:DEADLYSIGNAL
=================================================================
AddressSanitizer:DEADLYSIGNAL
==6330==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000010 (pc 0x0001213f0cf0 bp 0x00016be8aa50 sp 0x00016be8a9a0 T2)
AddressSanitizer:DEADLYSIGNAL
==6330==The signal is caused by a READ memory access.
==6330==Hint: address points to the zero page.
#0 0x1213f0cf0 in void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*)+0x30 (libomp.dylib:arm64+0x54cf0)
#1 0x12174151c in kmp_flag_64<false, true>::wait(kmp_info*, int, void*)+0x754 (libomp.dylib:arm64+0x4551c)
#2 0x12173c55c in __kmp_hyper_barrier_release(barrier_type, kmp_info*, int, int, int, void*)+0xb4 (libomp.dylib:arm64+0x4055c)
#3 0x1217400e4 in __kmp_fork_barrier(int, int)+0x270 (libomp.dylib:arm64+0x440e4)
#4 0x12171ce10 in __kmp_launch_thread+0x150 (libomp.dylib:arm64+0x20e10)
#5 0x12175b008 in __kmp_launch_worker(void*)+0x114 (libomp.dylib:arm64+0x5f008)
#6 0x180976030 in _pthread_start+0x84 (libsystem_pthread.dylib:arm64e+0x7030)
#7 0x180970e38 in thread_start+0x4 (libsystem_pthread.dylib:arm64e+0x1e38)
==6330==Register values:
x[0] = 0x0000000000000002 x[1] = 0x000000016be8ab30 x[2] = 0x0000000000000000 x[3] = 0x0000000fffffc088
x[4] = 0x0000000000000001 x[5] = 0x0000000000000000 x[6] = 0x000000016be8aca0 x[7] = 0x0000000000000000
x[8] = 0x0000000000000000 x[9] = 0x000000007fffffff x[10] = 0x00000000000003e8 x[11] = 0xce5899d053670034
x[12] = 0x00000000016e3600 x[13] = 0x000000000007e8e8 x[14] = 0x0000000000000000 x[15] = 0x0000000000000000
x[16] = 0x00000001213f0cc0 x[17] = 0x00000001e02c7480 x[18] = 0x0000000000000000 x[19] = 0x000000014a4f49c0
x[20] = 0x000000016be8ab30 x[21] = 0x000000016be8ab30 x[22] = 0x0000000121792c80 x[23] = 0x00000001217885a8
x[24] = 0x0000000000000002 x[25] = 0x0000000000000000 x[26] = 0x0000000121788548 x[27] = 0x000000014a4f4f08
x[28] = 0x000000012178b1e0 fp = 0x000000016be8aa50 lr = 0x0000000121741520 sp = 0x000000016be8a9a0
AddressSanitizer can not provide additional info.
SUMMARY: AddressSanitizer: SEGV (libomp.dylib:arm64+0x54cf0) in void __kmp_suspend_64<false, true>(int, kmp_flag_64<false, true>*)+0x30
Thread T2 created by T0 here:
#0 0x10864c1b0 in wrap_pthread_create+0x54 (libclang_rt.asan_osx_dynamic.dylib:arm64e+0x4c1b0)
#1 0x12175ab50 in __kmp_create_worker+0xcc (libomp.dylib:arm64+0x5eb50)
#2 0x12171cbb0 in __kmp_allocate_thread+0x420 (libomp.dylib:arm64+0x20bb0)
#3 0x121717640 in __kmp_allocate_team+0x90c (libomp.dylib:arm64+0x1b640)
#4 0x121719438 in __kmp_fork_call+0x16f8 (libomp.dylib:arm64+0x1d438)
#5 0x12170c084 in __kmpc_fork_call+0xc0 (libomp.dylib:arm64+0x10084)
#6 0x111140170 in at::TensorIteratorBase::for_each(c10::function_ref<void (char**, long long const*, long long, long long)>, long long)+0x1ac (libtorch_cpu.dylib:arm64+0xb4170)
#7 0x1133a8644 in at::native::(anonymous namespace)::clamp_min_scalar_kernel_impl(at::TensorIteratorBase&, c10::Scalar)+0x378 (libtorch_cpu.dylib:arm64+0x231c644)
#8 0x1117da30c in void at::native::DispatchStub<void (*)(at::TensorIteratorBase&, c10::Scalar), at::native::clamp_min_scalar_stub_DECLARE_DISPATCH_type>::operator()<at::native::structured_clamp_min_out&, c10::Scalar const&>(c10::DeviceType, at::native::structured_clamp_min_out&, c10::Scalar const&)+0x74 (libtorch_cpu.dylib:arm64+0x74e30c)
#9 0x1123dbeac in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (at::Tensor const&, c10::Scalar const&), &at::(anonymous namespace)::wrapper_CPU_clamp_min(at::Tensor const&, c10::Scalar const&)>, at::Tensor, c10::guts::typelist::typelist<at::Tensor const&, c10::Scalar const&>>, at::Tensor (at::Tensor const&, c10::Scalar const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&, c10::Scalar const&)+0x78 (libtorch_cpu.dylib:arm64+0x134feac)
#10 0x112109be0 in at::_ops::clamp_min::call(at::Tensor const&, c10::Scalar const&)+0x114 (libtorch_cpu.dylib:arm64+0x107dbe0)
#11 0x1113f6014 in at::native::relu(at::Tensor const&)+0x4c (libtorch_cpu.dylib:arm64+0x36a014)
#12 0x11456d214 in c10::impl::wrap_kernel_functor_unboxed_<c10::impl::detail::WrapFunctionIntoFunctor_<c10::CompileTimeFunctionPointer<at::Tensor (c10::DispatchKeySet, at::Tensor const&), &torch::autograd::VariableType::(anonymous namespace)::relu(c10::DispatchKeySet, at::Tensor const&)>, at::Tensor, c10::guts::typelist::typelist<c10::DispatchKeySet, at::Tensor const&>>, at::Tensor (c10::DispatchKeySet, at::Tensor const&)>::call(c10::OperatorKernel*, c10::DispatchKeySet, at::Tensor const&)+0x290 (libtorch_cpu.dylib:arm64+0x34e1214)
#13 0x1122742bc in at::_ops::relu::call(at::Tensor const&)+0x10c (libtorch_cpu.dylib:arm64+0x11e82bc)
#14 0x104f92e98 in at::relu(at::Tensor const&) relu.h:27
#15 0x104f92270 in main main.cpp:16
#16 0x1805f50dc (<unknown module>)
AddressSanitizer:DEADLYSIGNAL
AddressSanitizer:DEADLYSIGNAL
AddressSanitizer:DEADLYSIGNAL
AddressSanitizer:DEADLYSIGNAL
==6330==ABORTING
```
### My simple program:
```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
    std::cout << "LibTorch version: " << TORCH_VERSION_MAJOR << "."
              << TORCH_VERSION_MINOR << "."
              << TORCH_VERSION_PATCH << std::endl;
    try {
        // Create a tensor
        auto tensor = torch::randn({1, 3, 720, 720});
        std::cout << "Tensor created successfully" << std::endl;
        // Perform a simple operation
        auto result = torch::relu(tensor);
        std::cout << "Operation successful, result tensor size: " << result.sizes() << std::endl;
    } catch (const std::exception& e) {
        std::cerr << "Exception: " << e.what() << std::endl;
        return -1;
    }
    return 0;
}
```
### CMakeLists.txt
```
cmake_minimum_required(VERSION 3.10)
project(cnn_bin)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_BUILD_TYPE Debug)
# Path to the LibTorch folder (inside your project)
set(TORCH_PATH "${CMAKE_SOURCE_DIR}/libtorch")
# Find OpenCV
find_package(OpenCV REQUIRED)
# Find LibTorch
find_package(Torch REQUIRED PATHS ${TORCH_PATH}/share/cmake/Torch)
# Add include directories
include_directories(${OpenCV_INCLUDE_DIRS})
# Link OpenCV libraries
link_directories(${OpenCV_LIBRARY_DIRS})
# Enable AddressSanitizer
if(CMAKE_BUILD_TYPE STREQUAL "Debug")
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fsanitize=address")
    set(CMAKE_LINKER_FLAGS "${CMAKE_LINKER_FLAGS} -fsanitize=address")
endif()
# Add executable
add_executable(cnn_bin main.cpp)
# Link LibTorch and OpenCV libraries
target_link_libraries(cnn_bin "${TORCH_LIBRARIES}" ${OpenCV_LIBS})
set_property(TARGET cnn_bin PROPERTY CXX_STANDARD 17)
```
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: macOS 14.3 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.1.0.2.5)
CMake version: version 3.25.2
Libc version: N/A
Python version: 3.9.6 (default, Dec 7 2023, 05:42:47) [Clang 15.0.0 (clang-1500.1.0.2.5)] (64-bit runtime)
Python platform: macOS-14.3-arm64-arm-64bit
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] No relevant packages
[conda] Could not collect
cc @malfet @albanD | module: crash,triaged,module: macos,module: openmp | low | Critical |
2,532,088,966 | godot | Resource Local to Scene Flag causes Post Import script to convert Resource from External to Built-in | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Ubuntu 24.04.1 LTS 24.04 - X11 - Vulkan (Forward+) - integrated AMD Radeon Graphics (RADV RENOIR) - AMD Ryzen 7 7730U with Radeon Graphics (16 Threads)
### Issue description
Setting the Local to Scene flag on an external resource will cause a post-import script (EditorScenePostImport) that loads that resource to convert it from external to built-in for that scene. I expected the resource to remain external until runtime, at which point a duplicate of the resource would be made for its scene.
Expected:

Actual:

### Steps to reproduce
1. Unzip and open the project
2. View `cube/cube.tscn` to confirm the shared `Integrity` resource on meshes `Cube_Healthy` and `Cube_Damaged` is loaded as external
3. Open `cube/cube_integrity.tres` and set `Local to Scene` to true
4. Reimport by opening `cube/cube.glb` and clicking the button at the bottom of the import dialog
5. Review `cube/cube.tscn` to see that the shared `Integrity` resource on the meshes is now built-in rather than referencing the external resource
### Minimal reproduction project (MRP)
[load-external-resource-as-local.zip](https://github.com/user-attachments/files/17034633/load-external-resource-as-local.zip)
| bug,confirmed,topic:import | low | Minor |
2,532,172,434 | pytorch | [pipelining] try not to dry run module when creating PipelineStage? | ### 🚀 The feature, motivation and pitch
Today `PipelineStage`'s init method would dry run the module with the example input:
https://github.com/pytorch/pytorch/blob/48d18fbd4cf785e1f69a6555d97a39023a5d199e/torch/distributed/pipelining/stage.py#L1270
This demands extra memory and may OOM for large models, which additionally require TP/FSDP or Activation Checkpointing to keep the memory envelope low. (But those might not have been applied at this point of pipeline stage creation.)
### Alternatives
The dryrun is for generating `output_args`, the shape of which we rely on to create gradient recv buffers during backward.
A workaround would be for the user to provide `output_args` to the `PipelineStage` init, but it is not ergonomic.
Also, inference runs do not have backward to worry about.
### Additional context
cc: @H-Huang @wconstab
cc @XilunWu @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed | low | Minor |
2,532,172,684 | pytorch | [Inductor] Some mechanism to fuse pointwise ops (and more) into a user-defined triton kernel | We had an interesting use case that looks like the following:
- user initially had a model that used PyTorch operations.
- then, they switched it to user-defined triton kernels for the forward and backward
- previously, Inductor would fuse the aten backward operations with aten.add that showed up from the gradient computation
- when using the user-defined triton kernels, we lose that fusion, because Inductor doesn't fuse into user-defined triton kernels.
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @oulgen @aakhundov | triaged,oncall: pt2,module: inductor,module: user triton | low | Minor |
2,532,194,674 | transformers | Pass input_ids to _update_model_kwargs_for_generation | ### Feature request
For flexibility in adding additional kwargs to models and handling them consistently with the generation utils, it might be useful if `input_ids` were available in `_update_model_kwargs_for_generation`.
### Motivation
My current use-case involves passing a user-defined position index, separate from position_ids, and incrementing it during generation depending on which tokens are generated (e.g. if a sep token is generated, the position index resets, whereas if a standard token is generated, it increments by 1).
The most natural way for me to handle generation in such a model would be to override `_update_model_kwargs_for_generation` and handle the index incrementing there.
However, this would require access to `input_ids`. The logic can instead be handled by `prepare_inputs_for_generation`, but because that method cannot update `model_kwargs`, the index cannot be computed incrementally, making the computation more complicated.
I expect other custom use cases would also benefit from this change.
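As a hedged illustration (hypothetical names, not an actual transformers API), the incremental update I have in mind is trivial once the newly generated token is visible:

```python
def advance_position(prev_pos: int, token_id: int, sep_token_id: int) -> int:
    # Hypothetical update rule from the motivation above: a sep token
    # resets the custom position index, any other token increments it.
    return 0 if token_id == sep_token_id else prev_pos + 1

# Walk a generated sequence, recording each token's position index.
SEP = 0
positions, pos = [], 0
for tok in [5, 7, SEP, 9]:
    positions.append(pos)
    pos = advance_position(pos, tok, SEP)
print(positions)  # [0, 1, 2, 0]
```

Doing this per step inside `_update_model_kwargs_for_generation` only needs the last generated token, which is why access to `input_ids` there would help.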
### Your contribution
If this sounds reasonable, it is a simple change which I'd happily help with | Feature request,Generation | low | Minor |
2,532,197,762 | storybook | [Bug]: React Context Provider in Global Decorator not working in monorepo | ### Describe the bug
In a monorepo where Storybook is one of the packages (i.e. packages/storybook) rendering stories from another package (i.e. packages/components), any imported React providers put into a decorator in preview don't actually provide the value, but if I use the provider at the story level (colocated with the component in packages/components), it works.
### Reproduction link
http://localhost:6006/?path=/story/components-level--area-1-level-1&globals=backgrounds.value:transparent;backgrounds.grid:!true
### Reproduction steps
1. Go the above link.
2. Dragging the green block between the sun and the flower should make a sound, but it doesn't (global provider is loading sounds).
3. Go to Solvers/BreadthFirstSearch/Area1Level2
4. Immediately you'll hear a sound, and if you drag a block away and back, you'll hear the sound (uses a story level provider).
### System
```bash
Chromatic
```
### Additional context
I THINK it's because Storybook is using built code when I import the providers into preview, but is using the story's components directly when it finds a story in that package, making them TECHNICALLY different React contexts. | bug,needs triage | low | Critical |
2,532,201,668 | flutter | Wrapping a `TextField` with `Semantics.identifier` doesn't work on the Web | ### Steps to reproduce
Wrapping `TextField` widget with the `Semantics` widget (and setting `Semantics.identifier` on it) does not work on the Web.
- It works fine on Android and iOS.
- It works fine when wrapping e.g. a `TextButton`, just not a `TextField`
### Expected results
The semantics identifier is present, i.e. the following HTML exists:
```html
<flt-semantics id="flt-semantic-node-25" flt-semantics-identifier="text_field">
<input type="text" ... />
</flt-semantics>
```
### Actual results
The semantics identifier is not present.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:flutter/semantics.dart';

void main() {
  WidgetsFlutterBinding.ensureInitialized();
  runApp(const MyApp());
  if (kIsWeb) {
    SemanticsBinding.instance.ensureSemantics();
  }
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      theme: ThemeData.light(),
      home: Scaffold(
        body: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: [
            Semantics(
              identifier: 'text_field',
              child: const TextField(
                decoration: InputDecoration(
                  border: OutlineInputBorder(),
                  hintText: 'Enter a search term',
                ),
              ),
            ),
            Semantics(
              identifier: 'text_button',
              child: TextButton(
                style: ButtonStyle(
                  foregroundColor: WidgetStateProperty.all<Color>(Colors.blue),
                ),
                onPressed: () {},
                child: const Text('TextButton'),
              ),
            ),
          ],
        ),
      ),
    );
  }
}
```
</details>
In this sample, here's the HTML that gets created for the subtree starting at `Semantics` wrapping the `TextButton` with `Text` widget:
```html
<flt-semantics
id="flt-semantic-node-26"
flt-semantics-identifier="text_button"
style="position: absolute; overflow: visible; width: 92.7578px; height: 32px; transform-origin: 0px 0px 0px; transform: matrix(1, 0, 0, 1, 499.621, 464); pointer-events: none; z-index: 2;"
>
<flt-semantics-container style="position: absolute; pointer-events: none; top: 0px; left: 0px;">
<flt-semantics
id="flt-semantic-node-27"
role="button"
tabindex="0"
flt-tappable=""
style="position: absolute; overflow: visible; width: 92.7578px; height: 32px; top: 0px; left: 0px; pointer-events: all;"
>
TextButton
</flt-semantics>
</flt-semantics-container>
</flt-semantics>
```
And here is the HTML that gets created for the subtree starting at `Semantics` wrapping the `TextField`:
```html
<flt-semantics
id="flt-semantic-node-25"
style="position: absolute; overflow: visible; width: 1092px; height: 48px; transform-origin: 0px 0px 0px; transform: matrix(1, 0, 0, 1, 0, 416); pointer-events: all; z-index: 1;"
>
<input
type="text"
spellcheck="false"
autocorrect="on"
autocomplete="on"
data-semantics-role="text-field"
placeholder="Enter a search term"
aria-label="Enter a search term"
style="position: absolute; top: 0px; left: 0px; width: 1092px; height: 48px;"
/>
</flt-semantics>
```
> [!NOTE]
>
> Actually, the `input` above when "copied as element" from Chrome DevTools is missing the `/`, i.e. it's `<input ...>`, not `<input ... />` - it's weird.
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
<img width="1787" alt="Screenshot 2024-09-17 at 4 21 18 PM" src="https://github.com/user-attachments/assets/e73e5d09-1e38-4aea-bc36-2a38bf684262">
</details>
### Logs
n/a
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.1, on macOS 14.6.1 23G93 darwin-x64, locale
en-CO)
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2021.2)
[✗] Cannot determine if IntelliJ is installed
✗ Directory listing failed
[✓] VS Code (version 1.86.1)
[✓] Connected device (2 available)
[✓] Network resources
```
</details>
| a: text input,platform-web,has reproducible steps,P2,team-text-input,triaged-text-input,found in release: 3.24,found in release: 3.26,found in release: 3.27 | low | Critical |
2,532,203,674 | vscode | Problems do not show up in problems panel with repeated tasks | When a problem matcher is contributed from an extension, repeated use of the same exact task produces weird behavior:
1. The first time the task is run with the contributed problem matcher, the errors/problems show up in the problems panel properly.
2. The very next time I run the exact same task, the problem/error somehow disappears entirely: https://github.com/microsoft/vscode-python/pull/24114#issuecomment-2353989195
3. This issue of problems disappearing does not happen if the task terminal is in a brand-new state. (So in order to have problems show up each time with the same task + problem matcher, I would have to exit out of any existing task terminal and re-run.)
@meganrogge also mentioned that this does not seem unique to running a task again: when she ran the vscode build task, there would be errors in the buffer, but none reported in the problems area.
| bug,tasks | low | Critical |
2,532,210,459 | react-native | error unknown option `--config-cmd' | ### Description
After upgrading to react-native 0.75.2, I am facing an issue with the react-native-xcode.sh file.
When I run the application I get this error:
**error unknown option `--config-cmd'**
My react-native-xcode.sh file is below:
```
#!/bin/bash
# Copyright (c) Meta Platforms, Inc. and affiliates.
#
# This source code is licensed under the MIT license found in the
# LICENSE file in the root directory of this source tree.
# Bundle React Native app's code and image assets.
# This script is supposed to be invoked as part of Xcode build process
# and relies on environment variables (including PWD) set by Xcode
# Print commands before executing them (useful for troubleshooting)
set -x -e
DEST=$CONFIGURATION_BUILD_DIR/$UNLOCALIZED_RESOURCES_FOLDER_PATH
# Enables iOS devices to get the IP address of the machine running Metro
if [[ ! "$SKIP_BUNDLING_METRO_IP" && "$CONFIGURATION" = *Debug* && ! "$PLATFORM_NAME" == *simulator ]]; then
for num in 0 1 2 3 4 5 6 7 8; do
IP=$(ipconfig getifaddr en${num} || echo "")
if [ ! -z "$IP" ]; then
break
fi
done
if [ -z "$IP" ]; then
IP=$(ifconfig | grep 'inet ' | grep -v ' 127.' | grep -v ' 169.254.' |cut -d\ -f2 | awk 'NR==1{print $1}')
fi
echo "$IP" > "$DEST/ip.txt"
fi
if [[ "$SKIP_BUNDLING" ]]; then
echo "SKIP_BUNDLING enabled; skipping."
exit 0;
fi
case "$CONFIGURATION" in
*Debug*)
if [[ "$PLATFORM_NAME" == *simulator ]]; then
if [[ "$FORCE_BUNDLING" ]]; then
echo "FORCE_BUNDLING enabled; continuing to bundle."
else
echo "Skipping bundling in Debug for the Simulator (since the packager bundles for you). Use the FORCE_BUNDLING flag to change this behavior."
exit 0;
fi
else
echo "Bundling for physical device. Use the SKIP_BUNDLING flag to change this behavior."
fi
DEV=true
;;
"")
echo "$0 must be invoked by Xcode"
exit 1
;;
*)
DEV=false
;;
esac
# Path to react-native folder inside node_modules
REACT_NATIVE_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
# Most projects have their project root, one level up from their Xcode project dir (the "ios" directory)
PROJECT_ROOT=${PROJECT_ROOT:-"$PROJECT_DIR/.."}
cd "$PROJECT_ROOT" || exit
# Define entry file
if [[ "$ENTRY_FILE" ]]; then
# Use ENTRY_FILE defined by user
:
elif [[ -s "index.ios.js" ]]; then
ENTRY_FILE=${1:-index.ios.js}
else
ENTRY_FILE=${1:-index.js}
fi
# check and assign NODE_BINARY env
# shellcheck source=/dev/null
source "$REACT_NATIVE_DIR/scripts/node-binary.sh"
HERMES_ENGINE_PATH="$PODS_ROOT/hermes-engine"
[ -z "$HERMES_CLI_PATH" ] && HERMES_CLI_PATH="$HERMES_ENGINE_PATH/destroot/bin/hermesc"
# If hermesc is not available and USE_HERMES is not set to false, show error.
if [[ $USE_HERMES != false && -f "$HERMES_ENGINE_PATH" && ! -f "$HERMES_CLI_PATH" ]]; then
echo "error: Hermes is enabled but the hermesc binary could not be found at ${HERMES_CLI_PATH}." \
"Perhaps you need to run 'bundle exec pod install' or otherwise " \
"point the HERMES_CLI_PATH variable to your custom location." >&2
exit 2
fi
[ -z "$NODE_ARGS" ] && export NODE_ARGS=""
[ -z "$CLI_PATH" ] && CLI_PATH="$REACT_NATIVE_DIR/scripts/bundle.js"
[ -z "$COMPOSE_SOURCEMAP_PATH" ] && COMPOSE_SOURCEMAP_PATH="$REACT_NATIVE_DIR/scripts/compose-source-maps.js"
if [[ -z "$BUNDLE_CONFIG" ]]; then
CONFIG_ARG=""
else
CONFIG_ARG="--config $BUNDLE_CONFIG"
fi
BUNDLE_FILE="$CONFIGURATION_BUILD_DIR/main.jsbundle"
EXTRA_ARGS=()
case "$PLATFORM_NAME" in
"macosx")
BUNDLE_PLATFORM="macos"
;;
*)
BUNDLE_PLATFORM="ios"
;;
esac
if [ "${IS_MACCATALYST}" = "YES" ]; then
BUNDLE_PLATFORM="ios"
fi
EMIT_SOURCEMAP=
if [[ ! -z "$SOURCEMAP_FILE" ]]; then
EMIT_SOURCEMAP=true
fi
PACKAGER_SOURCEMAP_FILE=
if [[ $EMIT_SOURCEMAP == true ]]; then
if [[ $USE_HERMES != false ]]; then
PACKAGER_SOURCEMAP_FILE="$CONFIGURATION_BUILD_DIR/$(basename "$SOURCEMAP_FILE")"
else
PACKAGER_SOURCEMAP_FILE="$SOURCEMAP_FILE"
fi
EXTRA_ARGS+=("--sourcemap-output" "$PACKAGER_SOURCEMAP_FILE")
fi
# Hermes doesn't require JS minification.
if [[ $USE_HERMES != false && $DEV == false ]]; then
EXTRA_ARGS+=("--minify" "false")
fi
# Allow opting out of using npx react-native config
if [[ -n "$CONFIG_JSON" ]]; then
EXTRA_ARGS+=("--load-config" "$CONFIG_JSON")
elif [[ -n "$CONFIG_CMD" ]]; then
EXTRA_ARGS+=("--config-cmd" "$CONFIG_APP")
else
EXTRA_ARGS+=("--config-cmd" "$NODE_BINARY $NODE_ARGS $REACT_NATIVE_DIR/cli.js config")
fi
# shellcheck disable=SC2086
"$NODE_BINARY" $NODE_ARGS "$CLI_PATH" $BUNDLE_COMMAND \
$CONFIG_ARG \
--config-cmd "$CONFIG" \
--entry-file "$ENTRY_FILE" \
--platform "$BUNDLE_PLATFORM" \
--dev $DEV \
--reset-cache \
--bundle-output "$BUNDLE_FILE" \
--assets-dest "$DEST" \
"${EXTRA_ARGS[@]}" \
$EXTRA_PACKAGER_ARGS
if [[ $USE_HERMES == false ]]; then
cp "$BUNDLE_FILE" "$DEST/"
BUNDLE_FILE="$DEST/main.jsbundle"
else
EXTRA_COMPILER_ARGS=
if [[ $DEV == true ]]; then
EXTRA_COMPILER_ARGS=-Og
else
EXTRA_COMPILER_ARGS=-O
fi
if [[ $EMIT_SOURCEMAP == true ]]; then
EXTRA_COMPILER_ARGS="$EXTRA_COMPILER_ARGS -output-source-map"
fi
"$HERMES_CLI_PATH" -emit-binary -max-diagnostic-width=80 $EXTRA_COMPILER_ARGS -out "$DEST/main.jsbundle" "$BUNDLE_FILE"
if [[ $EMIT_SOURCEMAP == true ]]; then
HBC_SOURCEMAP_FILE="$DEST/main.jsbundle.map"
"$NODE_BINARY" "$COMPOSE_SOURCEMAP_PATH" "$PACKAGER_SOURCEMAP_FILE" "$HBC_SOURCEMAP_FILE" -o "$SOURCEMAP_FILE"
rm "$HBC_SOURCEMAP_FILE"
rm "$PACKAGER_SOURCEMAP_FILE"
fi
BUNDLE_FILE="$DEST/main.jsbundle"
fi
if [[ $DEV != true && ! -f "$BUNDLE_FILE" ]]; then
echo "error: File $BUNDLE_FILE does not exist. Your environment is misconfigured as Metro was not able to produce the bundle so your release application won't work!" >&2
exit 2
fi
```
### Steps to reproduce
1. yarn ios
2. After `yarn ios`, the app fails to build with the error above.
### React Native Version
0.75.2
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
my react-native info
System:
OS: macOS 14.5
CPU: (8) arm64 Apple M1
Memory: 104.50 MB / 16.00 GB
Shell: 5.9 - /bin/zsh
Binaries:
Node: 19.2.0 - ~/.nvm/versions/node/v19.2.0/bin/node
Yarn: 3.6.4 - /opt/homebrew/bin/yarn
npm: 9.6.6 - ~/.nvm/versions/node/v19.2.0/bin/npm
Watchman: 2022.09.19.00 - /opt/homebrew/bin/watchman
Managers:
CocoaPods: 1.15.2 - /opt/homebrew/bin/pod
SDKs:
iOS SDK:
Platforms: DriverKit 23.2, iOS 17.2, macOS 14.2, tvOS 17.2, visionOS 1.0, watchOS 10.2
Android SDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.18034.62.2411.12071903
Xcode: 15.2/15C500b - /usr/bin/xcodebuild
Languages:
Java: 17.0.11 - /usr/bin/javac
npmPackages:
@react-native-community/cli: Not Found
react: 18.3.1 => 18.3.1
react-native: 0.75.2 => 0.75.2
```
### Stacktrace or Logs
```text
error export CLANG_WARN_EMPTY_BODY\=YES
error unknown option `--config-cmd'
error Failed to build ios project.
```
### Reproducer
https://github.com/react-native-community/reproducer-react-native
### Screenshots and Videos
<img width="318" alt="image" src="https://github.com/user-attachments/assets/405e647e-6cb5-4548-a024-4679fc6ea3a4">
| Needs: Repro,Newer Patch Available,Needs: Attention | low | Critical |
2,532,213,431 | pytorch | High MacOS queue |
## Current Status
Ongoing
## Error looks like
Pending MacOS jobs
## Incident timeline (all times pacific)
Sep 17th
## User impact
Pending MacOS jobs, need to force merge
## Root cause
The number of available MacOS runners drops from 60 to 22
## Mitigation
TBD
## Prevention/followups
TBD
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @seemethere @malfet @pytorch/pytorch-dev-infra | high priority,module: ci,triaged | low | Critical |
2,532,243,253 | rust | Default derived `Hash` impl for `fn` types can lead to subtle bugs |
So I was a bit hesitant to file this bug because the behavior makes sense once you realize what's happening, but I think due to the nature of what can happen it was worth filing an issue. (I briefly searched existing issues and couldn't find anything similar)
Today, `#[derive(Hash)]` is able to derive an implementation for function pointers used as fields in a struct. For example, imagine:
```rs
use std::hash::{Hash, Hasher};
trait ExampleTrait {
fn example() -> ();
}
#[derive(Hash)]
struct FunctionWrapper {
func: fn() -> ()
}
impl FunctionWrapper {
fn new<T: ExampleTrait>(_: T) -> Self {
Self {
func: T::example
}
}
fn print_hash(&self) {
let mut hasher = std::hash::DefaultHasher::new();
self.hash(&mut hasher);
eprintln!("{}", hasher.finish());
}
}
struct Example;
impl ExampleTrait for Example {
fn example() -> () {
()
}
}
struct Example2;
impl ExampleTrait for Example2 {
fn example() -> () {
()
}
}
fn main() {
let example = Example;
let example2 = Example2;
let wrapper = FunctionWrapper::new(example);
let wrapper2 = FunctionWrapper::new(example2);
wrapper.print_hash();
wrapper2.print_hash();
}
```
Now I would expect that `wrapper.print_hash()` and `wrapper2.print_hash()` print different hashes. This works.
```
Compiling playground v0.0.1 (/playground)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.49s
Running `target/debug/playground`
13248834103875764188
6707210748105774930
```
What's unexpected is the second time I run this,
```
Compiling playground v0.0.1 (/playground)
Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.61s
Running `target/debug/playground`
5430550945448582829
17979896429865431302
```
I get different results. When I think about it, this makes sense - you can't make guarantees on where a function pointer is located, and when I look at the `Hash` implementations I'm guessing this ends up falling into the `*const usize` Hash implementation. (I was a bit surprised that there was an implementation for hashing raw pointers to be honest)
It's unexpected because when `Hash` provides implementations you half expect them to be semi-stable hashes (at least for the same compiler version). For example, you can't derive an implementation for floats because you can't guarantee you get the same float each time (at least, that's why I assumed there's no default implementation for f32, f64, etc.), so it's left up to the developer to decide how they want to handle it.
This would be a hack, but I kind of thought it would essentially end up hashing a string literal like, `<Example as ExampleTrait>::example_ptr_0_0` and `<Example2 as ExampleTrait>::example_ptr_0_0`.
To summarize, I don't think the problem is that the function hashes in an unexpected way, but I do feel like it's an anti-pattern to allow it to be derived in the first place.
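Not a fix for the derive itself, just a hedged sketch of a workaround: if run-to-run stability within one compiled binary is enough, hashing the `TypeId` of the implementing type instead of the function pointer avoids the ASLR dependence (the helper name is illustrative, and `TypeId` values are still not stable across compiler versions):

```rust
use std::any::TypeId;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hash something stable for a given binary (the TypeId of the
// implementing type) rather than the function pointer, whose address
// varies between runs under ASLR.
pub fn stable_hash_of<T: 'static>() -> u64 {
    let mut hasher = DefaultHasher::new();
    TypeId::of::<T>().hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // Same type hashes identically; distinct types hash differently.
    assert_eq!(stable_hash_of::<u8>(), stable_hash_of::<u8>());
    assert_ne!(stable_hash_of::<u8>(), stable_hash_of::<u16>());
    println!("ok");
}
```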
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=ea63c430c7c816134027c5ff33c86821
`rustc --version --verbose`:
```
rustc 1.83.0-nightly (9b72238eb 2024-09-14)
binary: rustc
commit-hash: 9b72238eb813e9d06e9e9d270168512fbffd7ee7
commit-date: 2024-09-14
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
```
| T-libs-api,C-bug | low | Critical |
2,532,260,518 | go | cmd/go/internal/test: test cache id does not include umask | ### Go version
go version go1.23.1 darwin/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE='on'
GOARCH='arm64'
GOBIN='/Users/twp/.local/bin'
GOCACHE='/Users/twp/Library/Caches/go-build'
GOENV='/Users/twp/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/twp/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/twp'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/opt/homebrew/Cellar/go/1.23.1/libexec'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/opt/homebrew/Cellar/go/1.23.1/libexec/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.23.1'
GODEBUG=''
GOTELEMETRY='on'
GOTELEMETRYDIR='/Users/twp/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='cc'
CXX='c++'
CGO_ENABLED='1'
GOMOD='/Users/twp/src/go.googlesource.com/go/src/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/sc/4d1t02x92z73bqkq4dvn25h40000gn/T/go-build3190885029=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
I have several tests that create files and check the permissions of the created files. On UNIX systems, the permissions of the created files depend on the value of [umask](https://en.wikipedia.org/wiki/Umask), which is a process-global variable managed by the OS, the same way that environment variables are process-global variables managed by the OS.
I expect to be able to change umask, run `go test`, and have `go test` re-run my tests with the new umask. However, after changing umask, `go test` incorrectly re-uses its cached test results:
```
$ go version
go version go1.23.1 darwin/arm64
$ umask 002 # <--- set umask to 002
$ go test .
ok tmp 0.252s # <--- tests pass when umask is 002
$ umask 022 # <--- change umask to 022
$ go test . # <--- re-run tests
ok tmp (cached) # <--- BUG: test results re-used from cache
$ go test . -count=1 # <--- disabling cache gives correct test result
--- FAIL: TestUmask (0.00s)
umask_test.go:34: got 644, want 664
FAIL
FAIL tmp 0.249s
FAIL
```
The example test is:
```go
package main
import (
"io/fs"
"os"
"path/filepath"
"testing"
)
func TestUmask(t *testing.T) {
// This tests passes when the umask is 002 (e.g. on Debian-based systems),
// but fails when the umask is 022 (e.g. on RedHat-based systems and macOS).
filename := filepath.Join(t.TempDir(), "file")
if err := os.WriteFile(filename, nil, 0o664); err != nil {
t.Fatal(err)
}
fileInfo, err := os.Stat(filename)
if err != nil {
t.Fatal(err)
}
// Ensure that the file's permissions are 0o666 &^ 0o002 == 0o664.
if got, want := fileInfo.Mode().Perm(), fs.FileMode(0o664); got != want {
t.Fatalf("got %03o, want %03o", got, want)
}
}
```
Note that Go creates a test input ID that includes the values of environment variables and the mtimes and sizes of files accessed by the test. This test input ID should also include the initial umask value. I will create a CL that fixes this.
### What did you see happen?
`go test` re-used a cached test result, when it should not have.
### What did you expect to see?
`go test` should not re-use cached test results when the umask is changed. | NeedsInvestigation | low | Critical |
2,532,280,645 | go | os: Open("file/.") does not produce an error on wasip1 | ### Go version
master
### Output of `go env` in your module/workspace:
```shell
$ go env
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/dneil/Library/Caches/go-build'
GOENV='/Users/dneil/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/dneil/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/dneil'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/Users/dneil/src/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/Users/dneil/src/go/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='devel go1.24-adf220a5d5 Mon Sep 9 17:11:52 2024 +0000'
GODEBUG=''
GOTELEMETRY='on'
GOTELEMETRYDIR='/Users/dneil/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='clang'
CXX='clang++'
CGO_ENABLED='1'
GOMOD='/tmp/m2/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/kw/0t4d_x2n4plg9157krpjtxmw0047mf/T/go-build1969548244=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
```
package main
import (
"fmt"
"os"
)
func main() {
_, err := os.Open("file/.")
fmt.Println(err)
}
```
```
$ touch file
$ go run main.go
open file/.: not a directory
$ GOOS=wasip1 GOARCH=wasm go run main.go
<nil>
```
### What did you see happen?
When built with GOOS=wasip1, the os package performs some path cleaning on filenames which results in a terminal "/." being removed. This causes opening a non-directory file to unexpectedly succeed.
### What did you expect to see?
An error opening "file/.", because "file" is not a directory. | NeedsInvestigation | low | Critical |
2,532,292,985 | ollama | I'd like to request a new feature for a workflow that runs a security scan when there is a change to the build system. | How's it going? I'd like to request when the build-system i.e. the Dockerfile, go.mod, requirements.txt etc. are updated, a workflow is triggered that scans a slimmed down resulting docker image and reports security issues. The general idea can be found in https://github.com/rempel1234/ollama/blob/main/.github/workflows/owasp-scan.yaml https://github.com/rempel1234/ollama/blob/main/.github/workflows/qa-sec.yml
https://github.com/rempel1234/ollama/blob/main/.github/workflows/virustotal.yaml
Currently, they'd still need to be fine-tuned to make sure the report is formatted properly (removing any non-high or non-critical findings, or findings that are accepted risks), consolidated into one workflow, and stripped of all failure states; the reporting mechanism could possibly be changed to be an email to hello@ollama..., and the commits should be redone into fewer commits with more meaningful messages... | feature request | low | Critical |
2,532,293,282 | rust | rustdoc: make linking to examples easier and detect dead links | When writing module-level or other high-level documentation, I often want to link to our examples to show users how to do something in more detail. See https://github.com/bevyengine/bevy/pull/15056 for an example of this problem.
This is distinct from the automatically generated links to examples, `rustdoc-scrape-examples`, (which are great!): we want to be able to manually link to specific examples.
@GuillaumeGomez pointed out that this can be done in part using relative paths, e.g. `../../src/custom_loop/custom_loop.rs.html#19`. However, this is very fragile: any change to either file breaks the link with no tooling to detect the breakage. This functionality is also undocumented, and could be broken by rustdoc / docs.rs at any time.
To add full support, he thinks we might need an intra-doc link extension to make it work.
| T-rustdoc,C-enhancement | low | Critical |
2,532,302,745 | langchain | AzureSearch Oauth with ManagedIdentity using DefaultCredentials fallback results in a 403 | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
When running in an App Service configured with a user-assigned managed identity that has a number of permissions assigned, I am unable to use the `AzureSearch` class.
As noted in #26216, explicitly passing an access token fails.
However, the workaround of supplying `None` also does not work when trying to use managed identity rather than a less secure option like service-principal-based auth.
```python
from langchain_community.vectorstores.azuresearch import AzureSearch
# setup your connection params:
SEARCH_SERVICE_ENDPOINT= ".." # Azure AI Search URL of the service to connect to (default URL is https://RESOURCE_NAME.search.windows.net)
embeddings = ... # replace with an instance of langchain.embeddings.base.Embeddings
# Try None for access_token to force fallback behavior in AzureSearch object ==> fails
db = AzureSearch(
azure_search_endpoint= SEARCH_SERVICE_ENDPOINT,
index_name="indexname",
embedding_function=embeddings,
azure_ad_access_token=None,
azure_search_key=None
)
```
### Error Message and Stack Trace (if applicable)
Here are the logs from our AppService, I've redacted a few things but the important things are still here.
```log
2024-09-17T20:33:04.8837325Z 2024-09-17 20:33:04 - Incomplete environment configuration for EnvironmentCredential. These variables are set: AZURE_CLIENT_ID
2024-09-17T20:33:04.8838328Z 2024-09-17 20:33:04 - ManagedIdentityCredential will use App Service managed identity
2024-09-17T20:33:05.0344639Z 2024-09-17 20:33:05 - Request URL: 'http://<host>/msi/token?api-version=REDACTED&resource=REDACTED&client_id=REDACTED'
2024-09-17T20:33:05.0345552Z Request method: 'GET'
2024-09-17T20:33:05.0345607Z Request headers:
2024-09-17T20:33:05.0345647Z 'X-IDENTITY-HEADER': 'REDACTED'
2024-09-17T20:33:05.0345692Z 'User-Agent': 'azsdk-python-identity/1.17.1 Python/3.12.5 (Linux-5.15.158.2-1.cm2-x86_64-with-glibc2.36)'
2024-09-17T20:33:05.0345733Z No body was attached to the request
2024-09-17T20:33:05.2362589Z 2024-09-17 20:33:05 - Response status: 200
2024-09-17T20:33:05.2473564Z Response headers:
2024-09-17T20:33:05.2474192Z 'Content-Type': 'application/json; charset=utf-8'
2024-09-17T20:33:05.2474331Z 'Date': 'Tue, 17 Sep 2024 20:33:05 GMT'
2024-09-17T20:33:05.2474374Z 'Server': 'Kestrel'
2024-09-17T20:33:05.2474414Z 'Transfer-Encoding': 'chunked'
2024-09-17T20:33:05.2474456Z 'X-CORRELATION-ID': 'REDACTED'
2024-09-17T20:33:05.2655045Z 2024-09-17 20:33:05 - DefaultAzureCredential acquired a token from ManagedIdentityCredential
2024-09-17T20:33:05.2655454Z 2024-09-17 20:33:05 - Request URL: 'https://<search-service>.search.windows.net/indexes('index-name')?api-version=REDACTED'
2024-09-17T20:33:05.2662727Z Request method: 'GET'
2024-09-17T20:33:05.2662853Z Request headers:
2024-09-17T20:33:05.2662896Z 'Accept': 'application/json;odata.metadata=minimal'
2024-09-17T20:33:05.2662938Z 'x-ms-client-request-id': '0ab7a9d6-7534-11ef-acd8-da8272092cdd'
2024-09-17T20:33:05.2662983Z 'User-Agent': 'langchain azsdk-python-search-documents/11.5.1 Python/3.12.5 (Linux-5.15.158.2-1.cm2-x86_64-with-glibc2.36)'
2024-09-17T20:33:05.2663021Z 'Authorization': 'REDACTED'
2024-09-17T20:33:05.2663060Z No body was attached to the request
2024-09-17T20:33:05.7624539Z 2024-09-17 20:33:05 - Response status: 403
2024-09-17T20:33:05.7635306Z Response headers:
2024-09-17T20:33:05.7635393Z 'Content-Length': '55'
2024-09-17T20:33:05.7635437Z 'Content-Type': 'application/json; charset=utf-8'
2024-09-17T20:33:05.7635476Z 'Content-Language': 'REDACTED'
2024-09-17T20:33:05.7635514Z 'Server': 'Microsoft-IIS/10.0'
2024-09-17T20:33:05.7635553Z 'Strict-Transport-Security': 'REDACTED'
2024-09-17T20:33:05.7635591Z 'Preference-Applied': 'REDACTED'
2024-09-17T20:33:05.7635632Z 'request-id': '0ab7a9d6-7534-11ef-acd8-da8272092cdd'
2024-09-17T20:33:05.7643451Z 'elapsed-time': 'REDACTED'
2024-09-17T20:33:05.7643616Z 'Date': 'Tue, 17 Sep 2024 20:33:05 GMT'
2024-09-17T20:33:05.8131161Z 2024-09-17 20:33:05 - () Authorization failed.
2024-09-17T20:33:05.8139978Z Code:
2024-09-17T20:33:05.8140130Z Message: Authorization failed.
2024-09-17T20:33:05.8140181Z Traceback (most recent call last):
2024-09-17T20:33:05.8140235Z File "/usr/local/lib/python3.12/site-packages/chainlit/utils.py", line 44, in wrapper
2024-09-17T20:33:05.8140282Z return await user_function(**params_values)
2024-09-17T20:33:05.8530629Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-09-17T20:33:05.8533670Z File "/usr/local/lib/python3.12/site-packages/chainlit/__init__.py", line 164, in with_parent_id
2024-09-17T20:33:05.8533730Z await func(message)
2024-09-17T20:33:05.8533771Z File "/app/chainlit_app.py", line 63, in on_message
2024-09-17T20:33:05.8533811Z vector_store = get_vector_store(default_credential)
2024-09-17T20:33:05.8533850Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-09-17T20:33:05.8533896Z File "/app/retrievers.py", line 30, in get_vector_store
2024-09-17T20:33:05.8533936Z vector_store: AzureSearch = AzureSearch(
2024-09-17T20:33:05.8627932Z ^^^^^^^^^^^^
2024-09-17T20:33:05.8628046Z File "/usr/local/lib/python3.12/site-packages/langchain_community/vectorstores/azuresearch.py", line 335, in __init__
2024-09-17T20:33:05.8628086Z self.client = _get_search_client(
2024-09-17T20:33:05.8628124Z ^^^^^^^^^^^^^^^^^^^
2024-09-17T20:33:05.8628168Z File "/usr/local/lib/python3.12/site-packages/langchain_community/vectorstores/azuresearch.py", line 145, in _get_search_client
2024-09-17T20:33:05.8628208Z index_client.get_index(name=index_name)
2024-09-17T20:33:05.8628251Z File "/usr/local/lib/python3.12/site-packages/azure/core/tracing/decorator.py", line 94, in wrapper_use_tracer
2024-09-17T20:33:05.8690099Z return func(*args, **kwargs)
2024-09-17T20:33:05.8690369Z ^^^^^^^^^^^^^^^^^^^^^
2024-09-17T20:33:05.8690419Z File "/usr/local/lib/python3.12/site-packages/azure/search/documents/indexes/_search_index_client.py", line 155, in get_index
2024-09-17T20:33:05.8690459Z result = self._client.indexes.get(name, **kwargs)
2024-09-17T20:33:05.8690500Z ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-09-17T20:33:05.8690543Z File "/usr/local/lib/python3.12/site-packages/azure/core/tracing/decorator.py", line 94, in wrapper_use_tracer
2024-09-17T20:33:05.8690581Z return func(*args, **kwargs)
2024-09-17T20:33:05.8690619Z ^^^^^^^^^^^^^^^^^^^^^
2024-09-17T20:33:05.8786775Z File "/usr/local/lib/python3.12/site-packages/azure/search/documents/indexes/_generated/operations/_indexes_operations.py", line 849, in get
2024-09-17T20:33:05.8789138Z raise HttpResponseError(response=response, model=error)
2024-09-17T20:33:05.8789925Z azure.core.exceptions.HttpResponseError: () Authorization failed.
2024-09-17T20:33:05.8790517Z Code:
2024-09-17T20:33:05.8790563Z Message: Authorization failed.
```
### Description
In `azuresearch.py`, the function `_get_search_client` uses fallback logic: if the values for `key` and `azure_ad_access_token` are both `None`, then the logic on line 141 that builds the SearchIndexClient looks like this:
```python
SearchIndexClient(endpoint=endpoint, credential=credential, user_agent=user_agent)
```
I believe this is the cause of the failure: digging deeper into the internal logic of the Azure library, there is logic that tries to read an `audience` from the kwargs. When the search client is using a TokenCredential, this value is used to generate the scope for the underlying token request.
I believe that the fix for this issue is to modify the constructor call to pass the audience string for Azure search like this:
```python
SearchIndexClient(endpoint=endpoint, credential=credential, user_agent=user_agent, audience="https://search.azure.com/")
```
### System Info
Running those commands on my dev machine results in failure. I build a container image based on `python:3.12.5-slim-bookworm` that installs the following packages via requirements.txt:
``` log
azure-identity==1.17.1
azure-search-documents==11.5.1
beautifulsoup4==4.12.3
bs4==0.0.2
chainlit==1.1.402
chardet==5.2.0
fastapi==0.110.3
idna==3.8
langchain==0.3.0
langchain-community==0.3.0
langchain-core==0.3.0
langchain-experimental==0.3.0
langchain-openai==0.2.0
langchain-text-splitters==0.3.0
langgraph==0.2.22
langgraph-checkpoint==1.0.10
langsmith==0.1.121
lxml==5.3.0
msal==1.30.0
msal-extensions==1.2.0
pydantic==2.8.2
pydantic_core==2.20.1
PyJWT==2.9.0
python-dotenv==1.0.1
tiktoken==0.7.0
uptrace==1.26.0
urllib3==2.2.2
uvicorn==0.25.0
platform = linux
python = 3.12.5
``` | Ɑ: vector store,stale | low | Critical |
2,532,318,125 | godot | ERROR: Caller thread can't call this function in this node (/root). Use call_deferred() or call_thread_group() instead | ### Tested versions
Reproducible in Godot_v4.4-dev2_linux.x86_64, and Godot_v3
### System information
Linux fedora 38,
### Issue description
I downloaded Godot from the link https://github.com/godotengine/godot-builds/releases/download/4.4-dev2/Godot_v4.4-dev2_linux.x86_64.zip, extracted the program, and created a project; then when I open the editor, this error is shown:
```
ERROR: Caller thread can't call this function in this node (/root). Use call_deferred() or call_thread_group() instead.
at: propagate_notification (scene/main/node.cpp:2446)
================================================================
handle_crash: Program crashed with signal 11
Engine version: Godot Engine v4.4.dev2.official (97ef3c837263099faf02d8ebafd6c77c94d2aaba)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[1] /lib64/libc.so.6(+0x3dbb0) [0x7f624994abb0] (??:0)
[2] llvm::CmpInst::Create(llvm::Instruction::OtherOps, llvm::CmpInst::Predicate, llvm::Value*, llvm::Value*, llvm::Twine const&, llvm::Instruction*) (??:0)
[3] [0x7f622026e8c0] (??:0)
-- END OF BACKTRACE --
```





### Steps to reproduce
The images in the issue description show how to reproduce the errors.
### Minimal reproduction project (MRP)
[fb_godot_2.zip](https://github.com/user-attachments/files/17036182/fb_godot_2.zip)
| needs testing,crash | low | Critical |
2,532,321,153 | svelte | Compiler option for bundler to provide optimization info about asset import | ### Describe the problem
Developers and the bundler know that `img` is an asset in `import img from './myimg.png';`. However, the Svelte compiler does not know this, which limits the ability to inline it into the template and otherwise optimize it. The fact that it's an asset means it has a few key properties: it is a string, it is always defined, it is immutable, and it is the same on server and client so does not affect hydration.
### Describe the proposed solution
Four options:
- Hardcode knowledge that `.png`, `.jpg`, `.svg`, etc. file extensions represent assets. This is probably a pretty safe assumption and would minimize API surface area. It's not the ideal long-term solution, but would be a quick win in terms of reducing the output file size
- A compiler option such as `isAsset`. The bundler can provide a function which Svelte can call to determine whether an import URL is an asset. This is still relatively easy to implement, but introduces an API that may eventually be obsoleted by some more general API. I think this is okay, as we could deprecate it when that time comes, and since it's just an optimization it is easy to ignore any value passed in that manner
- Some more general API. This is impossible to do without an API because only the bundler knows if the same value will be returned both on the client side and server side (e.g. aliases or exports conditions). This cannot be implemented purely in Svelte (https://github.com/sveltejs/svelte/pull/13242) or purely in Vite (https://github.com/vitejs/vite/issues/18119) and requires the two to communicate. This could be something like:
```
bundler: {
resolveId,
load
}
```
- `vite-plugin-svelte` could do some preprocessing to turn imports into constants so that they can be inlined in the templates
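A rough sketch of what the hardcoded-extension variant (or a default for the proposed `isAsset` option) could look like. The extension list and the query/hash-suffix handling are assumptions for illustration, not Svelte or Vite API:

```typescript
const ASSET_EXTENSIONS = new Set([
  ".png", ".jpg", ".jpeg", ".gif", ".svg", ".webp", ".avif",
]);

// Treat an import specifier as an asset based on its file extension,
// ignoring bundler suffixes like "?url" or "#hash".
export function isAsset(source: string): boolean {
  const clean = source.split(/[?#]/)[0];
  const dot = clean.lastIndexOf(".");
  return dot !== -1 && ASSET_EXTENSIONS.has(clean.slice(dot).toLowerCase());
}
```

An extension-only check cannot see aliases or export conditions, which is why the more general bundler-provided API is still needed for full correctness.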
### Importance
nice to have | perf | low | Minor |
2,532,323,414 | go | cmd/go/internal/modcmd: download with -json flag doesn't print JSON objects to stdout for some module download errors | ### Go version
go version go1.23.1 darwin/arm64
### Output of `go env -changed` in your module/workspace
<details><br>
```sh
GOPROXY='http://127.0.0.1'
```
</details>
### What did you do?
https://pkg.go.dev/cmd/go#hdr-Download_modules_to_local_cache says:
> The -json flag causes download to print a sequence of JSON objects to standard output, describing each downloaded module (or failure), corresponding to this Go struct: `type Module struct { ... }`
It seems that doesn't always happen for some networking errors observed in the Go build system ([build 8736907273916323825](https://logs.chromium.org/logs/golang/buildbucket/cr-buildbucket/8736907273916323825/+/u/step/18/log/3) being one example). Consider a minimal set of steps which seems to reproduce the problem:
```
$ export GOPROXY=http://127.0.0.1 # simulate an error downloading modules
$ cd $(mktemp -d)
$ printf 'module test\nrequire example.com v0.0.0\n' > go.mod
$ go mod download -json
```
### What did you see happen?
No JSON is printed to stdout:
```
$ go mod download -json 2>/dev/null
$
```
### What did you expect to see?
JSON is printed to stdout, describing the error:
```
$ go mod download -json 2>/dev/null
{
"Path": "example.com",
"Version": "v0.0.0",
"Error": "Get \"http://127.0.0.1/example.com/@v/v0.0.0.mod\": dial tcp 127.0.0.1:80: connect: connection refused"
}
$
```
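Since `-json` emits a sequence of concatenated JSON objects rather than a single JSON array, a consumer has to decode them incrementally. A minimal sketch of such a consumer in Python (the sample input below is illustrative, not actual tool output):

```python
import json

def parse_json_stream(text):
    """Parse a stream of concatenated JSON objects, as emitted by
    `go mod download -json`, into a list of dicts."""
    decoder = json.JSONDecoder()
    objects = []
    idx = 0
    while idx < len(text):
        # Skip whitespace between objects.
        while idx < len(text) and text[idx].isspace():
            idx += 1
        if idx >= len(text):
            break
        obj, idx = decoder.raw_decode(text, idx)
        objects.append(obj)
    return objects

# Illustrative output resembling the expected behavior above.
sample = '{"Path": "example.com", "Version": "v0.0.0", "Error": "connection refused"}\n'
mods = parse_json_stream(sample)
failed = [m for m in mods if "Error" in m]
```

A consumer written this way is exactly what breaks when the failure case above produces no JSON object at all: `failed` stays empty even though the download failed.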
CC @matloob, @samthanawalla. | NeedsInvestigation,GoCommand | low | Critical |
2,532,393,721 | rust | Tracking issue for pin ergonomics | This is a tracking issue for work on pin ergonomics.
The feature gate for the issue is `#![feature(pin_ergonomics)]`.
### About tracking issues
Tracking issues are used to record the overall progress of implementation. They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions. A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature. Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
### Steps
- [x] Approve as lang experiment.
- We accepted this experiment in the 2024-09-18 lang triage meeting.
- [ ] Accept an RFC.
- [x] Implement pin reborrowing in nightly.
- https://github.com/rust-lang/rust/pull/130526
- https://github.com/rust-lang/rust/pull/130633
- [ ] Implement pin autoref in nightly.
- [ ] Implement `&pin const` / `&pin mut` constructor syntax in nightly.
- [x] Implement `&pin const` / `&pin mut` type syntax in nightly.
- https://github.com/rust-lang/rust/pull/130635
- [ ] Implement `&pin const self` / `&pin mut self` argument syntax in nightly.
- [ ] Implement `#[pin]` struct field annotations and `drop` changes in nightly.
- [ ] Investigate affordances for unsafe parts of the `Pin` API.
- [ ] Add documentation to the [dev guide][].
- See the [instructions][doc-guide].
- [ ] Add documentation to the [reference][].
- See the [instructions][reference-instructions].
- [ ] Add formatting for new syntax to the [style guide][].
- See the [nightly style procedure][].
- [ ] Stabilize.
- See the [instructions][stabilization-instructions].
[dev guide]: https://github.com/rust-lang/rustc-dev-guide
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
[edition guide]: https://github.com/rust-lang/edition-guide
[nightly style procedure]: https://github.com/rust-lang/style-team/blob/master/nightly-style-procedure.md
[reference]: https://github.com/rust-lang/reference
[reference-instructions]: https://github.com/rust-lang/reference/blob/master/CONTRIBUTING.md
[stabilization-instructions]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[style guide]: https://github.com/rust-lang/rust/tree/master/src/doc/style-guide
### Unresolved Questions
TODO.
### Related
TODO.
cc @eholk @rust-lang/lang
| T-lang,C-tracking-issue,B-experimental,F-pin_ergonomics | low | Critical |
2,532,421,835 | flutter | `TextPainter.getFullHeightForCaret` reports incorrect results for a space as the last character | ### Steps to reproduce
1. Run the provided sample code
### Expected results
The caret at the end should be the same size as the other two.
### Actual results
The caret at the end is shorter than the other two.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter/rendering.dart';
void main() {
runApp(const MainApp());
}
class MainApp extends StatefulWidget {
const MainApp({super.key});
@override
State<MainApp> createState() => _MainAppState();
}
class _MainAppState extends State<MainApp> {
final _textKey = GlobalKey();
final _sampleText = 'ABC DEFGH '; // Length is 10
final _startTextPosition = const TextPosition(offset: 0);
Offset? _startCaretOffset;
double _startCaretHeight = 0;
final _middleTextPosition = const TextPosition(offset: 4);
Offset? _middleCaretOffset;
double _middleCaretHeight = 0;
/// The position immediately after the last character (the space character).
final _endTextPosition = const TextPosition(offset: 10);
Offset? _endCaretOffset;
double _endCaretHeight = 0;
@override
void initState() {
super.initState();
WidgetsBinding.instance.addPostFrameCallback((duration) {
if (!mounted) {
return;
}
final renderParagraph = _textKey.currentContext!.findRenderObject() as RenderParagraph;
setState(() {
_startCaretOffset = renderParagraph.getOffsetForCaret(_startTextPosition, Rect.zero);
_startCaretHeight = renderParagraph.getFullHeightForCaret(_startTextPosition);
_middleCaretOffset = renderParagraph.getOffsetForCaret(_middleTextPosition, Rect.zero);
_middleCaretHeight = renderParagraph.getFullHeightForCaret(_middleTextPosition);
_endCaretOffset = renderParagraph.getOffsetForCaret(_endTextPosition, Rect.zero);
_endCaretHeight = renderParagraph.getFullHeightForCaret(_endTextPosition);
});
});
}
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
Stack(
clipBehavior: Clip.none,
children: [
Container(
color: Colors.yellow,
child: Text(
_sampleText,
key: _textKey,
style: const TextStyle(
fontSize: 40,
color: Colors.black,
),
),
),
if (_startCaretOffset != null)
Positioned(
left: _startCaretOffset!.dx,
top: _startCaretOffset!.dy,
child: Container(
width: 2,
height: _startCaretHeight,
color: Colors.red,
),
),
if (_middleCaretOffset != null)
Positioned(
left: _middleCaretOffset!.dx,
top: _middleCaretOffset!.dy,
child: Container(
width: 2,
height: _middleCaretHeight,
color: Colors.red,
),
),
if (_endCaretOffset != null)
Positioned(
left: _endCaretOffset!.dx,
top: _endCaretOffset!.dy,
child: Container(
width: 2,
height: _endCaretHeight,
color: Colors.red,
),
),
],
),
if (_startCaretOffset != null) Text('Letter "B" caret height: $_startCaretHeight'),
if (_middleCaretOffset != null) Text('Letter "F" caret height: $_middleCaretHeight'),
if (_endCaretOffset != null) Text('Space caret height: $_endCaretHeight'),
],
),
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
<details open><summary>Logs</summary>
No relevant info.
</details>
### Flutter Doctor output
<details><summary>Doctor output</summary>
```console
$ flutter doctor -v
[✓] Flutter (Channel master, 3.25.0-1.0.pre.153, on macOS 15.0 24A335
darwin-arm64, locale en-BR)
• Flutter version 3.25.0-1.0.pre.153 on channel master at
/Users/angelosilvestre/dev/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17f63272a0 (3 weeks ago), 2024-08-27 19:30:17 +0000
• Engine revision 7d751acc81
• Dart version 3.6.0 (build 3.6.0-175.0.dev)
• DevTools version 2.39.0-dev.15
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/angelosilvestre/Library/Android/sdk
• Platform android-35, build-tools 34.0.0
• Java binary at: /Applications/Android
Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build
17.0.6+0-17.0.6b829.9-10027231)
• All Android licenses accepted.
[!] Xcode - develop for iOS and macOS (Xcode 16.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16A242d
! iOS 18.0 Simulator not installed; this may be necessary for iOS and macOS
development.
To download and install the platform, open Xcode, select Xcode > Settings
> Platforms,
and click the GET button for the required platform.
For more information, please visit:
https://developer.apple.com/documentation/xcode/installing-additional-si
mulator-runtimes
! CocoaPods 1.12.1 out of date (1.13.0 is recommended).
CocoaPods is a package manager for iOS or macOS platform code.
Without CocoaPods, plugins will not work on iOS or macOS.
For more info, see https://flutter.dev/to/platform-plugins
To update CocoaPods, see
https://guides.cocoapods.org/using/getting-started.html#updating-cocoapods
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2022.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build
17.0.6+0-17.0.6b829.9-10027231)
[✓] VS Code (version 1.93.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.96.0
[✓] Connected device (3 available)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.0 24A335
darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.0 24A335
darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 129.0.6668.58
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| framework,d: api docs,has reproducible steps,P2,team-text-input,triaged-text-input,found in release: 3.24,found in release: 3.26 | low | Minor |
2,532,435,531 | flutter | StateError from Multiple Stream Listeners in Debugger Attach | Flutter version 3.24.3 on channel stable
When attempting to attach a debugger to a running Flutter application using the flutter attach command, an error occurs indicating that a stream has already been listened to, leading to an unexpected termination of the Flutter process. This issue surfaces when multiple instances of the Dart Development Service (DDS) attempt to listen on the same stream without proper handling for single-subscription streams, causing a StateError.
This was the solution:
packages/flutter_tools/lib/src/resident_runner.dart
From:
```
Stream<Uri?>? vmServiceUris;
```
To:
```
StreamController<Uri?>? _vmServiceUrisController;
Stream<Uri?> get vmServiceUris => _vmServiceUrisController?.stream ?? const Stream<Uri?>.empty();
set vmServiceUris(Stream<Uri?> value) => _vmServiceUrisController?.sink.addStream(value);
```
## command
flutter attach --no-version-check --suppress-analytics --debug --device-id emulator-5554 --target /integration_test/test_bundle.dart
## exception
StateError: Bad state: Stream has already been listened to.
```
#0 _StreamController._subscribe (dart:async/stream_controller.dart:686:7)
#1 _ControllerStream._createSubscription (dart:async/stream_controller.dart:837:19)
#2 _StreamImpl.listen (dart:async/stream_impl.dart:497:9)
#3 new _ForwardingStreamSubscription (dart:async/stream_pipe.dart:114:10)
#4 _ForwardingStream._createSubscription (dart:async/stream_pipe.dart:86:16)
#5 _ForwardingStream.listen (dart:async/stream_pipe.dart:81:12)
#6 FlutterDevice.connect (package:flutter_tools/src/resident_runner.dart:267:35)
#7 ResidentRunner.connectToServiceProtocol (package:flutter_tools/src/resident_runner.dart:1404:21)
#8 HotRunner.attach (package:flutter_tools/src/run_hot.dart:236:13)
#9 AttachCommand._attachToDevice (package:flutter_tools/src/commands/attach.dart:403:31)
<asynchronous suspension>
#10 AttachCommand.runCommand (package:flutter_tools/src/commands/attach.dart:265:5)
<asynchronous suspension>
#11 FlutterCommand.run.<anonymous closure> (package:flutter_tools/src/runner/flutter_command.dart:1408:27)
<asynchronous suspension>
#12 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#13 CommandRunner.runCommand (package:args/command_runner.dart:212:13)
<asynchronous suspension>
#14 FlutterCommandRunner.runCommand.<anonymous closure> (package:flutter_tools/src/runner/flutter_command_runner.dart:420:9)
<asynchronous suspension>
#15 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#16 FlutterCommandRunner.runCommand (package:flutter_tools/src/runner/flutter_command_runner.dart:364:5)
<asynchronous suspension>
#17 run.<anonymous closure>.<anonymous closure> (package:flutter_tools/runner.dart:130:9)
<asynchronous suspension>
#18 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#19 main (package:flutter_tools/executable.dart:93:3)
<asynchronous suspension>
```
## flutter doctor
```
[!] Flutter (Channel stable, 3.24.3, on macOS 13.6.4 22G513 darwin-x64, locale en-BR)
• Flutter version 3.24.3 on channel stable at /Users/superuser/.asdf/installs/flutter/3.24.3-stable
! Upstream repository git@github.com:superuser/packages.git is not a standard remote.
Set environment variable "FLUTTER_GIT_URL" to git@github.com:superuser/packages.git to dismiss this error.
• Framework revision 2663184aa7 (6 days ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
• If those were intentional, you can disregard the above warnings; however it is recommended to use "git" directly to perform update checks and upgrades.
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/superuser/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• ANDROID_HOME = /Users/superuser/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.9+0-17.0.9b1087.7-11185874)
• All Android licenses accepted.
[!] Xcode - develop for iOS and macOS (Xcode 15.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15C500b
✗ CocoaPods installed but not working.
You appear to have CocoaPods installed but it is not working.
This can happen if the version of Ruby that CocoaPods was installed with is different from the one being used to invoke it.
This can usually be fixed by re-installing CocoaPods.
For re-installation instructions, see https://guides.cocoapods.org/using/getting-started.html#installation
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2023.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.9+0-17.0.9b1087.7-11185874)
[✓] IntelliJ IDEA Ultimate Edition (version 2023.3.7)
• IntelliJ at /Applications/IntelliJ IDEA.app
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
[✓] VS Code (version 1.93.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.96.0
[✓] Connected device (4 available)
• sdk gphone64 x86 64 (mobile) • emulator-5554 • android-x64 • Android 14 (API 34) (emulator)
• sdk gphone64 x86 64 (mobile) • emulator-5556 • android-x64 • Android 14 (API 34) (emulator)
• macOS (desktop) • macos • darwin-x64 • macOS 13.6.4 22G513 darwin-x64
• Chrome (web) • chrome • web-javascript • Google Chrome 128.0.6613.138
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 2 categories.
```
| c: crash,tool,team-tool | low | Critical |
2,532,530,849 | next.js | revalidatePath doesn't work with debouncing and page navigation | ### Link to the code that reproduces this issue
https://codesandbox.io/p/github/eduardodallmann/app-next-server-action-problem-2
### To Reproduce
1. Go to the events page
2. Edit one of the events by clicking on edit
3. Quickly click on the backdrop
4. The list in the table will not be updated

### Current vs. Expected behavior
I will describe how my application works.
It has a listing of events. When I click edit, I navigate to events/[slug]. The form will use a server action with debouncing to save. To control this debouncing I use context api, also to show on the screen that it is saving. After making an edit to the form and waiting 2 seconds, the data is saved and the /events path is revalidated. Everything works perfectly.
Now I will describe the problem.
When I edit the form and click on the backdrop quickly before 2 seconds, the drawer is closed and the application navigates to /events again. When the 2 seconds are complete, the server action is executed, the data change is saved, but the list in the table is not updated.
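One common mitigation for this class of race is to flush the pending debounced save before navigating away, so the server action (and its `revalidatePath`) runs before the route change rather than after. A minimal, framework-agnostic sketch of that idea in Python (illustrative only; the real fix would live in the React context that owns the debounce):

```python
import threading

class Debouncer:
    """Delay a callback until `delay` seconds after the last call,
    mirroring the 2-second save debounce described above."""
    def __init__(self, delay, fn):
        self.delay = delay
        self.fn = fn
        self._timer = None

    def call(self, *args):
        # Each new call cancels the previously scheduled save.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.delay, self.fn, args)
        self._timer.start()

    def flush(self):
        """Run the pending callback immediately, e.g. right before navigation."""
        if self._timer is not None:
            self._timer.cancel()
            self.fn(*self._timer.args)
            self._timer = None

saved = []
d = Debouncer(2.0, saved.append)
d.call("edit-1")   # user edits the form
d.flush()          # backdrop clicked: save now instead of waiting 2 seconds
```

With a flush on backdrop click, the save completes before the list page is shown again, so the revalidated data is what the table renders.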
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024
Available memory (MB): 31944
Available CPU cores: 24
Binaries:
Node: 18.19.0
npm: 10.8.1
Yarn: N/A
pnpm: 9.10.0
Relevant Packages:
next: 14.2.12 // Latest available version is detected (14.2.12).
eslint-config-next: 14.2.12
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Navigation
### Which stage(s) are affected? (Select all that apply)
next dev (local), Vercel (Deployed)
### Additional context
I tested with 14.2.12 and 15.0.0-canary.158
Repo https://github.com/eduardodallmann/app-next-server-action-problem-2 | bug,Navigation | low | Minor |
2,532,574,848 | flutter | flutter web textfield is hard to open copy and paste menu with double-tap on ios phone | ### Steps to reproduce
On Flutter Web for Android, after long-pressing a TextField, the paste functionality appears correctly. However, on Flutter Web for iOS, users typically double-tap to bring up the paste menu. When double-tapping, the menu briefly appears but quickly disappears. If the TextField contains text, it gets fully selected. It seems that the "select all" action overrides the paste menu. Long-pressing, however, works as expected and correctly opens the paste functionality.
### Expected results
When double-tapping on ios web, shows the copy and paste menu.
### Actual results
When double-tapping, the menu briefly appears but quickly disappears.
### Code sample
<details open><summary>Code sample</summary>
```dart
@override
Widget build(BuildContext context) {
return TextField();
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/59595084-3379-40f4-a5b1-e51f6a67a66e
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.1, on macOS 14.5 23F79 darwin-arm64, locale zh-Hant-TW)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 15.0.1)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2022.3)
[✓] VS Code (version 1.92.2)
```
</details>
| a: text input,platform-web,e: OS-version specific,has reproducible steps,P2,browser: safari-ios,team-web,triaged-web,found in release: 3.24,found in release: 3.26 | low | Major |
2,532,591,465 | pytorch | Async NCCL communication blocks CUDA kernel in the first run | ### 🐛 Describe the bug
Async NCCL communications from `torch.distributed` should run in parallel with CUDA compute kernels, but traces from `torch.profiler` show this is not true for the first run. However, the asynchronicity works from the second run onward.
Reproduction:
```python
import torch
torch.distributed.init_process_group(backend='nccl')
rank = torch.distributed.get_rank()
torch.cuda.set_device(rank)
torch.distributed.barrier()
with torch.profiler.profile(
activities=[
torch.profiler.ProfilerActivity.CPU,
torch.profiler.ProfilerActivity.CUDA,
]
) as p:
if rank == 0:
x = torch.empty(1000000000, device='cuda')
x.fill_(2.3)
work = torch.distributed.isend(x, 1)
work.wait()
else:
y = torch.empty(1000000000, device='cuda')
work = torch.distributed.irecv(y, 0)
a = torch.ones((10000, 10000), dtype=torch.float32, device='cuda')
b = torch.ones((10000, 10000), dtype=torch.float32, device='cuda')
c = a @ b
work.wait()
if rank == 0:
x = torch.empty(1000000000, device='cuda')
x.fill_(2.3)
work = torch.distributed.isend(x, 1)
work.wait()
else:
y = torch.empty(1000000000, device='cuda')
work = torch.distributed.irecv(y, 0)
a = torch.ones((10000, 10000), dtype=torch.float32, device='cuda')
b = torch.ones((10000, 10000), dtype=torch.float32, device='cuda')
c = a @ b
work.wait()
p.export_chrome_trace(f"rank{rank}.json")
```
The trace from Rank 1 (the receiving rank) is like this:

The two pairs of `nccl:recv` and `volta_sgemm_128x64tn` should both overlap, but only one of them overlaps.
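Whether two trace events overlap can also be checked programmatically from the exported Chrome trace, where each complete event carries a `ts` (start, µs) and `dur` (duration, µs). A minimal sketch (the event dicts in the assertions are illustrative, not taken from the actual trace):

```python
import json

def events_overlap(e1, e2):
    """Return True if two Chrome-trace events overlap in time.

    Each event dict carries 'ts' (start timestamp, us) and 'dur'
    (duration, us), as in files written by export_chrome_trace."""
    s1, d1 = e1["ts"], e1.get("dur", 0)
    s2, d2 = e2["ts"], e2.get("dur", 0)
    return s1 < s2 + d2 and s2 < s1 + d1

def find_events(trace_path, name_substring):
    """Collect complete ('ph' == 'X') events whose name contains the substring."""
    with open(trace_path) as f:
        trace = json.load(f)
    return [e for e in trace["traceEvents"]
            if e.get("ph") == "X" and name_substring in e.get("name", "")]

# Hypothetical usage against the rank-1 trace exported above:
# recvs = find_events("rank1.json", "nccl:recv")
# gemms = find_events("rank1.json", "volta_sgemm")
# print([events_overlap(r, g) for r, g in zip(recvs, gemms)])
```

On the trace shown above, this check would report overlap for the second `nccl:recv`/GEMM pair but not the first.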
### Versions
```
Collecting environment information...
PyTorch version: 2.3.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Spack GCC) 11.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.36
Python version: 3.9.12 (main, Nov 2 2022, 21:57:59) [GCC 10.2.1 20210110] (64-bit runtime)
Python platform: Linux-6.1.0-22-amd64-x86_64-with-glibc2.36
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
CPU(s) scaling MHz: 37%
CPU max MHz: 3700.0000
CPU min MHz: 1000.0000
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] torch==2.3.0+cu118
[pip3] triton==2.3.0
[conda] Could not collect
```
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed | low | Critical |
2,532,596,283 | flutter | Context null in MaterialLocalizations.of(context) (package:flutter/src/material/material_localizations.dart:684) | ### Steps to reproduce
The iPhone 14 Pro (iOS 17.4.1) occasionally displays an abnormal TabBar.
Not all iOS devices exhibit the problem.
Android devices were not found to have this problem.
```dart
static MaterialLocalizations of(BuildContext context) {
assert(debugCheckHasMaterialLocalizations(context));
return Localizations.of<MaterialLocalizations>(context, MaterialLocalizations)!;
}
```
This API returns null
Two similar issues related to this:
https://github.com/flutter/flutter/issues/152979
https://github.com/flutter/flutter/issues/101292
Our workaround is to have the user reinstall the app, which usually prevents the issue from recurring
Full stacktrace
```
# Null check operator used on a null value
#0 MaterialLocalizations.of (package:flutter/src/material/material_localizations.dart:684)
#1 _TabBarState.build (package:ylcstudent/kffa/widgets/custom_tabbar.dart:1323)
#2 StatefulElement.build (package:flutter/src/widgets/framework.dart:5583)
#3 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5471)
#4 StatefulElement.performRebuild (package:flutter/src/widgets/framework.dart:5634)
#5 Element.rebuild (package:flutter/src/widgets/framework.dart:5187)
#6 StatefulElement.update (package:flutter/src/widgets/framework.dart:5657)
#7 Element.updateChild (package:flutter/src/widgets/framework.dart:3815)
#8 SingleChildRenderObjectElement.update (package:flutter/src/widgets/framework.dart:6743)
#9 Element.updateChild (package:flutter/src/widgets/framework.dart:3815)
#10 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5496)
#11 Element.rebuild (package:flutter/src/widgets/framework.dart:5187)
#12 StatelessElement.update (package:flutter/src/widgets/framework.dart:5547)
#13 Element.updateChild (package:flutter/src/widgets/framework.dart:3815)
#14 Element.updateChildren (package:flutter/src/widgets/framework.dart:3964)
#15 MultiChildRenderObjectElement.update (package:flutter/src/widgets/framework.dart:6896)
#16 Element.updateChild (package:flutter/src/widgets/framework.dart:3815)
#17 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5496)
#18 Element.rebuild (package:flutter/src/widgets/framework.dart:5187)
#19 ProxyElement.update (package:flutter/src/widgets/framework.dart:5800)
#20 Element.updateChild (package:flutter/src/widgets/framework.dart:3815)
#21 Element.updateChildren (package:flutter/src/widgets/framework.dart:3964)
#22 MultiChildRenderObjectElement.update (package:flutter/src/widgets/framework.dart:6896)
#23 Element.updateChild (package:flutter/src/widgets/framework.dart:3815)
#24 SingleChildRenderObjectElement.update (package:flutter/src/widgets/framework.dart:6743)
#25 Element.updateChild (package:flutter/src/widgets/framework.dart:3815)
#26 SingleChildRenderObjectElement.update (package:flutter/src/widgets/framework.dart:6743)
#27 Element.updateChild (package:flutter/src/widgets/framework.dart:3815)
#28 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5496)
#29 Element.rebuild (package:flutter/src/widgets/framework.dart:5187)
#30 StatelessElement.update (package:flutter/src/widgets/framework.dart:5547)
#31 Element.updateChild (package:flutter/src/widgets/framework.dart:3815)
#32 Element.updateChildren (package:flutter/src/widgets/framework.dart:3964)
#33 MultiChildRenderObjectElement.update (package:flutter/src/widgets/framework.dart:6896)
#34 Element.updateChild (package:flutter/src/widgets/framework.dart:3815)
#35 SingleChildRenderObjectElement.update (package:flutter/src/widgets/framework.dart:6743)
#36 Element.updateChild (package:flutter/src/widgets/framework.dart:3815)
#37 SingleChildRenderObjectElement.update (package:flutter/src/widgets/framework.dart:6743)
#38 Element.updateChild (package:flutter/src/widgets/framework.dart:3815)
#39 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5496)
#40 Element.rebuild (package:flutter/src/widgets/framework.dart:5187)
#41 StatelessElement.update (package:flutter/src/widgets/framework.dart:5547)
#42 Element.updateChild (package:flutter/src/widgets/framework.dart:3815)
#43 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5496)
#44 StatefulElement.performRebuild (package:flutter/src/widgets/framework.dart:5634)
#45 Element.rebuild (package:flutter/src/widgets/framework.dart:5187)
#46 StatefulElement.update (package:flutter/src/widgets/framework.dart:5657)
#47 Element.updateChild (package:flutter/src/widgets/framework.dart:3815)
#48 SingleChildRenderObjectElement.update (package:flutter/src/widgets/framework.dart:6743)
#49 Element.updateChild (package:flutter/src/widgets/framework.dart:3815)
#50 SingleChildRenderObjectElement.update (package:flutter/src/widgets/framework.dart:6743)
#51 Element.updateChild (package:flutter/src/widgets/framework.dart:3815)
#52 ComponentElement.performRebuild (package:flutter/src/widgets/framework.dart:5496)
#53 Element.rebuild (package:flutter/src/widgets/framework.dart:5187)
#54 StatelessElement.update (package:flutter/src/widgets/framework.dart:5547)
#55 Element.updateChild (package:flutter/src/widgets/framework.dart:3815)
#56 _SliverPersistentHeaderElement._build.<anonymous closure> (package:flutter/src/widgets/sliver_persistent_header.dart:298)
#57 BuildOwner.buildScope (package:flutter/src/widgets/framework.dart:2835)
#58 _SliverPersistentHeaderElement._build (package:flutter/src/widgets/sliver_persistent_header.dart:296)
#59 _RenderSliverPersistentHeaderForWidgetsMixin.updateChild (package:flutter/src/widgets/sliver_persistent_header.dart:382)
#60 RenderSliverPersistentHeader.layoutChild.<anonymous closure> (package:flutter/src/rendering/sliver_persistent_header.dart:223)
#61 RenderObject.invokeLayoutCallback.<anonymous closure> (package:flutter/src/rendering/object.dart:2657)
#62 PipelineOwner._enableMutationsToDirtySubtrees (package:flutter/src/rendering/object.dart:1071)
#63 RenderObject.invokeLayoutCallback (package:flutter/src/rendering/object.dart:2657)
#64 RenderSliverPersistentHeader.layoutChild (package:flutter/src/rendering/sliver_persistent_header.dart:221)
#65 RenderSliverPinnedPersistentHeader.performLayout (package:flutter/src/rendering/sliver_persistent_header.dart:417)
#66 RenderObject.layout (package:flutter/src/rendering/object.dart:2546)
#67 RenderViewportBase.layoutChildSequence (package:flutter/src/rendering/viewport.dart:601)
#68 RenderViewport._attemptLayout (package:flutter/src/rendering/viewport.dart:1554)
#69 RenderViewport.performLayout (package:flutter/src/rendering/viewport.dart:1463)
#70 RenderObject._layoutWithoutResize (package:flutter/src/rendering/object.dart:2385)
#71 PipelineOwner.flushLayout (package:flutter/src/rendering/object.dart:1025)
#72 PipelineOwner.flushLayout (package:flutter/src/rendering/object.dart:1038)
#73 RendererBinding.drawFrame (package:flutter/src/rendering/binding.dart:591)
#74 WidgetsBinding.drawFrame (package:flutter/src/widgets/binding.dart:986)
#75 RendererBinding._handlePersistentFrameCallback (package:flutter/src/rendering/binding.dart:457)
#76 SchedulerBinding._invokeFrameCallback (package:flutter/src/scheduler/binding.dart:1325)
#77 SchedulerBinding.handleDrawFrame (package:flutter/src/scheduler/binding.dart:1255)
#78 SchedulerBinding._handleDrawFrame (package:flutter/src/scheduler/binding.dart:1113)
#79 _rootRun (dart:async/zone.dart:1399)
#80 _CustomZone.run (dart:async/zone.dart:1301)
#81 _CustomZone.runGuarded (dart:async/zone.dart:1209)
#82 _invoke (dart:ui/hooks.dart:314)
#83 PlatformDispatcher._drawFrame (dart:ui/platform_dispatcher.dart:383)
#84 _drawFrame (dart:ui/hooks.dart:283)
```
### Expected results
The iPhone 14 Pro (system version 17.4.1) always displays a normal TabBar
### Actual results
The iPhone 14 Pro (system version 17.4.1) occasionally displays an abnormal TabBar
### Code sample
<details open><summary>Code sample</summary>
```dart
class HomeApp extends StatelessWidget {
const HomeApp({Key? key}) : super(key: key);
@override
Widget build(BuildContext context) {
return MaterialApp(
localizationsDelegates: const [
// This line is the key part
RefreshLocalizations.delegate,
GlobalMaterialLocalizations.delegate,
GlobalCupertinoLocalizations.delegate,
GlobalWidgetsLocalizations.delegate,
],
supportedLocales: const [
Locale('zh', 'CN'),
],
home: HomePage(),
);
}
}
class HomePage extends StatefulWidget {
@override
State<HomePage> createState() => _HomePageState();
}
class _HomePageState extends State<HomePage> {
@override
Widget build(BuildContext context) {
return DefaultTabController(
length: 2,
child: Scaffold(
appBar: AppBar(
title: Text('Test'),
bottom:
TabBar(tabs: ['tab1', 'tab2'].map((e) => Tab(text: e)).toList()),
),
body: Column(
children: [TextField()],
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.16.5, on macOS 14.1.1 23B81 darwin-arm64, locale
zh-Hans-CN)
• Flutter version 3.16.5 on channel stable at
/Users/edy/Documents/3.16.5/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 78666c8dc5 (9 months ago), 2023-12-19 16:14:14 -0800
• Engine revision 3f3e560236
• Dart version 3.2.3
• DevTools version 2.28.4
• Pub download mirror https://pub.flutter-io.cn
• Flutter download mirror https://storage.flutter-io.cn
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/edy/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• ANDROID_HOME = /Users/edy/Library/Android/sdk
• Java binary at: /Applications/Android
Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build
17.0.6+0-17.0.6b802.4-9586694)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.0.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15A507
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2022.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build
17.0.6+0-17.0.6b802.4-9586694)
[✓] VS Code (version 1.92.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.96.0
[✓] Connected device (3 available)
• iPhoneXs (mobile) • 00008020-000270543A83002E • ios • iOS 17.2.1 21C66
• macOS (desktop) • macos • darwin-arm64 • macOS 14.1.1 23B81
darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome
128.0.6613.138
[✓] Network resources
• All expected network resources are available.
```
</details>
| framework,f: material design,a: internationalization,a: error message,P2,team-design,triaged-design | low | Critical |
2,532,598,802 | godot | Unexpected resizing behavior between Godot Simulator and mobile hardware. | ### Tested versions
Reproducible in Godot 4.3 and 4.2 at least.
### System information
macOS 14.5 - 15.0 - Godot 4.3 and 4.2 stable - Mobile
### Issue description
The stretch mode is inconsistent between the Godot editor window and mobile devices. The window does exactly what you'd expect, whereas the mobile device pins the screen to the upper left, meaning your game is stuck on phones like the iPhone Pros due to the Dynamic Island (see image):

The stretch mode is set to Viewport and Aspect is set to Expand in order to fill the device screen without any black borders. In the first image, the game is as it appears in the editor. In the second image, the game is running in the Godot simulator (or whatever it's called). Stretch mode works perfectly when sizing the screen but the last image is on actual device hardware where the game screen is pinned to the upper left.
### Steps to reproduce
There is a text file explaining how to reproduce it in the MRP attached below. I also will paste it here:
Please note: This isn't so much of a bug as it is an unexpected behavior with the stretch settings.
If you run the scene and pull the window straight down, you'll get the expected behavior of the Stretch settings (mode: viewport, aspect: expand). Many games want expansion to happen from the center, so we can just add more of a scene behind the main game for different device sizes for anything that goes outside the camera.
If you run it on a device, you get the unexpected (but current, according to the docs) behavior of expanding from the top and adding space at the bottom.
In the document for ContentScaleAspect, it does state the content will expand from the bottom:
_ContentScaleAspect CONTENT_SCALE_ASPECT_EXPAND = 4
The content's aspect will be preserved. If the target size has different aspect from the base one, the content will stay in the top-left corner and add an extra visible area in the stretched space._
But that isn't the behavior you see in the game window of Godot nor is it really useful. In project settings the initial position type is set to "Center of Primary Screen" so the behavior I would expect is that the content expands from the center of the screen so that content can expand without black bars on the top and bottom.
### Minimal reproduction project (MRP)
[mrp-expansion-issue.zip](https://github.com/user-attachments/files/17037815/mrp-expansion-issue.zip)
| bug,topic:porting,needs testing | low | Critical |
2,532,618,078 | kubernetes | Can we add necessary labels when volume controller annotate PVC? | ### What would you like to be added?
Currently, volume controller will add `volume.kubernetes.io/storage-provisioner` annotation to the PVC to inform specific external-provisioner to provision the volume.
However, external-provisioner lists and watches PVC objects without any filter, which means it saves all PVC objects in memory, not only the ones that should be handled by it. This can cause an OOM when there are too many PVCs in the cluster, even though most of those PVCs are not handled by that provisioner.
Can we add a label when the volume controller adds the `volume.kubernetes.io/storage-provisioner` annotation to the PVC, and add a label filter in external-provisioner's informer so that it only lists and watches the PVCs that should be provisioned by it?
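As a plain-Python sketch of the effect (the label key here is hypothetical — today only the annotation exists, and this is not an existing client API), a label-selector-filtered informer would keep only the matching PVCs in memory:

```python
# Hypothetical label key mirroring the existing annotation.
PROVISIONER_LABEL = "volume.kubernetes.io/storage-provisioner"

def select(pvcs, provisioner):
    """What a label-selector-filtered list/watch would keep in memory."""
    return [p for p in pvcs if p["labels"].get(PROVISIONER_LABEL) == provisioner]

pvcs = [
    {"name": "pvc-a", "labels": {PROVISIONER_LABEL: "csi.example.com"}},
    {"name": "pvc-b", "labels": {PROVISIONER_LABEL: "other.example.com"}},
    {"name": "pvc-c", "labels": {}},  # not yet labeled by the volume controller
]

# Only one PVC is cached instead of all three.
print([p["name"] for p in select(pvcs, "csi.example.com")])  # → ['pvc-a']
```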
### Why is this needed?
external-provisioner will OOM when there are too many PVCs in the cluster, even when those PVCs should not be handled by it. We can use a label filter to avoid saving all the PVCs in memory. | sig/storage,kind/feature,lifecycle/stale,needs-triage | low | Major |
2,532,695,380 | transformers | [Whisper] TypeError: '<=' not supported between instances of 'NoneType' and 'float' | ### System Info
- `transformers` version: 4.44.2
- Platform: macOS-15.0-arm64-arm-64bit
- Python version: 3.12.6
- Huggingface_hub version: 0.24.7
- Safetensors version: 0.4.5
- Accelerate version: 0.34.2
- Accelerate config: not found
- PyTorch version (GPU?): 2.6.0.dev20240916 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
### Who can help?
@kamilakesbi @ArthurZucker @itazap
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi, I am attempting to transcribe several audio files; however, the process intermittently encounters an exception with some of the files. The transcription works successfully in approximately 90% of the cases, but certain files trigger this exception unexpectedly. I am attaching one of the audio files that generates this exception for your review. Thank you.
- I was able to replicate it on macOS on CPU and on Linux on CUDA.
1. Install Stable TS:
`pip install stable-ts`
2. Run the code:
```python
import stable_whisper
model = stable_whisper.load_hf_whisper('medium')
result = model.transcribe(
audio = 'radio_18596_1726554951_1726554981.mp3',
)
print(result.text)
```
Audio sample: https://filebin.net/hivqswoer298m65m
Then I receive the following exception:
```
Traceback (most recent call last):
File "/tests/test.py", line 4, in <module>
result = model.transcribe(
^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/stable_whisper/whisper_word_level/hf_whisper.py", line 236, in transcribe
return transcribe_any(
^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/stable_whisper/non_whisper.py", line 342, in transcribe_any
result = inference_func(**inference_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/stable_whisper/whisper_word_level/hf_whisper.py", line 116, in _inner_transcribe
output = self._pipe(audio, **pipe_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 284, in __call__
return super().__call__(inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/transformers/pipelines/base.py", line 1255, in __call__
return next(
^^^^^
File "/.venv/lib/python3.12/site-packages/transformers/pipelines/pt_utils.py", line 125, in __next__
processed = self.infer(item, **self.params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 587, in postprocess
text, optional = self.tokenizer._decode_asr(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/transformers/models/whisper/tokenization_whisper.py", line 835, in _decode_asr
return _decode_asr(
^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/transformers/models/whisper/tokenization_whisper.py", line 1086, in _decode_asr
resolved_tokens, resolved_token_timestamps = _find_longest_common_sequence(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib/python3.12/site-packages/transformers/models/whisper/tokenization_whisper.py", line 1193, in _find_longest_common_sequence
matches = sum(
^^^^
File "/.venv/lib/python3.12/site-packages/transformers/models/whisper/tokenization_whisper.py", line 1198, in <genexpr>
and left_token_timestamp_sequence[left_start + idx]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: '<=' not supported between instances of 'NoneType' and 'float'
```
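For reference, the failing comparison is reproducible in plain Python, independent of transformers — ordering between `None` and a `float` is undefined, which is what happens when a token timestamp is `None`:

```python
# A token timestamp that was never filled in behaves like this:
timestamp = None
try:
    timestamp <= 1.0
except TypeError as e:
    print(e)  # '<=' not supported between instances of 'NoneType' and 'float'
```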
### Expected behavior
To be able to transcribe the audio files without this exception. | Core: Tokenization,bug,Audio | medium | Critical |
2,532,748,518 | pytorch | A question about `getDLDevice` | ### 🐛 Describe the bug
I'm wondering why PyTorch does not directly return `DLDeviceType::kDLCPUPinned`, as in the code below, when the device is a pinned CPU device, since `kDLCPU` and `kDLCPUPinned` are distinct in DLPack and have their own macro values (kDLCPU=1, kDLCPUPinned=3). Is there something I missed, or is this a small bug?
In addition, `kDLCPUPinned` in the PyTorch Python code seems deprecated, as it has been renamed to `kDLCUDAHost`, which is not consistent with `aten/src/ATen/dlpack.h`.
<https://github.com/pytorch/pytorch/blob/a0207c8471989f13b0a30d7b532545793fc20cc1/aten/src/ATen/DLConvertor.cpp#L91-L125>
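A plain-Python sketch of what the question suggests (the helper name is made up; the constants are the values quoted above from `dlpack.h`):

```python
kDLCPU = 1        # plain CPU, per dlpack.h
kDLCPUPinned = 3  # pinned CPU; renamed kDLCUDAHost in newer DLPack versions

def cpu_dl_device_type(is_pinned: bool) -> int:
    """Hypothetical mapping: return the pinned code directly for pinned CPU memory."""
    return kDLCPUPinned if is_pinned else kDLCPU

print(cpu_dl_device_type(False))  # 1
print(cpu_dl_device_type(True))   # 3
```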
### Versions
main | triaged,module: dlpack | low | Critical |
2,532,804,072 | terminal | Cannot install/update Windows Terminal: error 0x80070005 | ### Windows Terminal version
1.21.2361.0
### Windows build number
10.0.19045.4894
### Other Software
_No response_
### Steps to reproduce
1. Download the `msixbundle` and double-click to install
Note that I _am_ able to install the latest version from the Microsoft Store (v1.20.11781.0; I've checked after uninstalling and trying to install again), but I _am not_ able to install the latest GitHub release (v1.21.2361.0). Some hasty Googling tells me this is probably an issue with permissions; how do I begin to debug this?
### Expected Behavior
I am able to install/update Windows Terminal successfully.
### Actual Behavior
I'm getting the following error message in the installer UI:
> App installation failed with error message: error 0x80070005: Opening the package from location Microsoft.WindowsTerminal_1.21.2361.0_8wekyb3d8bbwe.msixbundle failed. (0x80070005) | Issue-Bug,Product-Terminal,Needs-Tag-Fix,Culprit-Centennial | low | Critical |
2,532,811,937 | vscode | Messy underlining and icon clipping in Start section of Welcome page | Type: <b>Bug</b>
Only in Insiders.
The underlining issue only shows when `"accessibility.underlineLinks": true` but the clipping of the bottom of the `Clone Git Repository...` icon is independent of that setting.
Screenshot below has Stable on the left and Insiders on the right, both using a temporary profile.

VS Code version: Code - Insiders 1.94.0-insider (c4efe1dc9eec4914f3076b2d954fe4fe174a5820, 2024-09-17T05:03:51.290Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<!-- generated by issue reporter --> | bug,workbench-welcome,confirmation-pending | low | Critical |
2,532,828,453 | react-native | react-native version 0.75.3 Hermes logs not working properly: Not expanding the object | ### Description
In React Native version 0.75.3, Hermes engine logs are not working as expected. Specifically, when logging objects, the objects are not expanding fully, making it difficult to debug and inspect object properties.
### Steps to reproduce
### Steps to Reproduce:
1. Update the react-native version from 0.74.2 to 0.75.3.
2. Enable Hermes on both iOS and Android
3. Create a component that logs an object in the console, for example:
```javascript
const obj = { name: 'John', age: 30, details: { city: 'New York' } };
console.log(obj);
```
4. Run the app on an Android device or emulator (`npx react-native run-android`).
5. Open the debug console to view the Hermes logs.
### Expected behavior:
- The logged object should expand, allowing inspection of its nested properties.
### Actual behavior:
- The object is not expanding in the console, only showing a collapsed version, preventing inspection of deeper object properties.
### React Native Version
0.75.3
### Output of `npx react-native info`
```text
System:
OS: macOS 15.0
CPU: (8) arm64 Apple M1
Memory: 126.66 MB / 8.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 22.3.0
path: ~/.nvm/versions/node/v22.3.0/bin/node
Yarn:
version: 3.6.4
path: /opt/homebrew/bin/yarn
npm:
version: 10.8.3
path: ~/.nvm/versions/node/v22.3.0/bin/npm
Watchman:
version: 2024.04.29.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.14.0
path: /opt/homebrew/lib/ruby/gems/3.3.0/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.0
- iOS 18.0
- macOS 15.0
- tvOS 18.0
- visionOS 2.0
- watchOS 11.0
Android SDK:
Android NDK: 22.1.7171670
IDEs:
Android Studio: 2024.1 AI-241.18034.62.2412.12266719
Xcode:
version: 16.0/16A242d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.12
path: /usr/bin/javac
Ruby:
version: 3.3.3
path: /opt/homebrew/opt/ruby/bin/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.75.3
wanted: 0.75.3
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: false
```
### Screenshots and Videos
https://github.com/user-attachments/assets/ff412bed-0cb8-4b70-ab1a-762ea5f07dd5
| Debugger,Needs: Attention | low | Critical |
2,532,851,479 | opencv | Quantized model using opencv_zoo/quantize throws error in block_repeat when src.dims() = 1 and axis = 1 | ### System Information
Python 3.10.10
General configuration for 5.x =====================================
Platform:
Host: Linux (Ubuntu 22.04) x86_64
CMake: 3.22.1
Configuration: Debug Release
### Detailed description
cv.dnn.readNetFromONNX() fails for quantized dexined onnx edge detection in opencv 5.0 with error:
```
net = cv.dnn.readNetFromONNX(args.model)
cv2.error: OpenCV(5.0.0-pre) opencv/modules/dnn/src/onnx/onnx_importer.cpp:1070: error: (-2:Unspecified error) in function 'handleNode'
> Node [DequantizeLinear@ai.onnx]:(onnx_node!up_block_6.features.6.weight_quantized_node) parse error: OpenCV(5.0.0-pre) /media/pincambv/hugedrive1/opencv5_gursimar/opencv/modules/dnn/src/int8layers/quantization_utils.cpp:45: error: (-2:Unspecified error) in function 'void cv::dnn::block_repeat(cv::InputArray, const MatShape&, int, int, cv::OutputArray)'
> > Axis out of range:
> > 'axis >= 0 && axis < src.dims()'
> > where
> > 'axis' is 1
```
The original model can be downloaded from:
[dexined.onnx](https://drive.google.com/file/d/1u_qXqXqaIP_SqdGaq4CbZyjzkZb02XTs/view?usp=sharing)
Quantization command:
`python --input_model=dexined.onnx --block_size=16 --output_model=dexined_compressed.onnx`
Quantized model link: [quantized_dexined.onnx](https://github.com/gursimarsingh/opencv_zoo/raw/dexined_model/models/edge_detection_dexined/dexined.onnx)
### Steps to reproduce
Download the quantized model from: [quantized_dexined.onnx](https://github.com/gursimarsingh/opencv_zoo/raw/dexined_model/models/edge_detection_dexined/dexined.onnx)
```cpp
net = readNetFromONNX("quantized_dexined.onnx")
```
```python
import cv2
net = cv2.dnn.readNetFromONNX("quantized_dexined.onnx")
```
### Issue submission checklist
- [X] I report the issue, it's not a question
- [x] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug | low | Critical |
2,532,874,217 | storybook | [Bug]: Storybook injecting packageManager pnpm 8.1.0 when you run `npm run storybook` | ### Describe the bug
When you run `npm run storybook` on a vite project, it is injecting this into the package.json every single time:
```
"packageManager": "pnpm@8.1.0+sha1.09ebf306075e96037432071992bb00340c263d85"
```
1. This breaks the Chromatic Github Action.
2. If I want to use pnpm, I will. Don't force it on me.
3. [Version 8.1.0 is bugged with Node 20](https://github.com/pnpm/pnpm/issues/6424). You have to use 8.3.1 minimum or pnpm can't install libraries.
### Reproduction link
https://storybook.js.org/
### Reproduction steps
As stated in the description. Make a vite project and `npm run storybook` and it will inject that line into package.json every time.
Delete the line from package.json, `npm run storybook`, and it's back again.
### System
```bash
Storybook Environment Info:
System:
OS: macOS 14.6.1
CPU: (12) arm64 Apple M2 Pro
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.17.0 - ~/.nvm/versions/node/v20.17.0/bin/node
npm: 10.8.3 - ~/.nvm/versions/node/v20.17.0/bin/npm <----- active
pnpm: 8.1.0 - ~/.nvm/versions/node/v20.17.0/bin/pnpm
Browsers:
Chrome: 128.0.6613.139
Safari: 17.6
npmPackages:
@storybook/addon-essentials: 8.3.1 => 8.3.1
@storybook/addon-interactions: 8.3.1 => 8.3.1
@storybook/addon-links: 8.3.1 => 8.3.1
@storybook/blocks: 8.3.1 => 8.3.1
@storybook/manager-api: 8.3.1 => 8.3.1
@storybook/preview-api: 8.3.1 => 8.3.1
@storybook/react: 8.3.1 => 8.3.1
@storybook/react-vite: 8.3.1 => 8.3.1
@storybook/test: 8.3.1 => 8.3.1
@storybook/theming: 8.3.1 => 8.3.1
@storybook/types: 8.3.1 => 8.3.1
chromatic: 11.10.2 => 11.10.2
eslint-plugin-storybook: 0.8.0 => 0.8.0
storybook: 8.3.1 => 8.3.1
storybook-dark-mode: 4.0.2 => 4.0.2
storybook-react-i18next: 3.1.7 => 3.1.7
```
### Additional context
_No response_ | bug,core | low | Critical |
2,532,886,551 | PowerToys | Volume locker | ### Description of the new feature / enhancement
A feature that would allow you to set a minimum below which your volume can't go. A maximum above which your volume can't go. Or lock your volume in place entirely, and don't allow it to change by any means.
### Scenario when this would be used?
It would be great for audio work, so you can ensure that your audio outputs are not changed after a driver update tries to reset volume levels to their defaults.
It would be great if you have a volume knob in your setup that you keep moving by mistake, so you don't make yourself deaf with sudden loudness.
It would be great if you listen on low volume and don't want to mute it by mistake.
It would be great if you work with software that changes your volume each time you open it.
### Supporting information
Having that lock feature per device would be great. But even having it for the global volume would be an ear saver sometimes.
I've tried a couple of programs that offer similar functions. But not one had all of them, and not one worked exactly like this.
If you know of a program that does this already, please let me know. I would be very grateful. | Needs-Triage | low | Minor |
2,532,887,154 | kubernetes | Endpoints controller uses stale endpoints in reconciling, the endpoint Subsets will be wrong and never restores correctly | ### What happened?
I have a Service with two pods. These pods are ready, and the Endpoints subset contains them.
1. These pods became not ready at 12:35:39.149052Z.
2. The endpoints controller updated the Endpoints object and moved these pods to `notReadyAddresses`.
3. One pod became ready at 12:35:39.763051Z.
4. The endpoints controller tried to update the Endpoints object, but it failed with the error: the object has been modified; please apply your changes to the latest version and try again.
5. The other pod became ready at 12:35:39.786936Z.
6. The endpoints controller no longer reconciles this Endpoints object; these pods all stay in `notReadyAddresses`.
The endpoints controller compares the Endpoints subset (from its informer cache) to the pod status. At step 6, the Endpoints informer watch was delayed, so the controller used a stale Endpoints object for the comparison. From then on, the Endpoints subset stays wrong.
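The race can be sketched with plain data (illustrative names only; no Kubernetes APIs involved):

```python
# Plain-data sketch: the controller diffs desired state (derived from pod status)
# against the Endpoints object in its informer cache. With a stale cache the diff
# looks empty, so the update is skipped and the real object stays wrong.
actual_endpoints = {"ready": [], "notReady": ["pod-a", "pod-b"]}   # real object after step 2
cached_endpoints = {"ready": ["pod-a", "pod-b"], "notReady": []}   # stale informer cache
pod_ready = {"pod-a": True, "pod-b": True}                         # both pods ready again

desired = {
    "ready": sorted(p for p, ok in pod_ready.items() if ok),
    "notReady": sorted(p for p, ok in pod_ready.items() if not ok),
}

needs_update = desired != cached_endpoints
print(needs_update)                  # False: the controller skips the update...
print(desired != actual_endpoints)   # True: ...even though the real object is wrong
```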
### What did you expect to happen?
The Endpoints subset should be reconciled to the correct status.
### How can we reproduce it (as minimally and precisely as possible)?
It is a little hard to reproduce; we need to mock the Endpoints watch delay.
### Anything else we need to know?
The controller's pod informer resync does not help in this case: pod sync events will be ignored in `podEndpointsChanged`.
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
Client Version: v1.31.0-aliyun.1
Kustomize Version: v5.4.2
Server Version: v1.31.0-aliyun.1
```
</details>
### Cloud provider
<details>
Alibaba cloud
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/apps,lifecycle/rotten,needs-triage | low | Critical |
2,532,893,306 | godot | In Godot 4.4 dev 2, typed dictionaries not compatible with JSON.parse_string | ### Tested versions
- Reproducible in Godot 4.4 dev 2
### System information
Windows 10, Godot 4.4 dev 2, Forward+
### Issue description
(I'm sorry if there is an open issue about this case and I didn't find it before opening this issue)
When assigning the result of JSON.parse_string to a typed dictionary that is not declared as `Dictionary[Variant, Variant]` (an example from my use case: `Dictionary[String, Dictionary]`), I get the error:
> Invalid assignment of property or key with value of type 'Dictionary'
### Steps to reproduce
- Add a new script to a node
- Add the following script:
```
extends Node
var d : Dictionary[String, Variant] = {}
var s : String = '{"A": "Hi","B": "Hey"}'
func _ready() -> void:
	d = JSON.parse_string(s)
```
- Run the script (I got an error)
- Modify the dictionary to be `Dictionary[Variant, Variant]`
- Run the code again (Worked for me)
There is an MRP that has this script, with an extra line (commented out by default) that loads the data from a .json file.
### Minimal reproduction project (MRP)
[MRP.zip](https://github.com/user-attachments/files/17039753/MRP.zip) | discussion,topic:core,documentation | low | Critical |
2,532,910,601 | flutter | [BUILD ERROR]:Error waiting for a debug connection: The log reader stopped unexpectedly, or never started. | ### Steps to reproduce
On Ubuntu 24.04.1 LTS:
1. flutter create my_app
2. cd my_app
3. flutter pub get
4. flutter run -d linux --verbose
### Expected results
Since the application compiled successfully, a window should open. This happens with the starter template as well.
### Actual results
```
[ +1 ms] ✓ Built build/linux/x64/debug/bundle/iprepdigiclass
[ +9 ms] Error waiting for a debug connection: The log reader stopped unexpectedly, or never started.
[ ] Error launching application on Linux.
[ +1 ms] "flutter run" took 19,685ms.
[ +2 ms]
#0 throwToolExit (package:flutter_tools/src/base/common.dart:10:3)
#1 RunCommand.runCommand (package:flutter_tools/src/commands/run.dart:874:9)
<asynchronous suspension>
#2 FlutterCommand.run.<anonymous closure> (package:flutter_tools/src/runner/flutter_command.dart:1408:27)
<asynchronous suspension>
#3 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#4 CommandRunner.runCommand (package:args/command_runner.dart:212:13)
<asynchronous suspension>
#5 FlutterCommandRunner.runCommand.<anonymous closure> (package:flutter_tools/src/runner/flutter_command_runner.dart:420:9)
<asynchronous suspension>
#6 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#7 FlutterCommandRunner.runCommand (package:flutter_tools/src/runner/flutter_command_runner.dart:364:5)
<asynchronous suspension>
#8 run.<anonymous closure>.<anonymous closure> (package:flutter_tools/runner.dart:130:9)
<asynchronous suspension>
#9 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#10 main (package:flutter_tools/executable.dart:93:3)
<asynchronous suspension>
[ +183 ms] ensureAnalyticsSent: 182ms
[ ] Running 2 shutdown hooks
[ +6 ms] Shutdown hooks complete
[ +2 ms] exiting with code 1
```
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter_bloc/flutter_bloc.dart';
import 'package:iprepdigiclass/bloc/global_bloc_observer.dart';
import 'package:iprepdigiclass/config/env.dart';
import 'package:iprepdigiclass/index.dart';
import 'package:iprepdigiclass/services/injection/getit.dart' as di;
import 'package:iprepdigiclass/services/services.dart';
Future<void> main() async {
WidgetsFlutterBinding.ensureInitialized();
await di.init(); // dependency initilization
await Environment.loadEnvironment(); // load Application Environment
await Services.loadServices(); // load Application Services
Bloc.observer = GlobalBlocObserver(); // Global Bloc Observer
runApp(const App());
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Screencast from 2024-09-18 12-33-23.webm](https://github.com/user-attachments/assets/c5051ce3-52c8-4fd8-a666-2bcce641db92)
</details>
### Logs
<details open><summary>Logs</summary>
```console
varun@varun-idreamhp:/media/varun/workspace/code/iprepdigiclass$ flutter run -d linux --verbose
[ +11 ms] Unable to locate an Android SDK.
[ +5 ms] executing: uname -m
[ +1 ms] Exit code 0 from: uname -m
[ ] x86_64
[ +50 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[ ] Artifact Instance of 'LegacyCanvasKitRemover' is not required, skipping update.
[ ] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[ +31 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[ ] Artifact Instance of 'LegacyCanvasKitRemover' is not required, skipping update.
[ ] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[ +4 ms] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[ +43 ms] Skipping pub get: version match.
[ +37 ms] Found plugin connectivity_plus at /home/varun/.pub-cache/hosted/pub.dev/connectivity_plus-6.0.5/
[ +6 ms] Found plugin cryptography_flutter at /home/varun/.pub-cache/hosted/pub.dev/cryptography_flutter-2.3.2/
[ +6 ms] Found plugin file_picker at /home/varun/.pub-cache/hosted/pub.dev/file_picker-8.1.2/
[ +3 ms] Found plugin flutter_plugin_android_lifecycle at /home/varun/.pub-cache/hosted/pub.dev/flutter_plugin_android_lifecycle-2.0.22/
[ +13 ms] Found plugin media_kit_libs_android_audio at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_android_audio-1.3.6/
[ +1 ms] Found plugin media_kit_libs_android_video at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_android_video-1.3.6/
[ +1 ms] Found plugin media_kit_libs_ios_audio at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_ios_audio-1.1.4/
[ ] Found plugin media_kit_libs_ios_video at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_ios_video-1.1.4/
[ ] Found plugin media_kit_libs_linux at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_linux-1.1.3/
[ ] Found plugin media_kit_libs_macos_audio at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_macos_audio-1.1.4/
[ ] Found plugin media_kit_libs_macos_video at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_macos_video-1.1.4/
[ ] Found plugin media_kit_libs_windows_audio at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_windows_audio-1.0.9/
[ ] Found plugin media_kit_libs_windows_video at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_windows_video-1.0.10/
[ +2 ms] Found plugin media_kit_native_event_loop at /home/varun/.pub-cache/hosted/pub.dev/media_kit_native_event_loop-1.0.9/
[ ] Found plugin media_kit_video at /home/varun/.pub-cache/hosted/pub.dev/media_kit_video-1.2.5/
[ +2 ms] Found plugin package_info_plus at /home/varun/.pub-cache/hosted/pub.dev/package_info_plus-8.0.2/
[ +1 ms] Found plugin path_provider_linux at /home/varun/.pub-cache/hosted/pub.dev/path_provider_linux-2.2.1/
[ ] Found plugin path_provider_windows at /home/varun/.pub-cache/hosted/pub.dev/path_provider_windows-2.3.0/
[ +4 ms] Found plugin screen_brightness at /home/varun/.pub-cache/hosted/pub.dev/screen_brightness-0.2.2+1/
[ ] Found plugin screen_brightness_android at /home/varun/.pub-cache/hosted/pub.dev/screen_brightness_android-0.1.0+2/
[ ] Found plugin screen_brightness_ios at /home/varun/.pub-cache/hosted/pub.dev/screen_brightness_ios-0.1.0/
[ +1 ms] Found plugin screen_brightness_macos at /home/varun/.pub-cache/hosted/pub.dev/screen_brightness_macos-0.1.0+1/
[ ] Found plugin screen_brightness_windows at /home/varun/.pub-cache/hosted/pub.dev/screen_brightness_windows-0.1.3/
[ ] Found plugin screen_retriever at /home/varun/.pub-cache/hosted/pub.dev/screen_retriever-0.1.9/
[ ] Found plugin shared_preferences at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences-2.3.2/
[ ] Found plugin shared_preferences_android at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences_android-2.3.2/
[ ] Found plugin shared_preferences_foundation at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences_foundation-2.5.2/
[ ] Found plugin shared_preferences_linux at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences_linux-2.4.1/
[ ] Found plugin shared_preferences_web at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences_web-2.4.2/
[ ] Found plugin shared_preferences_windows at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences_windows-2.4.1/
[ +12 ms] Found plugin volume_controller at /home/varun/.pub-cache/hosted/pub.dev/volume_controller-2.0.8/
[ +1 ms] Found plugin wakelock_plus at /home/varun/.pub-cache/hosted/pub.dev/wakelock_plus-1.2.8/
[ +11 ms] Found plugin window_manager at /home/varun/.pub-cache/hosted/pub.dev/window_manager-0.4.2/
[ +43 ms] Found plugin connectivity_plus at /home/varun/.pub-cache/hosted/pub.dev/connectivity_plus-6.0.5/
[ +2 ms] Found plugin cryptography_flutter at /home/varun/.pub-cache/hosted/pub.dev/cryptography_flutter-2.3.2/
[ +3 ms] Found plugin file_picker at /home/varun/.pub-cache/hosted/pub.dev/file_picker-8.1.2/
[ +1 ms] Found plugin flutter_plugin_android_lifecycle at /home/varun/.pub-cache/hosted/pub.dev/flutter_plugin_android_lifecycle-2.0.22/
[ +9 ms] Found plugin media_kit_libs_android_audio at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_android_audio-1.3.6/
[ ] Found plugin media_kit_libs_android_video at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_android_video-1.3.6/
[ +1 ms] Found plugin media_kit_libs_ios_audio at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_ios_audio-1.1.4/
[ +1 ms] Found plugin media_kit_libs_ios_video at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_ios_video-1.1.4/
[ ] Found plugin media_kit_libs_linux at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_linux-1.1.3/
[ +1 ms] Found plugin media_kit_libs_macos_audio at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_macos_audio-1.1.4/
[ ] Found plugin media_kit_libs_macos_video at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_macos_video-1.1.4/
[ ] Found plugin media_kit_libs_windows_audio at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_windows_audio-1.0.9/
[ ] Found plugin media_kit_libs_windows_video at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_windows_video-1.0.10/
[ ] Found plugin media_kit_native_event_loop at /home/varun/.pub-cache/hosted/pub.dev/media_kit_native_event_loop-1.0.9/
[ ] Found plugin media_kit_video at /home/varun/.pub-cache/hosted/pub.dev/media_kit_video-1.2.5/
[ +2 ms] Found plugin package_info_plus at /home/varun/.pub-cache/hosted/pub.dev/package_info_plus-8.0.2/
[ +1 ms] Found plugin path_provider_linux at /home/varun/.pub-cache/hosted/pub.dev/path_provider_linux-2.2.1/
[ +1 ms] Found plugin path_provider_windows at /home/varun/.pub-cache/hosted/pub.dev/path_provider_windows-2.3.0/
[ +3 ms] Found plugin screen_brightness at /home/varun/.pub-cache/hosted/pub.dev/screen_brightness-0.2.2+1/
[ ] Found plugin screen_brightness_android at /home/varun/.pub-cache/hosted/pub.dev/screen_brightness_android-0.1.0+2/
[ ] Found plugin screen_brightness_ios at /home/varun/.pub-cache/hosted/pub.dev/screen_brightness_ios-0.1.0/
[ ] Found plugin screen_brightness_macos at /home/varun/.pub-cache/hosted/pub.dev/screen_brightness_macos-0.1.0+1/
[ +1 ms] Found plugin screen_brightness_windows at /home/varun/.pub-cache/hosted/pub.dev/screen_brightness_windows-0.1.3/
[ ] Found plugin screen_retriever at /home/varun/.pub-cache/hosted/pub.dev/screen_retriever-0.1.9/
[ ] Found plugin shared_preferences at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences-2.3.2/
[ ] Found plugin shared_preferences_android at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences_android-2.3.2/
[ ] Found plugin shared_preferences_foundation at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences_foundation-2.5.2/
[ ] Found plugin shared_preferences_linux at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences_linux-2.4.1/
[ ] Found plugin shared_preferences_web at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences_web-2.4.2/
[ ] Found plugin shared_preferences_windows at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences_windows-2.4.1/
[ +12 ms] Found plugin volume_controller at /home/varun/.pub-cache/hosted/pub.dev/volume_controller-2.0.8/
[ ] Found plugin wakelock_plus at /home/varun/.pub-cache/hosted/pub.dev/wakelock_plus-1.2.8/
[ +3 ms] Found plugin window_manager at /home/varun/.pub-cache/hosted/pub.dev/window_manager-0.4.2/
[ +30 ms] Found plugin connectivity_plus at /home/varun/.pub-cache/hosted/pub.dev/connectivity_plus-6.0.5/
[ +1 ms] Found plugin cryptography_flutter at /home/varun/.pub-cache/hosted/pub.dev/cryptography_flutter-2.3.2/
[ +2 ms] Found plugin file_picker at /home/varun/.pub-cache/hosted/pub.dev/file_picker-8.1.2/
[ +1 ms] Found plugin flutter_plugin_android_lifecycle at /home/varun/.pub-cache/hosted/pub.dev/flutter_plugin_android_lifecycle-2.0.22/
[ +9 ms] Found plugin media_kit_libs_android_audio at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_android_audio-1.3.6/
[ ] Found plugin media_kit_libs_android_video at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_android_video-1.3.6/
[ ] Found plugin media_kit_libs_ios_audio at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_ios_audio-1.1.4/
[ ] Found plugin media_kit_libs_ios_video at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_ios_video-1.1.4/
[ ] Found plugin media_kit_libs_linux at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_linux-1.1.3/
[ ] Found plugin media_kit_libs_macos_audio at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_macos_audio-1.1.4/
[ ] Found plugin media_kit_libs_macos_video at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_macos_video-1.1.4/
[ ] Found plugin media_kit_libs_windows_audio at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_windows_audio-1.0.9/
[ ] Found plugin media_kit_libs_windows_video at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_windows_video-1.0.10/
[ ] Found plugin media_kit_native_event_loop at /home/varun/.pub-cache/hosted/pub.dev/media_kit_native_event_loop-1.0.9/
[ ] Found plugin media_kit_video at /home/varun/.pub-cache/hosted/pub.dev/media_kit_video-1.2.5/
[ +1 ms] Found plugin package_info_plus at /home/varun/.pub-cache/hosted/pub.dev/package_info_plus-8.0.2/
[ ] Found plugin path_provider_linux at /home/varun/.pub-cache/hosted/pub.dev/path_provider_linux-2.2.1/
[ ] Found plugin path_provider_windows at /home/varun/.pub-cache/hosted/pub.dev/path_provider_windows-2.3.0/
[ +2 ms] Found plugin screen_brightness at /home/varun/.pub-cache/hosted/pub.dev/screen_brightness-0.2.2+1/
[ ] Found plugin screen_brightness_android at /home/varun/.pub-cache/hosted/pub.dev/screen_brightness_android-0.1.0+2/
[ ] Found plugin screen_brightness_ios at /home/varun/.pub-cache/hosted/pub.dev/screen_brightness_ios-0.1.0/
[ ] Found plugin screen_brightness_macos at /home/varun/.pub-cache/hosted/pub.dev/screen_brightness_macos-0.1.0+1/
[ ] Found plugin screen_brightness_windows at /home/varun/.pub-cache/hosted/pub.dev/screen_brightness_windows-0.1.3/
[ ] Found plugin screen_retriever at /home/varun/.pub-cache/hosted/pub.dev/screen_retriever-0.1.9/
[ ] Found plugin shared_preferences at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences-2.3.2/
[ ] Found plugin shared_preferences_android at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences_android-2.3.2/
[ ] Found plugin shared_preferences_foundation at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences_foundation-2.5.2/
[ ] Found plugin shared_preferences_linux at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences_linux-2.4.1/
[ ] Found plugin shared_preferences_web at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences_web-2.4.2/
[ ] Found plugin shared_preferences_windows at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences_windows-2.4.1/
[ +4 ms] Found plugin volume_controller at /home/varun/.pub-cache/hosted/pub.dev/volume_controller-2.0.8/
[ ] Found plugin wakelock_plus at /home/varun/.pub-cache/hosted/pub.dev/wakelock_plus-1.2.8/
[ +1 ms] Found plugin window_manager at /home/varun/.pub-cache/hosted/pub.dev/window_manager-0.4.2/
[ +5 ms] Generating /media/varun/workspace/code/iprepdigiclass/android/app/src/main/java/io/flutter/plugins/GeneratedPluginRegistrant.java
[ +82 ms] No packages with native assets. Skipping native assets compilation.
[ +1 ms] Initializing file store
[ +3 ms] Skipping target: gen_localizations
[ +1 ms] gen_dart_plugin_registrant: Starting due to {InvalidatedReasonKind.inputChanged: The following inputs have updated contents:
/media/varun/workspace/code/iprepdigiclass/.dart_tool/package_config_subset,/media/varun/workspace/code/iprepdigiclass/.dart_tool/flutter_build/dart_plugin_registrant.dart}
[ +16 ms] Found plugin connectivity_plus at /home/varun/.pub-cache/hosted/pub.dev/connectivity_plus-6.0.5/
[ +2 ms] Found plugin cryptography_flutter at /home/varun/.pub-cache/hosted/pub.dev/cryptography_flutter-2.3.2/
[ +2 ms] Found plugin file_picker at /home/varun/.pub-cache/hosted/pub.dev/file_picker-8.1.2/
[ +1 ms] Found plugin flutter_plugin_android_lifecycle at /home/varun/.pub-cache/hosted/pub.dev/flutter_plugin_android_lifecycle-2.0.22/
[ +6 ms] Found plugin media_kit_libs_android_audio at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_android_audio-1.3.6/
[ ] Found plugin media_kit_libs_android_video at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_android_video-1.3.6/
[ ] Found plugin media_kit_libs_ios_audio at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_ios_audio-1.1.4/
[ ] Found plugin media_kit_libs_ios_video at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_ios_video-1.1.4/
[ ] Found plugin media_kit_libs_linux at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_linux-1.1.3/
[ ] Found plugin media_kit_libs_macos_audio at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_macos_audio-1.1.4/
[ ] Found plugin media_kit_libs_macos_video at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_macos_video-1.1.4/
[ ] Found plugin media_kit_libs_windows_audio at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_windows_audio-1.0.9/
[ ] Found plugin media_kit_libs_windows_video at /home/varun/.pub-cache/hosted/pub.dev/media_kit_libs_windows_video-1.0.10/
[ ] Found plugin media_kit_native_event_loop at /home/varun/.pub-cache/hosted/pub.dev/media_kit_native_event_loop-1.0.9/
[ ] Found plugin media_kit_video at /home/varun/.pub-cache/hosted/pub.dev/media_kit_video-1.2.5/
[ +1 ms] Found plugin package_info_plus at /home/varun/.pub-cache/hosted/pub.dev/package_info_plus-8.0.2/
[ ] Found plugin path_provider_linux at /home/varun/.pub-cache/hosted/pub.dev/path_provider_linux-2.2.1/
[ ] Found plugin path_provider_windows at /home/varun/.pub-cache/hosted/pub.dev/path_provider_windows-2.3.0/
[ +1 ms] Found plugin screen_brightness at /home/varun/.pub-cache/hosted/pub.dev/screen_brightness-0.2.2+1/
[ ] Found plugin screen_brightness_android at /home/varun/.pub-cache/hosted/pub.dev/screen_brightness_android-0.1.0+2/
[ ] Found plugin screen_brightness_ios at /home/varun/.pub-cache/hosted/pub.dev/screen_brightness_ios-0.1.0/
[ ] Found plugin screen_brightness_macos at /home/varun/.pub-cache/hosted/pub.dev/screen_brightness_macos-0.1.0+1/
[ ] Found plugin screen_brightness_windows at /home/varun/.pub-cache/hosted/pub.dev/screen_brightness_windows-0.1.3/
[ ] Found plugin screen_retriever at /home/varun/.pub-cache/hosted/pub.dev/screen_retriever-0.1.9/
[ ] Found plugin shared_preferences at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences-2.3.2/
[ ] Found plugin shared_preferences_android at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences_android-2.3.2/
[ ] Found plugin shared_preferences_foundation at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences_foundation-2.5.2/
[ ] Found plugin shared_preferences_linux at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences_linux-2.4.1/
[ ] Found plugin shared_preferences_web at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences_web-2.4.2/
[ ] Found plugin shared_preferences_windows at /home/varun/.pub-cache/hosted/pub.dev/shared_preferences_windows-2.4.1/
[ +13 ms] Found plugin volume_controller at /home/varun/.pub-cache/hosted/pub.dev/volume_controller-2.0.8/
[ ] Found plugin wakelock_plus at /home/varun/.pub-cache/hosted/pub.dev/wakelock_plus-1.2.8/
[ +22 ms] Found plugin window_manager at /home/varun/.pub-cache/hosted/pub.dev/window_manager-0.4.2/
[ +9 ms] gen_dart_plugin_registrant: Complete
[ ] Skipping target: _composite
[ ] complete
[ +2 ms] Launching lib/main.dart on Linux in debug mode...
[ +2 ms] /home/varun/Documents/flutter/bin/cache/dart-sdk/bin/dartaotruntime --disable-dart-dev
/home/varun/Documents/flutter/bin/cache/dart-sdk/bin/snapshots/frontend_server_aot.dart.snapshot --sdk-root
/home/varun/Documents/flutter/bin/cache/artifacts/engine/common/flutter_patched_sdk/ --incremental --target=flutter --experimental-emit-debug-metadata --output-dill
/tmp/flutter_tools.JTCRKE/flutter_tool.QFEBWE/app.dill --packages /media/varun/workspace/code/iprepdigiclass/.dart_tool/package_config.json -Ddart.vm.profile=false -Ddart.vm.product=false
--enable-asserts --track-widget-creation --filesystem-scheme org-dartlang-root --initialize-from-dill build/cache.dill.track.dill --source
file:///media/varun/workspace/code/iprepdigiclass/.dart_tool/flutter_build/dart_plugin_registrant.dart --source package:flutter/src/dart_plugin_registrant.dart
-Dflutter.dart_plugin_registrant=file:///media/varun/workspace/code/iprepdigiclass/.dart_tool/flutter_build/dart_plugin_registrant.dart --verbosity=error
--enable-experiment=alternative-invalidation-strategy
[ +11 ms] Building Linux application...
[ +4 ms] <- compile package:iprepdigiclass/main.dart
[ +1 ms] executing: [build/linux/x64/debug/] cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug -DFLUTTER_TARGET_PLATFORM=linux-x64 /media/varun/workspace/code/iprepdigiclass/linux
[ +35 ms] -- Configuring done (0.0s)
[ +22 ms] -- Generating done (0.0s)
[ +229 ms] -- Build files have been written to: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug
[ +7 ms] executing: ninja -C build/linux/x64/debug install
[ +10 ms] ninja: Entering directory `build/linux/x64/debug'
[+2885 ms] [1/17] Generating /media/varun/workspace/code/iprepdigiclass/linux/flutter/ephemeral/libflutter_linux_gtk.so,
/media/varun/workspace/code/iprepdigiclass/linux/flutter/ephemeral/flutter_linux/fl_basic_message_channel.h,
/media/varun/workspace/code/iprepdigiclass/linux/flutter/ephemeral/flutter_linux/fl_binary_codec.h,
/media/varun/workspace/code/iprepdigiclass/linux/flutter/ephemeral/flutter_linux/fl_binary_messenger.h,
/media/varun/workspace/code/iprepdigiclass/linux/flutter/ephemeral/flutter_linux/fl_dart_project.h,
/media/varun/workspace/code/iprepdigiclass/linux/flutter/ephemeral/flutter_linux/fl_engine.h,
/media/varun/workspace/code/iprepdigiclass/linux/flutter/ephemeral/flutter_linux/fl_json_message_codec.h,
/media/varun/workspace/code/iprepdigiclass/linux/flutter/ephemeral/flutter_linux/fl_json_method_codec.h,
/media/varun/workspace/code/iprepdigiclass/linux/flutter/ephemeral/flutter_linux/fl_message_codec.h,
/media/varun/workspace/code/iprepdigiclass/linux/flutter/ephemeral/flutter_linux/fl_method_call.h,
/media/varun/workspace/code/iprepdigiclass/linux/flutter/ephemeral/flutter_linux/fl_method_channel.h,
/media/varun/workspace/code/iprepdigiclass/linux/flutter/ephemeral/flutter_linux/fl_method_codec.h,
/media/varun/workspace/code/iprepdigiclass/linux/flutter/ephemeral/flutter_linux/fl_method_response.h,
/media/varun/workspace/code/iprepdigiclass/linux/flutter/ephemeral/flutter_linux/fl_plugin_registrar.h,
/media/varun/workspace/code/iprepdigiclass/linux/flutter/ephemeral/flutter_linux/fl_plugin_registry.h,
/media/varun/workspace/code/iprepdigiclass/linux/flutter/ephemeral/flutter_linux/fl_standard_message_codec.h,
/media/varun/workspace/code/iprepdigiclass/linux/flutter/ephemeral/flutter_linux/fl_standard_method_codec.h,
/media/varun/workspace/code/iprepdigiclass/linux/flutter/ephemeral/flutter_linux/fl_string_codec.h,
/media/varun/workspace/code/iprepdigiclass/linux/flutter/ephemeral/flutter_linux/fl_value.h, /media/varun/workspace/code/iprepdigiclass/linux/flutter/ephemeral/flutter_linux/fl_view.h,
/media/varun/workspace/code/iprepdigiclass/linux/flutter/ephemeral/flutter_linux/flutter_linux.h, _phony_
[ +2 ms] [ +10 ms] Unable to locate an Android SDK.
[ ] [ +5 ms] executing: uname -m
[ ] [ +1 ms] Exit code 0 from: uname -m
[ ] [ ] x86_64
[ ] [ +34 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[ ] [ ] Artifact Instance of 'LegacyCanvasKitRemover' is not required, skipping update.
[ ] [ +1 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[ ] [ +45 ms] Artifact Instance of 'MaterialFonts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'GradleWrapper' is not required, skipping update.
[ ] [ ] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[ ] [ ] Artifact Instance of 'LegacyCanvasKitRemover' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FlutterSdk' is not required, skipping update.
[ ] [ ] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FontSubsetArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'PubDependencies' is not required, skipping update.
[ ] [ +25 ms] Initializing file store
[ ] [ +8 ms] Done initializing file store
[ ] [ +34 ms] Skipping target: gen_localizations
[ ] [ +8 ms] Skipping target: gen_dart_plugin_registrant
[ ] [ +341 ms] Skipping target: unpack_linux
[ ] [ +370 ms] Skipping target: kernel_snapshot_program
[ ] [ +2 ms] Skipping target: native_assets
[ ] [ ] Skipping target: kernel_snapshot_native_assets
[ ] [ ] Skipping target: kernel_snapshot
[ ] [ ] invalidated build due to missing files: /media/varun/workspace/code/iprepdigiclass/DOES_NOT_EXIST_RERUN_FOR_WILDCARD717774378
[ ] [ +489 ms] debug_bundle_linux-x64_assets: Starting due to {InvalidatedReasonKind.inputMissing: The following inputs were missing:
/media/varun/workspace/code/iprepdigiclass/DOES_NOT_EXIST_RERUN_FOR_WILDCARD717774378}
[ +2 ms] [ +488 ms] Manifest contained wildcard assets. Inserting missing file into build graph to force rerun. for more information see #56466.
[ ] [ +6 ms] shaderc command: [/home/varun/Documents/flutter/bin/cache/artifacts/engine/linux-x64/impellerc, --sksl, --runtime-stage-gles, --runtime-stage-vulkan, --iplr,
--sl=/media/varun/workspace/code/iprepdigiclass/build/flutter_assets/shaders/ink_sparkle.frag,
--spirv=/media/varun/workspace/code/iprepdigiclass/build/flutter_assets/shaders/ink_sparkle.frag.spirv,
--input=/home/varun/Documents/flutter/packages/flutter/lib/src/material/shaders/ink_sparkle.frag, --input-type=frag,
--include=/home/varun/Documents/flutter/packages/flutter/lib/src/material/shaders, --include=/home/varun/Documents/flutter/bin/cache/artifacts/engine/linux-x64/shader_lib]
[ ] [ +335 ms] debug_bundle_linux-x64_assets: Complete
[ ] [ +242 ms] Persisting file store
[ ] [ +8 ms] Done persisting file store
[ ] [ +7 ms] build succeeded.
[ ] [ +4 ms] "flutter assemble" took 2,425ms.
[ ] [ +169 ms] ensureAnalyticsSent: 162ms
[ ] [ ] Running 1 shutdown hook
[ ] [ ] Shutdown hooks complete
[ ] [ +9 ms] exiting with code 0
[ +164 ms] [2/9] Linking CXX shared library plugins/media_kit_libs_linux/libmedia_kit_libs_linux_plugin.so
[ +18 ms] [3/9] Linking CXX shared library plugins/screen_retriever/libscreen_retriever_plugin.so
[ +6 ms] [4/9] Linking CXX shared library plugins/window_manager/libwindow_manager_plugin.so
[ +26 ms] [5/9] Linking CXX shared library plugins/media_kit_video/libmedia_kit_video_plugin.so
[ +173 ms] [6/9] Building CXX object CMakeFiles/iprepdigiclass.dir/my_application.cc.o
[ +52 ms] [7/9] Building CXX object CMakeFiles/iprepdigiclass.dir/flutter/generated_plugin_registrant.cc.o
[ +204 ms] [8/9] Linking CXX executable intermediates_do_not_run/iprepdigiclass
[ ] [8/9] Install the project...
[ +6 ms] -- Install configuration: "Debug"
[ +6 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/iprepdigiclass
[ +2 ms] -- Set non-toolchain portion of runtime path of "/media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/iprepdigiclass" to "$ORIGIN/lib"
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/icudtl.dat
[ +67 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/lib/libflutter_linux_gtk.so
[+2063 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/lib/libmedia_kit_libs_linux_plugin.so
[ +1 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/lib/libmedia_kit_video_plugin.so
[ +8 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/lib/libscreen_retriever_plugin.so
[ +2 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/lib/libwindow_manager_plugin.so
[ +4 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/lib/libmedia_kit_native_event_loop.so
[ +32 ms] -- Up-to-date: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/lib
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/AssetManifest.bin
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/AssetManifest.json
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/assets
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/assets/fonts
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/assets/fonts/Inter-Black.ttf
[ +19 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/assets/fonts/Inter-Bold.ttf
[ +16 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/assets/fonts/Inter-ExtraBold.ttf
[ +19 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/assets/fonts/Inter-ExtraLight.ttf
[ +11 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/assets/fonts/Inter-Light.ttf
[ +12 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/assets/fonts/Inter-Medium.ttf
[ +11 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/assets/fonts/Inter-Regular.ttf
[ +14 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/assets/fonts/Inter-SemiBold.ttf
[ +15 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/assets/fonts/Inter-Thin.ttf
[ +31 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/assets/images
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/assets/images/svgs
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/assets/images/svgs/digitalclasslogo.svg
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/assets/images/svgs/digitalclasslogov.svg
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/assets/images/svgs/usbnotfound.svg
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/configuration
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/configuration/debug.json
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/FontManifest.json
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/fonts
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/fonts/MaterialIcons-Regular.otf
[ +50 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/kernel_blob.bin
[+2043 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/NOTICES.Z
[ +4 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/packages
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/packages/cupertino_icons
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/packages/cupertino_icons/assets
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/packages/cupertino_icons/assets/CupertinoIcons.ttf
[ +13 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/packages/media_kit
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/packages/media_kit/assets
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/packages/media_kit/assets/web
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/packages/media_kit/assets/web/hls1.4.10.js
[ +23 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/packages/wakelock_plus
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/packages/wakelock_plus/assets
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/packages/wakelock_plus/assets/no_sleep.js
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/shaders
[ ] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/shaders/ink_sparkle.frag
[ +1 ms] -- Installing: /media/varun/workspace/code/iprepdigiclass/build/linux/x64/debug/bundle/data/flutter_assets/version.json
[ +11 ms] Building Linux application... (completed in 8.3s)
[ +1 ms] ✓ Built build/linux/x64/debug/bundle/iprepdigiclass
[ +8 ms] Error waiting for a debug connection: The log reader stopped unexpectedly, or never started.
[ ] Error launching application on Linux.
[ ] "flutter run" took 8,944ms.
[ +3 ms]
#0 throwToolExit (package:flutter_tools/src/base/common.dart:10:3)
#1 RunCommand.runCommand (package:flutter_tools/src/commands/run.dart:874:9)
<asynchronous suspension>
#2 FlutterCommand.run.<anonymous closure> (package:flutter_tools/src/runner/flutter_command.dart:1408:27)
<asynchronous suspension>
#3 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#4 CommandRunner.runCommand (package:args/command_runner.dart:212:13)
<asynchronous suspension>
#5 FlutterCommandRunner.runCommand.<anonymous closure> (package:flutter_tools/src/runner/flutter_command_runner.dart:420:9)
<asynchronous suspension>
#6 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#7 FlutterCommandRunner.runCommand (package:flutter_tools/src/runner/flutter_command_runner.dart:364:5)
<asynchronous suspension>
#8 run.<anonymous closure>.<anonymous closure> (package:flutter_tools/runner.dart:130:9)
<asynchronous suspension>
#9 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:153:19)
<asynchronous suspension>
#10 main (package:flutter_tools/executable.dart:93:3)
<asynchronous suspension>
[ +91 ms] ensureAnalyticsSent: 90ms
[ ] Running 2 shutdown hooks
[ +10 ms] Shutdown hooks complete
[ +75 ms] exiting with code 1
varun@varun-idreamhp:/media/varun/workspace/code/iprepdigiclass$
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
varun@varun-idreamhp:/media/varun/workspace/code/iprepdigiclass$ flutter doctor -v
[✓] Flutter (Channel stable, 3.24.3, on Ubuntu 24.04.1 LTS 6.8.0-45-generic, locale en_US.UTF-8)
• Flutter version 3.24.3 on channel stable at /home/varun/Documents/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (6 days ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✗] Android toolchain - develop for Android devices
✗ Unable to locate Android SDK.
Install Android Studio from: https://developer.android.com/studio/index.html
On first launch it will assist you in installing the Android SDK components.
(or visit https://flutter.dev/to/linux-android-setup for detailed instructions).
If the Android SDK has been installed to a custom location, please use
`flutter config --android-sdk` to update to that location.
[✗] Chrome - develop for the web (Cannot find Chrome executable at google-chrome)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[✓] Linux toolchain - develop for Linux desktop
• Ubuntu clang version 18.1.3 (1ubuntu1)
• cmake version 3.28.3
• ninja version 1.11.1
• pkg-config version 1.8.1
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/to/linux-android-setup for detailed instructions).
[✓] VS Code (version 1.93.1)
• VS Code at /usr/share/code
• Flutter extension version 3.96.0
[✓] Connected device (1 available)
• Linux (desktop) • linux • linux-x64 • Ubuntu 24.04.1 LTS 6.8.0-45-generic
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 3 categories.
varun@varun-idreamhp:/media/varun/workspace/code/iprepdigiclass$
```
</details>

| platform-linux,a: desktop,P2,team-linux,triaged-linux | low | Critical |
2,532,918,546 | rust | Reconsider Rule 4E (early) for RFC 3627 | ## Context
[RFC3627](https://github.com/rust-lang/rfcs/pull/3627) set out to improve some unintuitive edges of match ergonomics. The subtlest part involves fixing this case:
```rust
let [x]: &[&T] = ...;
// x: &&T
let [&x]: &[&T] = ...;
// x: T
```
where a `&` pattern appears to remove two layers of references. T-lang agreed on the desirability of "eat-one-layer" instead, namely that a `&` pattern should only ever remove one layer of reference.
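For readers new to the motivating surprise, the stable Rust 2021 behavior can be checked directly. This is a runnable sketch; `let`-`else` is only there to make the refutable slice patterns compile:

```rust
fn demo() -> (u8, u8) {
    let slice: &[&u8] = &[&1u8];

    // With match ergonomics, `x` binds as `&&u8`.
    let [x] = slice else { unreachable!() };
    let xx: &&u8 = x;

    // A single `&` in the pattern peels *two* reference layers: `y: u8`.
    let [&y] = slice else { unreachable!() };
    let yy: u8 = y;

    (**xx, yy)
}

fn main() {
    println!("{:?}", demo());
}
```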
For that, the RFC proposes rule 2: "When a reference pattern matches against a reference, do not update the default binding mode". While this is arguably a straightforward change from an implementation perspective, let me show you that it does not appropriately solve the problem we set out to solve from a language perspective.
## Issue
The reason is simple: there are two references at play in `&&T`; with rule 2 we match the pattern against the _inner_ one of these. Some consequences (you can try these out in [my online tool](https://nadrieril.github.io/typing-rust-patterns/?opts1=AAEBAAABAQABAgEAAAEAAAAAAAAAAAA%3D&q=%5B%26mut+x%5D%3A+%26mut+%5B%26T%5D) which can run both [TC's](https://github.com/traviscross/match-ergonomics-formality/) and [my](https://github.com/Nadrieril/typing-rust-patterns) solvers; just note that rule4-early has bugs when combined with rule 5):
- The mutability that matters is the inner one:
```rust
let [x]: &[&mut T] = ...;
// x: &&mut T
let [&x]: &[&mut T] = ...;
// with rule 2: Type error
// with rule 4E: x: &mut T + borrow checking error
let [&mut x]: &[&mut T] = ...;
// with rule 2: x: &T
// with rule 4E: type error
let [x]: &mut [&T] = ...;
// x: &mut &T
let [&x]: &mut [&T] = ...;
// with rule 2: `x: &mut T`, which causes a borrow-checking error
// with rule 4E: type error
let [&mut x]: &mut [&T] = ...;
// with rule 2: Type error
// with rule 4E: x: &T
```
- References are considered inherited when they shouldn't be, which is visible with `mut` or `ref` bindings:
```rust
let [&x]: &[&T] = ...;
// with rule 2: x: &T
// with rule 4E: x: &T
let [&ref x]: &[&T] = ...;
// with rule 2: x: &T because the reference was considered inherited and `ref x` overrides that
// with rule 4E: x: &&T
let &[x]: &[&T] = ...;
// with rule 2: x: &T
// with rule 4E: x: &T
let &[ref x]: &[&T] = ...;
// with rule 2: x: &&T because the reference was not considered inherited
// with rule 4E: x: &&T
```
- Combined with the rest of RFC3627, we get weirdly inconsistent behaviors such as:
```rust
let [&mut (ref x)]: &mut [&mut T] = ...;
// RFC3627: `x: &T` because we got an inherited `&mut` and `ref` overrode it
// with rule 4E: x: &&mut T
let [&mut (ref x)]: &mut [& T] = ...;
// RFC3627: `x: &&T` because the mutability mismatch triggered rule 4 instead
// with rule 4E: x: &&T
```
In short: rule 2 does "eat-one-layer" but eats the wrong layer. The fix is simply to eat the other one. In the language of RFC3627: "when the binding mode is ref or ref mut, match the pattern against the binding mode as if it was a reference"; this has been called "rule 4-early" in our discussions.
## Edition
RFC3627 proposed to enable rules 1 and 2 over the edition. I propose to instead enable rules 1 and 4-early over the edition. Note that rule 4-early also replaces rule 4.
While I'm at it I would like to add a small additional rule, to enable fixing all the surprises:
- Rule 1.5: When the DBM (default binding mode) is not `move`, writing `ref` on a binding is an error.
We can always revert to the previous behavior (i.e. `ref x` swallows a reference if it is inherited) if we wish to later.
cc @traviscross | T-lang,A-patterns,C-discussion | low | Critical |
2,532,948,375 | excalidraw | Adding a child element with flowchart to an already present set of children overlaps earlier children if the shapes are large | ![flowchart](https://github.com/user-attachments/assets/d76b8daa-41b0-421d-98cc-09eddaec1e90)
| good first issue | low | Major |
2,532,957,962 | pytorch | TorchScript raises an OSError when using frozen dataclass | ### 🐛 Describe the bug
Running the following code:
```python
import torch
from dataclasses import dataclass
from typing import Tuple
@torch.jit.script
@dataclass(frozen=True)
class Info:
pass
class Model:
def __init__(self, info: Tuple[Info]):
self.infos = [i for i in info]
def forward(self, x):
return self.infos
```
Got:
```
Traceback (most recent call last):
File "/Users/hguandl/Library/micromamba/envs/ml/lib/python3.11/site-packages/torch/_sources.py", line 23, in get_source_lines_and_file
sourcelines, file_lineno = inspect.getsourcelines(obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hguandl/Library/micromamba/envs/ml/lib/python3.11/inspect.py", line 1240, in getsourcelines
lines, lnum = findsource(object)
^^^^^^^^^^^^^^^^^^
File "/Users/hguandl/Library/micromamba/envs/ml/lib/python3.11/inspect.py", line 1077, in findsource
raise OSError('could not get source code')
OSError: could not get source code
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/hguandl/submit/test.py", line 5, in <module>
@torch.jit.script
^^^^^^^^^^^^^^^^
File "/Users/hguandl/Library/micromamba/envs/ml/lib/python3.11/site-packages/torch/jit/_script.py", line 1375, in script
_compile_and_register_class(obj, _rcb, qualified_name)
File "/Users/hguandl/Library/micromamba/envs/ml/lib/python3.11/site-packages/torch/jit/_recursive.py", line 59, in _compile_and_register_class
ast = get_jit_class_def(obj, obj.__name__)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hguandl/Library/micromamba/envs/ml/lib/python3.11/site-packages/torch/jit/frontend.py", line 299, in get_jit_class_def
method_defs = [
^
File "/Users/hguandl/Library/micromamba/envs/ml/lib/python3.11/site-packages/torch/jit/frontend.py", line 300, in <listcomp>
get_jit_def(obj, name, self_name=self_name, is_classmethod=is_classmethod(obj))
File "/Users/hguandl/Library/micromamba/envs/ml/lib/python3.11/site-packages/torch/jit/frontend.py", line 331, in get_jit_def
parsed_def = parse_def(fn) if not isinstance(fn, _ParsedDef) else fn
^^^^^^^^^^^^^
File "/Users/hguandl/Library/micromamba/envs/ml/lib/python3.11/site-packages/torch/_sources.py", line 120, in parse_def
sourcelines, file_lineno, filename = get_source_lines_and_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hguandl/Library/micromamba/envs/ml/lib/python3.11/site-packages/torch/_sources.py", line 32, in get_source_lines_and_file
raise OSError(msg) from e
OSError: Can't get source for <function Info.__delattr__ at 0x13f7218a0>. TorchScript requires source access in order to carry out compilation, make sure original .py files are available.
```
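The root cause can be reproduced without TorchScript at all: `frozen=True` makes `dataclasses` synthesize `__setattr__`/`__delattr__` via `exec()`, so `inspect` has no source file to hand the compiler. A torch-free sketch:

```python
import dataclasses
import inspect


@dataclasses.dataclass(frozen=True)
class Info:
    pass


# frozen=True synthesizes __setattr__/__delattr__ at runtime with exec(),
# so there is no .py source behind them for inspect to find.
try:
    inspect.getsource(Info.__delattr__)
    has_source = True
except OSError:
    has_source = False

print("source available:", has_source)
```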
### Versions
Collecting environment information...
PyTorch version: 2.3.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.6.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: version 3.30.3
Libc version: N/A
Python version: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:34:54) [Clang 16.0.6 ] (64-bit runtime)
Python platform: macOS-14.6.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2 Max
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.11.0
[pip3] torch==2.3.0
[conda] Could not collect
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit | low | Critical |
2,532,965,804 | rust | `-C instrument-coverage` misses regions in const assertions | Hey, I've run into a problem with `-C instrument-coverage`, const generics and assertions.
I tried this code:
```rust
const fn check_even<const N: usize>() {
assert!(N % 2 == 0, "will fail once");
}
#[test]
fn true_branch() {
check_even::<8>(); // even number
}
#[test]
#[should_panic = "will fail once"]
fn false_branch() {
check_even::<9>(); // will trigger the assertion
}
```
I prepared a reproducer with `cargo`, which is not exactly a MCVE, but the code itself is rather minimal. Therefore the following commands will set up a Cargo project with the aforementioned code in the `lib.rs`-file.
```bash
cargo new --lib repro
RUSTFLAGS="-C instrument-coverage" cargo test --manifest-path repro/Cargo.toml --tests
llvm-profdata merge -sparse repro/default_* -o merged.profdata
llvm-cov report --instr-profile merged.profdata repro/target/debug/deps/repro-* --show-branch-summary=false # make sure, the correct test binary is picked up
```
I expected to see this happen: as all branches/regions are covered, I expect 100% test coverage.
Instead, this happened: only 80% region coverage (but 100% line coverage as expected):
```text
Filename Regions Missed Regions Cover Functions Missed Functions Executed Lines Missed Lines Cover
------------------------------------------------------------------------------------------------------------------------------------------------------------
/home/jfrimmel/git/repro/src/lib.rs 5 1 80.00% 3 0 100.00% 9 0 100.00%
------------------------------------------------------------------------------------------------------------------------------------------------------------
TOTAL 5 1 80.00% 3 0 100.00% 9 0 100.00%
```
Specifically, the region containing the assertion message is not covered, even though it clearly must be executed: the `should_panic` test would fail if the panic message were wrong.

For reference: this actually hits me in a project of mine, which aims to have 100% coverage. The only regions not covered are caused by assertions. The CI logs are located [here](https://app.circleci.com/pipelines/github/jfrimmel/emballoc/77/workflows/61129ef5-0baf-45a1-a367-61b337db5883/jobs/565?invite=true#step-102-4963_171).
### Meta
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-unknown-linux-gnu
release: 1.81.0
LLVM version: 18.1.7
```
This also reproduces with the nightlies 2024-09-18, 2024-09-09, 2024-09-01 and 2024-08-01.
@rustbot label +A-code-coverage | T-compiler,C-bug,A-code-coverage | low | Critical |
2,533,016,373 | vscode | TS: import does not follow `.js` suffix requirement | Steps to Reproduce:
1. open `keybindingsEditor.ts`
2. somewhere use the `unmnemonicLabel` method
3. let it auto import
=> 🐛 it adds `import { unmnemonicLabel } from '../../../../base/common/labels';` which is not ESM compatible.
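For reference, the ESM-compatible form simply adds the explicit `.js` extension to the same specifier:

```diff
-import { unmnemonicLabel } from '../../../../base/common/labels';
+import { unmnemonicLabel } from '../../../../base/common/labels.js';
```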
| bug,typescript,debt | low | Minor |
2,533,018,972 | rust | Rustonomicon and ManuallyDrop's documentation contradict each other about relying on drop order. | ### Location
https://doc.rust-lang.org/nomicon/dropck.html#a-related-side-note-about-drop-order
https://doc.rust-lang.org/std/mem/struct.ManuallyDrop.html#manuallydrop-and-drop-order
### Summary
The Rustonomicon [states](https://doc.rust-lang.org/nomicon/dropck.html#a-related-side-note-about-drop-order):
> While the drop order of fields inside a struct is defined, relying on it is fragile and subtle. When the order matters, it is better to use the `ManuallyDrop` wrapper.
The docs for `ManuallyDrop` [state](https://doc.rust-lang.org/std/mem/struct.ManuallyDrop.html#manuallydrop-and-drop-order):
> Rust has a well-defined [drop order](https://doc.rust-lang.org/reference/destructors.html) of values. To make sure that fields or locals are dropped in a specific order, reorder the declarations such that the implicit drop order is the correct one.
> It is possible to use ManuallyDrop to control the drop order, but this requires unsafe code and is hard to do correctly in the presence of unwinding.
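Both passages rely on the same underlying guarantee: struct fields are dropped in declaration order. A minimal illustration (the `Noisy`/`Pair` types are hypothetical, for demonstration only):

```rust
use std::sync::Mutex;

// Records the order in which values are dropped.
static LOG: Mutex<Vec<&'static str>> = Mutex::new(Vec::new());

struct Noisy(&'static str);
impl Drop for Noisy {
    fn drop(&mut self) {
        LOG.lock().unwrap().push(self.0);
    }
}

// Per the Reference, struct fields drop in declaration order,
// which is why reordering declarations controls drop order.
struct Pair {
    first: Noisy,
    second: Noisy,
}

fn drop_log() -> Vec<&'static str> {
    LOG.lock().unwrap().clear();
    drop(Pair { first: Noisy("first"), second: Noisy("second") });
    LOG.lock().unwrap().clone()
}

fn main() {
    println!("{:?}", drop_log());
}
```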
That is, when you rely on fields being dropped in a specific order, the Rustonomicon recommends using `ManuallyDrop`, while the `ManuallyDrop` documentation recommends using field ordering. Which of the two is correct? | T-lang,T-libs-api,A-docs,C-bug | low | Minor |
2,533,060,300 | vscode | debug restart does not work in test mode | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.93.1
- OS Version: Sonoma 14.5
Steps to Reproduce:
1. run a test from the integrated test runner
2. hook the debugger on a breakpoint
3. click the restart button
I also tried this on the latest insiders build, but it fails with the same error as in the current vs code
> Invalid message: either "program", "module", or "code" must be specified
<img width="1706" alt="Screenshot 2024-09-18 at 10 20 18" src="https://github.com/user-attachments/assets/187ec496-6a0e-4d2e-b3c8-3bad1c7ac65d">
I found others discussing this issue in the Playwright repo in a couple of places related to VS Code, so I opened an issue there that pulled those threads together:
https://github.com/microsoft/playwright/issues/32655
but I then realised that that was probably the wrong repo.
This change would make testing with debugging so much more efficient - I hit this issue multiple times a day - debug restart works on all my scripts, but not my tests :-( | info-needed | low | Critical |
2,533,065,287 | kubernetes | When the pod is deleted, GC the current image. | ### What would you like to be added?
I want to add an Image GC strategy. When a single Pod is deleted, the image is marked as needing GC and deleted when the next GC cycle starts.
### Why is this needed?
When a cluster has an image P2P distribution system, each node also acts as a proxy node for images. This causes an image to exist twice on a node, resulting in low disk utilization. The goal is therefore to delete the runtime's copy of the image and keep only the copy in the P2P system. | sig/node,kind/feature,lifecycle/rotten,needs-triage | low | Major |
2,533,075,473 | stable-diffusion-webui | [Feature Request]: The EC2 instance type is g6e.xlarge, but it's taking 8 seconds to render an image via the API, which is too long. What should I do? | ### Is there an existing issue for this?
- [ ] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
Speed up the rendering time on G6e.xlarge.
### Proposed workflow

### Additional information
_No response_ | enhancement | low | Minor |
2,533,096,372 | react | [React 19] Export SuspenseException | ## Problem Statement
Currently, in React's `use(promise)` mechanism, there is no straightforward way to determine whether an exception originates from a promise suspended within the `use` hook. This makes it challenging for developers to:
- Accurately catch and handle errors related to suspended promises.
- Differentiate between errors caused by `use(promise)` and other unrelated errors, complicating error handling logic.
## Proposal
React should export either:
1. A `SuspenseException` class that can be used to identify errors originating from a promise suspension, or
2. A utility function to check whether a given error is caused by a suspended promise in `use(promise)`.
### Example Usage
1. **SuspenseException Class:**
```jsx
import { SuspenseException } from 'react';
try {
use(fetchData());
} catch (error) {
  if (error instanceof SuspenseException) {
// Handle Suspense-related logic
} else {
// Handle other types of errors
}
}
```
2. **Utility Function:**
```jsx
import { isSuspenseException } from 'react';
try {
use(fetchData());
} catch (error) {
if (isSuspenseException(error)) {
// Handle Suspense-related logic
} else {
// Handle other types of errors
}
}
```
## Benefits
- **Error differentiation:** Clear distinction between promise suspensions and other errors.
- **Enhanced debugging:** Easier diagnosis of Suspense-related issues in both development and production.
- **Safer error handling:** Prevents unintended catches of non-Suspense errors during Suspense management.
## Conclusion
By exporting either a `SuspenseException` or a utility function, React would offer developers more control over managing Suspense-related errors, improving both error handling and debugging in applications. | Type: Feature Request,React 19 | low | Critical |
2,533,185,148 | opencv | JS: Support new `using` javascript keyword for explicit resource management | ### Describe the feature and motivation
There is a TC39 proposal to add the `using` keyword to JavaScript:
https://github.com/tc39/proposal-explicit-resource-management
This is already implemented in TypeScript:
https://www.typescriptlang.org/docs/handbook/release-notes/typescript-5-2.html#using-declarations-and-explicit-resource-management
It would be great if, instead of having to call `.delete()` manually on objects, we could use the `using` keyword to ensure memory cleanup.
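For illustration, a hedged sketch of what opting in could look like. `FakeMat` stands in for `cv.Mat` and is not a real OpenCV.js API; today the cleanup still needs `try`/`finally`, whereas the `using` keyword would invoke `[Symbol.dispose]()` automatically on scope exit:

```javascript
// Polyfill for runtimes that predate the well-known symbol.
Symbol.dispose ??= Symbol("Symbol.dispose");

class FakeMat {
  constructor() { this.deleted = false; }
  delete() { this.deleted = true; }            // what cv.Mat.delete() does today
  [Symbol.dispose]() { this.delete(); }        // what `using` would call on scope exit
}

// With the proposal, this whole dance becomes: `using mat = new FakeMat();`
const mat = new FakeMat();
try {
  // ... work with mat ...
} finally {
  mat[Symbol.dispose]();
}
console.log("deleted:", mat.deleted);
```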
### Additional context
_No response_ | feature,category: javascript (js),community help requested | low | Minor |
2,533,194,326 | pytorch | `to()` Method Does Not Move Internal Components of Sparse Tensors | ### 🐛 Describe the bug
**Description:**
In PyTorch 2.4, calling `.to(device)` on an `nn.Module` that contains sparse CSR tensors does not move the internal components of the sparse tensors (e.g., `crow_indices`, `col_indices`, `values`) to the specified device, while it was the case in PyTorch 2.2. This causes potential device mismatch errors.
**Minimal Working Example:**
```python
import torch
import torch.nn as nn
class SparseLayer(nn.Module):
def __init__(self):
super(SparseLayer, self).__init__()
sparse_tensor = torch.empty(5, 5, device='cpu').uniform_(-1, 1).to_sparse_csr()
self.matrix = nn.Parameter(sparse_tensor)
def forward(self, x):
pass
if __name__ == "__main__":
# Create an instance of the SparseLayer
layer = SparseLayer()
# Move the layer to GPU
layer = layer.to('cuda')
# Check the device of the matrix and its internals after moving to GPU
print("After moving to CUDA:")
print(f"Matrix device: {layer.matrix.device}")
print(f"Matrix crow_indices device: {layer.matrix.crow_indices().device}")
print(f"Matrix col_indices device: {layer.matrix.col_indices().device}")
print(f"Matrix values device: {layer.matrix.values().device}")
# Now, move the layer back to CPU
layer = layer.to('cpu')
# Check the device of the matrix and its internals after moving back to CPU
print("\nAfter moving back to CPU:")
print(f"Matrix device: {layer.matrix.device}")
print(f"Matrix crow_indices device: {layer.matrix.crow_indices().device}")
print(f"Matrix col_indices device: {layer.matrix.col_indices().device}")
print(f"Matrix values device: {layer.matrix.values().device}")
# Manually move the sparse matrix back to CUDA and check its internals
layer.matrix = nn.Parameter(layer.matrix.to('cuda'))
print("\nAfter manually moving matrix to CUDA:")
print(f"Matrix device: {layer.matrix.device}")
print(f"Matrix crow_indices device: {layer.matrix.crow_indices().device}")
print(f"Matrix col_indices device: {layer.matrix.col_indices().device}")
print(f"Matrix values device: {layer.matrix.values().device}")
```
**Expected Behavior:**
- When calling `layer.to(device)`, the sparse tensor and all of its internal components (e.g., `crow_indices`, `col_indices`, `values`) should be moved to the specified device.
**Observed Behavior:**
- The code `layer.to('cuda')` moves the sparse tensor to the GPU but not its internal components on PyTorch 2.4, while the internal components were moved in PyTorch 2.2. Manually moving the matrix to the GPU with `nn.Parameter(layer.matrix.to('cuda'))` moves all the internals to the GPU. This inconsistency is error-prone.
### Versions
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 12 (bookworm) (x86_64)
GCC version: (Debian 12.2.0-14) 12.2.0
Clang version: Could not collect
CMake version: version 3.25.1
Libc version: glibc-2.36
Python version: 3.11.2 (main, May 2 2024, 11:59:08) [GCC 12.2.0] (64-bit runtime)
Python platform: Linux-6.1.0-23-amd64-x86_64-with-glibc2.36
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-PCIE-40GB
GPU 1: NVIDIA A100-PCIE-40GB
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Silver 4215R CPU @ 3.20GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 2
Stepping: 7
BogoMIPS: 6400.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe sys
call nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx
est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fa
ult epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2
smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc
cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 22 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31
Vulnerability Gather data sampling: Vulnerable: No microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.1
[pip3] torch==2.4.1
[pip3] torchaudio==2.4.1
[pip3] torchvision==0.19.1
[pip3] triton==3.0.0
[conda] No relevant packages
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip | module: sparse,triaged | low | Critical |
2,533,225,792 | vscode | Making all the code-cell outputs scrollable | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
Even when editing the setting of VS Code called `notebook.output.scrolling` to `true`, the output of a code-cell in a notebook (i.e. file with extension `.ipynb`) does not get restricted, except when using the function `print(...)` to create output.

If you for example try using either:
- the function `display(...)` from `IPython.display` to display a `pandas.DataFrame`, or

- making a plot using `matplotlib.pyplot` and displaying it in the output (or any kind of an image),

then the size of the code-cell output is not restricted and scrollable, but stretches all the way until the whole element (either the `pandas.DataFrame`, or the plot) is displayed.
Is there an option in VS Code which will allow its user to restrict the size of a code-cell output, not matter how the output is created (i.e. no matter if the output is generated using the function `display(...)`, if it is a plot, if it is using the function `print(...)`)?
**Note**: There was previously a similar issue created: [the_issue](https://github.com/microsoft/vscode/issues/206226), but the requestor never wrote a reply to what you had to say. I think that my request is a bit different, because I would not change the fact that each output is independent, but I would only make it possible to make either:
1. make a setting which would allow the user to specify that if all the outputs of one code-cell exceed some size, to the code-cell output (looking at the code-cell output as an output, no matter if is actually comprised out of multiple outputs logically) scrollable
2. make it possible to specify for each of the outputs of one code-cell individually to be scrollable if they exceed a certain size.
In my opinion, it would be the best if both of these settings were implemented, but having at least one of the two would do the trick for most of the people. | feature-request | low | Major |
2,533,248,492 | vscode | Windows integration crash | https://dev.azure.com/monacotools/Monaco/_build/results?buildId=293714&view=results | bug,freeze-slow-crash-leak,windows,integration-test-failure | low | Critical |
2,533,343,260 | TypeScript | [suggest] constructor init delegation with "strictPropertyInitialization" | ### 🔍 Search Terms
strictPropertyInitialization, Property has no initializer and is not definitely assigned in the constructor.
### ✅ Viability Checklist
- [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [X] This wouldn't change the runtime behavior of existing JavaScript code
- [X] This could be implemented without emitting different JS based on the types of the expressions
- [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [X] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [X] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
Hello, have you ever considered adding a special hack to delegate the role of a class constructor when using `strictPropertyInitialization: true` in the **tsconfig** file?
In fact, I have an issue with an **architecture that makes the constructor unnecessary until the class is injected into a facade that activates the system and creates the private dependencies of the class**.
There is no other solution except to disable this rule, which is frankly not great and very limiting when a project uses several design patterns requiring a distinct approach.
To take advantage of the security of `strictPropertyInitialization`, would it be possible to have **maybe** a special utility type that tells TypeScript to check this method instead of the constructor?
Here are **examples** that could potentially be good solutions:
I personally prefer examples **2 and 3.**
The first example can bring more complexity
---
### 📃 Motivating Example
**Suggest** 1: method with special type
We add `methodname( this:ThisConstructor )`
```ts
class SystemA {
public readonly instanceNeedDependancyAvaibleLater: object; // Property 'instanceNeedDependancyAvaibleLater' has no initializer and is not definitely assigned in the constructor.
constructor() {
}
  // ThisConstructor could be a special marker type that tells tsserver, with "strictPropertyInitialization", to ignore the constructor and scan assignments in this method instead
init( this:ThisConstructor ) {
this.instanceNeedDependancyAvaibleLater = {};
}
}
```
issues:
- can add complexity
- can add perf issues in TS (it needs to scan every method)
- issues when two or more such methods exist
- may have issues with overloads
---
**Suggest** 2: detect via an illegal `this` parameter in the constructor.
It is currently illegal to declare a `this` parameter in a constructor.
We could take advantage of that to tell TS to check another method for property initialization.
I'm using a string here, but it's just to give the idea.
```ts
class SystemA {
public readonly instanceNeedDependancyAvaibleLater: object; // Property 'instanceNeedDependancyAvaibleLater' has no initializer and is not definitely assigned in the constructor.
  // this:'methodName' could maybe tell TS to find this method and scan its assignments instead of the constructor
constructor( this:'init' ) {
}
init( ) {
this.instanceNeedDependancyAvaibleLater = {};
}
}
```
issues:
- may not work with `protected` and `private` methods, depending on how TS works
- rename/safety issues when using a string; TS would need to index the token or suggest types
- adds complexity to the constructor arguments
- issues when extending a class with constructor params ("need implement")
---
**Suggest** 3: using comment
```ts
class SystemA {
public readonly instanceNeedDependancyAvaibleLater: object; // Property 'instanceNeedDependancyAvaibleLater' has no initializer and is not definitely assigned in the constructor.
// using a comment flag to scan .init method instead of constructor
//@ts-strictPropertyInitialization {SystemA.prototype.init}
constructor( ) {
}
init( ) {
this.instanceNeedDependancyAvaibleLater = {};
}
}
```
issues:
- may not work with `protected` and `private` methods, depending on how TypeScript resolves them
- it may be perceived and treated as a mere comment, which adds ambiguity:
  - _Is it a comment, or an instruction essential to type checking? The comment would then steer the interpretation of the code._
---
**Suggest** 4: tracking the flow from constructor:
https://github.com/microsoft/TypeScript/issues/32194
https://github.com/microsoft/TypeScript/issues/30462
```ts
class SystemA {
public readonly instanceNeedDependancyAvaibleLater: object; // Property 'instanceNeedDependancyAvaibleLater' has no initializer and is not definitely assigned in the constructor.
constructor() {
this.init();
}
    // flow analysis starting from the constructor could follow the this.init() call and treat these assignments as definite initialization
init() {
this.instanceNeedDependancyAvaibleLater = {};
}
}
```
issues:
- could hurt tsserver performance (it would need to scan every method and what it does)
- adds complexity with multiple methods: which one performs the assignments, and why
- it may be a breaking change
### 💻 Use Cases
1. What do you want to use this for?
This is to address the need for an architecture that requires initialization after adding a loosely coupled facade.
The idea is to maintain the security of declarations, which current solutions do not allow.
This approach can also open the door to design patterns that are difficult to manage with TypeScript.
2. What shortcomings exist with current approaches?
Currently, the available tricks do not preserve the safety and philosophy of TypeScript:
1: `//@ts-ignore`
2: `public props!:object`
3: `strictPropertyInitialization:false`
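For illustration, trick 2 in practice (a minimal sketch; note `readonly` has to be dropped as well, since assignment outside the constructor is rejected for readonly properties):

```typescript
// The definite assignment assertion (`!`) silences TS2564, but the
// compiler no longer checks that init() is actually called before
// the property is used.
class SystemA {
    public instanceNeedDependancyAvaibleLater!: object;

    init(): void {
        this.instanceNeedDependancyAvaibleLater = {};
    }
}

const system = new SystemA();
system.init();
```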
3. What workarounds are you using in the meantime?
`public props!:object` | Suggestion,Awaiting More Feedback | low | Minor |
2,533,386,189 | PowerToys | Auto-type clipboard content as keystrokes (paste) | ### Description of the new feature / enhancement
When I'm required to use different vendors' password managers, RMMs, and remote connection tools for work, I often find myself unable to copy-paste a password (or any text) from the host machine to the guest. In my humble opinion, it would be an excellent feature if a specific keyboard combination could auto-type the text from the clipboard as keystrokes.
Similar to how KeePass auto-fills passwords.
### Scenario when this would be used?
It would be useful for helpdesk engineers and sysadmins who often have to "jump" into servers or workstations through multiple remote connection tools, particularly when the copy/paste function doesn't work (e.g., in Quick Assist).
### Supporting information
_No response_ | Resolution-Helped User | low | Major |
2,533,407,946 | ui | [feat]: Stepper UI component | ### Feature description
Is it possible to implement a stepper using components already in shadcn?
e.g. Google Maps

e.g. GMPRO Viewer

### Affected component/components
_No response_
### Additional Context
_No response_
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Major |
2,533,416,563 | flutter | [video_player_android] Regression in version 2.7.2 on Google Chromecast | ### Steps to reproduce
Upgrading video_player_android from 2.7.1 to 2.7.2 or 2.7.3 causes all of our videos to stop working on Google Chromecast.
No other changes applied to our app or other dependencies.
We tried both h265 and h264, hls and plain mp4, same results.
Please tell us if we can do anything to help diagnose this issue.
Logcat portion is attached.
### Expected results
Video playback should work.
### Actual results
ExoPlayer throws several exceptions; the video does not play.
### Code sample
<details open><summary>Code sample</summary>
Very simple implementation; only a portion of our app, shown for reference...
In initState:
```dart
_videoPlayerController = VideoPlayerController.networkUrl(uri);
```
In build:
```dart
Center(
child: AspectRatio(
aspectRatio: _videoPlayerController.value.aspectRatio,
child: VideoPlayer(_videoPlayerController),
),
),
```
In playerTickListener:
```dart
if (!_initialized && _videoPlayerController.value.isInitialized) {
setState(() {
_initialized = true;
_videoPlayerController.play();
});
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
09-18 11:36:05.975 3690 3813 W ProcessStats: Tracking association SourceState{e006f1c com.google.android.katniss:search/10036 BTopFgs #50869} whose proc state 2 is better than process ProcessState{bc08bf8 com.google.android.gms.persistent/10040 pkg=com.google.android.gms} proc state 3 (14 skipped)
09-18 11:36:06.033 4693 4802 W TifLauncherHelper: Allow remove LastInput? com.pravatv.flutter in foreground
09-18 11:36:06.045 4693 4802 W LastUsedInputProtoDataS: Sending last used input update broadcast
09-18 11:36:06.048 4693 4693 E libprocessgroup: set_timerslack_ns write failed: Operation not permitted
09-18 11:36:06.050 4693 4693 E libprocessgroup: set_timerslack_ns write failed: Operation not permitted
09-18 11:36:06.050 4693 4693 E libprocessgroup: set_timerslack_ns write failed: Operation not permitted
09-18 11:36:06.052 4693 4693 E libprocessgroup: set_timerslack_ns write failed: Operation not permitted
09-18 11:36:06.052 4693 4693 E libprocessgroup: set_timerslack_ns write failed: Operation not permitted
09-18 11:36:06.053 4693 4693 E libprocessgroup: set_timerslack_ns write failed: Operation not permitted
09-18 11:36:06.076 4693 4693 W LastUsedInputUpdateRece: Notify last used input data change
09-18 11:36:07.421 3690 4660 E TaskPersister: File error accessing recents directory (directory doesn't exist?).
09-18 11:36:17.729 3690 4808 W ProcessStats: Tracking association SourceState{a56d84f com.google.android.katniss:interactor/10036 BTopFgs #50911} whose proc state 2 is better than process ProcessState{bc08bf8 com.google.android.gms.persistent/10040 pkg=com.google.android.gms} proc state 3 (395 skipped)
09-18 11:36:21.075 9175 9778 W Finsky : [140] qoe.j(28): SLM: no metadata property com.android.vending.derived.apk.id found for shared library com.google.android.trichromelibrary_447211480:
09-18 11:36:21.075 9175 9778 W Finsky : [140] qoe.j(28): SLM: no metadata property com.android.vending.sdk.version.patch found for shared library com.google.android.trichromelibrary_447211480:
09-18 11:36:21.076 9175 9778 W Finsky : [140] qoe.j(28): SLM: no metadata property com.android.vending.derived.apk.id found for shared library com.google.android.trichromelibrary_661312830:
09-18 11:36:21.077 9175 9778 W Finsky : [140] qoe.j(28): SLM: no metadata property com.android.vending.sdk.version.patch found for shared library com.google.android.trichromelibrary_661312830:
09-18 11:36:21.359 3723 3723 E SurfaceFlinger: Attempt to set frame rate on an unrecognized IGraphicBufferProducer
09-18 11:36:23.198 9451 9782 W VideoCapabilities: Unsupported mime video/dolby-vision
09-18 11:36:23.198 9451 9782 W VideoCapabilities: Unsupported mime video/dolby-vision
09-18 11:36:23.200 9451 9782 W VideoCapabilities: Unrecognized profile/level 0/3 for video/mpeg2
09-18 11:36:23.202 9451 9782 W VideoCapabilities: Unrecognized level 16 for video/x-vnd.on2.vp8
09-18 11:36:23.202 9451 9782 W VideoCapabilities: Unrecognized level 32 for video/x-vnd.on2.vp8
09-18 11:36:23.202 9451 9782 W VideoCapabilities: Unrecognized level 64 for video/x-vnd.on2.vp8
09-18 11:36:23.202 9451 9782 W VideoCapabilities: Unrecognized level 128 for video/x-vnd.on2.vp8
09-18 11:36:23.202 9451 9782 W VideoCapabilities: Unrecognized level 256 for video/x-vnd.on2.vp8
09-18 11:36:23.202 9451 9782 W VideoCapabilities: Unrecognized level 512 for video/x-vnd.on2.vp8
09-18 11:36:23.202 9451 9782 W VideoCapabilities: Unrecognized level 1024 for video/x-vnd.on2.vp8
09-18 11:36:23.202 9451 9782 W VideoCapabilities: Unrecognized level 2048 for video/x-vnd.on2.vp8
09-18 11:36:23.202 9451 9782 W VideoCapabilities: Unrecognized level 4096 for video/x-vnd.on2.vp8
09-18 11:36:23.202 9451 9782 W VideoCapabilities: Unrecognized level 8192 for video/x-vnd.on2.vp8
09-18 11:36:23.202 9451 9782 W VideoCapabilities: Unrecognized level 16384 for video/x-vnd.on2.vp8
09-18 11:36:23.202 9451 9782 W VideoCapabilities: Unrecognized level 32768 for video/x-vnd.on2.vp8
09-18 11:36:23.203 9451 9782 W VideoCapabilities: Unsupported profile 4 for video/avc
09-18 11:36:23.344 3862 12847 W AmlSysfsUtil: [amsysfs_get_sysfs_str] /sys/module/am_vecm/parameters/dolby_vision_enable failed!
09-18 11:36:23.346 3862 3862 W HwBinder:3862_5: type=1400 audit(0.0:2310): avc: denied { read } for name="u:object_r:build_bootimage_prop:s0" dev="tmpfs" ino=11384 scontext=u:r:mediacodec:s0 tcontext=u:object_r:build_bootimage_prop:s0 tclass=file permissive=0
09-18 11:36:23.346 3862 12847 E secmem_tz: [Secure_NegotiateVersion:374] Secure_NegotiateVersion 2 2 2 2 2 2
09-18 11:36:23.346 3862 12847 E secmem_tz: [Secure_NegotiateVersion:384] Negotiated secmem version = 2
09-18 11:36:23.347 3862 12847 W OmxLogConf: Can not read property media.omx.log_levels, using 0
09-18 11:36:23.348 3862 12847 W libc : Access denied finding property "ro.bootimage.build.fingerprint"
09-18 11:36:23.365 3862 12847 E OMXNodeInstance: setParameter(0xebb2c444:amlogic.hevc.decoder.awesome2, OMX.google.android.index.allocateNativeHandle(0x7f00000a): Output:1 en=0) ERROR: UnsupportedSetting(0x80001019)
09-18 11:36:23.365 3862 12847 E OMXNodeInstance: setParameter(0xebb2c444:amlogic.hevc.decoder.awesome2, OMX.google.android.index.storeMetaDataInBuffers(0x7f000002): Output:1 en=1 type=1) ERROR: BadPortIndex(0x8000101b)
09-18 11:36:23.365 9451 9796 E ACodec : [OMX.amlogic.hevc.decoder.awesome2] setPortMode on output to DynamicANWBuffer failed w/ err -2147483648
09-18 11:36:23.366 3862 12847 E OMXNodeInstance: getParameter(0xebb2c444:amlogic.hevc.decoder.awesome2, ??(0x6f600011)) ERROR: UnsupportedIndex(0x8000101a)
09-18 11:36:23.366 3862 3862 E OMXNodeInstance: setParameter(0xebb2c444:amlogic.hevc.decoder.awesome2, OMX.google.android.index.allocateNativeHandle(0x7f00000a): Output:1 en=0) ERROR: UnsupportedSetting(0x80001019)
09-18 11:36:23.367 3862 3862 E OMXNodeInstance: setParameter(0xebb2c444:amlogic.hevc.decoder.awesome2, OMX.google.android.index.storeMetaDataInBuffers(0x7f000002): Output:1 en=0 type=1) ERROR: BadPortIndex(0x8000101b)
09-18 11:36:23.369 3862 3862 E OmxVideoDecoder: reset input buffer:2073600
09-18 11:36:23.370 3862 3862 W libc : Access denied finding property "media.omx.width"
09-18 11:36:23.371 3862 3862 W libc : Access denied finding property "media.omx.height"
09-18 11:36:23.366 3862 3862 W omx@1.0-service: type=1400 audit(0.0:2311): avc: denied { read } for name="u:object_r:default_prop:s0" dev="tmpfs" ino=11429 scontext=u:r:mediacodec:s0 tcontext=u:object_r:default_prop:s0 tclass=file permissive=0
09-18 11:36:23.366 3862 3862 W omx@1.0-service: type=1400 audit(0.0:2312): avc: denied { read } for name="u:object_r:default_prop:s0" dev="tmpfs" ino=11429 scontext=u:r:mediacodec:s0 tcontext=u:object_r:default_prop:s0 tclass=file permissive=0
09-18 11:36:23.375 3862 3862 E OMXNodeInstance: setConfig(0xebb2c444:amlogic.hevc.decoder.awesome2, ConfigPriority(0x6f800002)) ERROR: UnsupportedIndex(0x8000101a)
09-18 11:36:23.386 3862 12847 E OMXNodeInstance: getConfig(0xebb2c444:amlogic.hevc.decoder.awesome2, ??(0x7f00000c)) ERROR: UnsupportedSetting(0x80001019)
09-18 11:36:23.428 3862 32103 W OmxVideoDecoder: 4kosd set output error, newBufferCount 13 > 8
09-18 11:36:23.428 3862 32103 E OMXNodeInstance: setParameter(0xebb2c444:amlogic.hevc.decoder.awesome2, ParamPortDefinition(0x2000001)) ERROR: UnsupportedSetting(0x80001019)
09-18 11:36:23.428 9451 9796 W ACodec : [OMX.amlogic.hevc.decoder.awesome2] setting nBufferCountActual to 13 failed: -1010
09-18 11:36:23.429 3862 32103 W OmxVideoDecoder: 4kosd set output error, newBufferCount 12 > 8
09-18 11:36:23.429 3862 32103 E OMXNodeInstance: setParameter(0xebb2c444:amlogic.hevc.decoder.awesome2, ParamPortDefinition(0x2000001)) ERROR: UnsupportedSetting(0x80001019)
09-18 11:36:23.429 9451 9796 W ACodec : [OMX.amlogic.hevc.decoder.awesome2] setting nBufferCountActual to 12 failed: -1010
09-18 11:36:23.429 3862 32103 W OmxVideoDecoder: 4kosd set output error, newBufferCount 11 > 8
09-18 11:36:23.429 3862 32103 E OMXNodeInstance: setParameter(0xebb2c444:amlogic.hevc.decoder.awesome2, ParamPortDefinition(0x2000001)) ERROR: UnsupportedSetting(0x80001019)
09-18 11:36:23.429 9451 9796 W ACodec : [OMX.amlogic.hevc.decoder.awesome2] setting nBufferCountActual to 11 failed: -1010
09-18 11:36:23.429 3862 32103 W OmxVideoDecoder: 4kosd set output error, newBufferCount 10 > 8
09-18 11:36:23.429 3862 32103 E OMXNodeInstance: setParameter(0xebb2c444:amlogic.hevc.decoder.awesome2, ParamPortDefinition(0x2000001)) ERROR: UnsupportedSetting(0x80001019)
09-18 11:36:23.429 9451 9796 W ACodec : [OMX.amlogic.hevc.decoder.awesome2] setting nBufferCountActual to 10 failed: -1010
09-18 11:36:23.429 9451 9796 E ACodec : Failed to allocate buffers after transitioning to IDLE state (error 0xfffffc0e)
09-18 11:36:23.429 9451 9796 E ACodec : signalError(omxError 0x80001001, internalError -1010)
09-18 11:36:23.430 9451 9793 E MediaCodec: Codec reported err 0xfffffc0e, actionCode 0, while in state 5/STARTING
09-18 11:36:23.433 9451 9796 W AHierarchicalStateMachine: Warning message AMessage(what = 'omxI') = {
09-18 11:36:23.433 9451 9796 W AHierarchicalStateMachine: int32_t type = 0
09-18 11:36:23.433 9451 9796 W AHierarchicalStateMachine: int32_t event = 0
09-18 11:36:23.433 9451 9796 W AHierarchicalStateMachine: int32_t data1 = 0
09-18 11:36:23.433 9451 9796 W AHierarchicalStateMachine: int32_t data2 = 1
09-18 11:36:23.433 9451 9796 W AHierarchicalStateMachine: } unhandled in root state.
09-18 11:36:23.434 9451 9782 W MediaCodecRenderer: Failed to initialize decoder: OMX.amlogic.hevc.decoder.awesome2
09-18 11:36:23.434 9451 9782 W MediaCodecRenderer: java.lang.IllegalStateException
09-18 11:36:23.434 9451 9782 W MediaCodecRenderer: at android.media.MediaCodec.native_stop(Native Method)
09-18 11:36:23.434 9451 9782 W MediaCodecRenderer: at android.media.MediaCodec.stop(MediaCodec.java:2300)
09-18 11:36:23.434 9451 9782 W MediaCodecRenderer: at e0.d.release(SourceFile:1)
09-18 11:36:23.434 9451 9782 W MediaCodecRenderer: at e0.d$b.d(SourceFile:1)
09-18 11:36:23.434 9451 9782 W MediaCodecRenderer: at e0.n.a(SourceFile:1)
09-18 11:36:23.434 9451 9782 W MediaCodecRenderer: at e0.b0.a1(SourceFile:1)
09-18 11:36:23.434 9451 9782 W MediaCodecRenderer: at e0.b0.j1(SourceFile:1)
09-18 11:36:23.434 9451 9782 W MediaCodecRenderer: at e0.b0.i1(SourceFile:1)
09-18 11:36:23.434 9451 9782 W MediaCodecRenderer: at e0.b0.n1(SourceFile:1)
09-18 11:36:23.434 9451 9782 W MediaCodecRenderer: at r0.k.n1(SourceFile:1)
09-18 11:36:23.434 9451 9782 W MediaCodecRenderer: at e0.b0.x1(SourceFile:1)
09-18 11:36:23.434 9451 9782 W MediaCodecRenderer: at e0.b0.j(SourceFile:1)
09-18 11:36:23.434 9451 9782 W MediaCodecRenderer: at r0.k.j(SourceFile:1)
09-18 11:36:23.434 9451 9782 W MediaCodecRenderer: at v.s1.w(SourceFile:1)
09-18 11:36:23.434 9451 9782 W MediaCodecRenderer: at v.s1.handleMessage(SourceFile:1)
09-18 11:36:23.434 9451 9782 W MediaCodecRenderer: at android.os.Handler.dispatchMessage(Handler.java:102)
09-18 11:36:23.434 9451 9782 W MediaCodecRenderer: at android.os.Looper.loopOnce(Looper.java:201)
09-18 11:36:23.434 9451 9782 W MediaCodecRenderer: at android.os.Looper.loop(Looper.java:288)
09-18 11:36:23.434 9451 9782 W MediaCodecRenderer: at android.os.HandlerThread.run(HandlerThread.java:67)
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: Video codec error
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: e0.b0$d: Decoder init failed: OMX.amlogic.hevc.decoder.awesome2, Format(0, null, null, video/hevc, hvc1.1.2.L120.90, -1, null, [1920, 1080, -1.0, ColorInfo(Unset color space, Unset color range, Unset color transfer, false, 8bit Luma, 8bit Chroma)], [-1, -1])
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: at e0.b0.j1(SourceFile:1)
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: at e0.b0.i1(SourceFile:1)
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: at e0.b0.n1(SourceFile:1)
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: at r0.k.n1(SourceFile:1)
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: at e0.b0.x1(SourceFile:1)
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: at e0.b0.j(SourceFile:1)
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: at r0.k.j(SourceFile:1)
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: at v.s1.w(SourceFile:1)
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: at v.s1.handleMessage(SourceFile:1)
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: at android.os.Handler.dispatchMessage(Handler.java:102)
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: at android.os.Looper.loopOnce(Looper.java:201)
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: at android.os.Looper.loop(Looper.java:288)
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: at android.os.HandlerThread.run(HandlerThread.java:67)
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: Caused by: java.lang.IllegalStateException
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: at android.media.MediaCodec.native_stop(Native Method)
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: at android.media.MediaCodec.stop(MediaCodec.java:2300)
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: at e0.d.release(SourceFile:1)
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: at e0.d$b.d(SourceFile:1)
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: at e0.n.a(SourceFile:1)
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: at e0.b0.a1(SourceFile:1)
09-18 11:36:23.435 9451 9782 E MediaCodecVideoRenderer: ... 13 more
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: Playback error
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: v.u: MediaCodecVideoRenderer error, index=0, format=Format(0, null, null, video/hevc, hvc1.1.2.L120.90, -1, null, [1920, 1080, -1.0, ColorInfo(Unset color space, Unset color range, Unset color transfer, false, 8bit Luma, 8bit Chroma)], [-1, -1]), format_supported=YES
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: at v.s1.handleMessage(SourceFile:1)
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: at android.os.Handler.dispatchMessage(Handler.java:102)
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: at android.os.Looper.loopOnce(Looper.java:201)
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: at android.os.Looper.loop(Looper.java:288)
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: at android.os.HandlerThread.run(HandlerThread.java:67)
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: Caused by: e0.b0$d: Decoder init failed: OMX.amlogic.hevc.decoder.awesome2, Format(0, null, null, video/hevc, hvc1.1.2.L120.90, -1, null, [1920, 1080, -1.0, ColorInfo(Unset color space, Unset color range, Unset color transfer, false, 8bit Luma, 8bit Chroma)], [-1, -1])
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: at e0.b0.j1(SourceFile:1)
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: at e0.b0.i1(SourceFile:1)
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: at e0.b0.n1(SourceFile:1)
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: at r0.k.n1(SourceFile:1)
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: at e0.b0.x1(SourceFile:1)
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: at e0.b0.j(SourceFile:1)
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: at r0.k.j(SourceFile:1)
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: at v.s1.w(SourceFile:1)
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: ... 5 more
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: Caused by: java.lang.IllegalStateException
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: at android.media.MediaCodec.native_stop(Native Method)
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: at android.media.MediaCodec.stop(MediaCodec.java:2300)
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: at e0.d.release(SourceFile:1)
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: at e0.d$b.d(SourceFile:1)
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: at e0.n.a(SourceFile:1)
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: at e0.b0.a1(SourceFile:1)
09-18 11:36:23.437 9451 9782 E ExoPlayerImplInternal: ... 13 more
09-18 11:36:23.452 9451 9476 E flutter : [ERROR:flutter/runtime/dart_vm_initializer.cc(41)] Unhandled Exception: PlatformException(VideoError, Video player had error v.u: MediaCodecVideoRenderer error, index=0, format=Format(0, null, null, video/hevc, hvc1.1.2.L120.90, -1, null, [1920, 1080, -1.0, ColorInfo(Unset color space, Unset color range, Unset color transfer, false, 8bit Luma, 8bit Chroma)], [-1, -1]), format_supported=YES, null, null)
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
❯ flutter doctor -v
[✓] Flutter (Channel stable, 3.24.3, on macOS 14.6.1 23G93 darwin-arm64, locale en-US)
• Flutter version 3.24.3 on channel stable at /opt/homebrew/Caskroom/flutter/3.13.6/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (7 days ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/nedim/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• ANDROID_HOME = /Users/nedim/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2023.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
[✓] VS Code (version 1.93.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.96.0
[✓] Connected device (4 available)
• Chromecast (mobile) • 192.168.86.100:5555 • android-arm • Android 12 (API 31)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.6.1 23G93 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.6.1 23G93 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 128.0.6613.138
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| c: regression,e: device-specific,platform-android,p: video_player,package,P3,team-android,triaged-android | low | Critical |
2,533,454,151 | PowerToys | Workspaces Does Not Span Virtual Desktops | ### Microsoft PowerToys version
0.84.1
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
General, Workspaces
### Steps to reproduce
I love the new workspaces concept, but it does not span virtual desktops. This to me seems like a bug as it severely limits the usefulness of this utility. Would also be nice to have it remember the open tabs of the browser and what web URLs they had open.
### ✔️ Expected Behavior
Workspaces to record and use all virtual desktops.
### ❌ Actual Behavior
Workspaces only records the active virtual desktop.
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Product-Workspaces | low | Critical |
2,533,520,858 | go | x/tools/go/analysis/passes/printf: suggest Printf calls without args to use Print | ### Go version
1.23.1
### Output of `go env` in your module/workspace:
```shell
not relevant
```
### What did you do?
Run the "printf" pass with `go vet` or `golangci-lint`
### What did you see happen?
It doesn't detect anything for `fmt.Printf("some message")`.
### What did you expect to see?
Maybe it should suggest to rewrite `fmt.Printf("some message")` => `fmt.Print("some message")`, because it doesn't use any argument. | NeedsInvestigation,Tools | low | Major |
2,533,523,536 | TypeScript | Potential memory leak or dead recursive during auto completion | ### 🔎 Search Terms
memory leak auto completion
### 🕗 Version & Regression Information
It seems to occur since the first version that could compile this code.
**Please forgive me for providing such a huge source code, as its complexity has to be kept to cause noticeable memory consumption rate (about 30MB/s, at least).**
### ⏯ Playground Link
https://www.typescriptlang.org/play/?#code/C4TwDgpgBAShDGB7AKuaBeAUFHUA+UA5ACICWATgsABKnCHa4GHIQC2YANgIbAQCyveAAsGufEQBiEXgFdKg4CLFMiAYUSdE5RcsY5mAeTUwVBogDkI87pyvAA7toDWangGd3pAGYgzEwisbOwhHF2JQqn9mNVl3YEQ2BkxQSCgAQXhgVDSscWZiRAtEYGFSADsAc2j1TlJ4ZxrCAGUHUkgmgGkIP30A1gAPej7mZuBucmB0sDAmscQwadmRonmwZG53RpXCWPjE5NToQ3IAEwhyACEQQTAodCgAbz7WDh4+XWEALiJqbVIAF6IcrjTiEAIANQuwHqtnBoyQlHhRBg3HKpwOfWkcgUQm+v3+QJBcMh0NhYICzUREGRhHSlG4tNR6Mx4g0Wh0eJ+hD+5EBwNBtKhk3JtKp2hpAXpMiZaIxST6xhg3N5-OJFOYwph8BJzGljICABkIFVSrKWQrxEFyLZ7E5yK4PF5fCrCQLdUQtaLKdTzfL-NbbaF7c4Inwsq6+UTBaSRTqNatfVKGX7MQBfTApNBQAAK7QgdXKEEuslInHO5DGvAgAB4AFLuYH3J5pgB8zee4gxRZ+DeBmDTUAAZFAABSEShISrlOikYHgiBDE2ndxQZw9RDeKB98p9AD8Lb6P074lPk8Q09nwJrMCgi746NXcCQOQgrdHfVPZ4QiB+ME-p4AJQ-HmkCFhAz4XjOMLAiWZYVlWfA1gBX44Duw5PChqHiOel4weUf5YV+aYADREae-7YTgrYAWmgHDn047cFkc7lAuS6PmuG5bjuWEHo8GbYce5HiMx+E1ukd4cSuGRZK+74iahYk-Okik4MBub5uBmT4XB5YXIhtZqae6EjieVEWVAYmsSpxniKRdm4Kplm4DRFl0Qx4jjkWQzsQ+MnriAm7bo2u7YfxgmocJLk4D5wA1hYUn+au8R8lUADaAC674AHR5eUQw-OleU5RYmUaaBBYVMWpb6ZW4xIaZTxQHFPyJW25EeUOjGEBUfDkOQshgPQSXLqugXBbx4WHkJmExVAfUXINw01gAkqNnGpRUlRZQp83iCVfVFSVq2ZcZFVadVekIQ1tZNY8C0gktQ3AD860de59HdV5E7VgA+nUbB0H5Y1cUFPGhXxM1RXNMU2nwAOkED8W3veoPlLIbAAEYXO+8MQH+F1gVdtU3dW9ahRhD344jyN-lAH1UV1PUwmwECILII1o5xE0Q-200CVh0Uxaz7OczeG0yRj2O46OouEyBl1FtdBm3RTTZmVAosc69sAM25TNfT1wJ-Ut2ggzz3EhfzqERULsMuSbZvkDWhiSylwBpTt2UfvtuAlUtx15YYZ3zUTVXK6Tqvk-dUBOwN2g-G7jPYczP0VAAbhc7iStzAVW1NtvQ1+wsuZn2e1uteerljiCaDI5Tvpnb3h+BKv1THlOa+X5A529+udUbP0mtwWOcBApwW-n4PW2FReC7N5mWSPY8TzWACi7tQLX9dou+Jo-Ovrck-B0eNV3zUr+PpyHwPn2ebg45gJQf3nDwfhb7zs9QwvMNLxZz8ICvwLNwEANYcxb2ljjcg75AEgWPpHU+Hdz4a2aoA4B78QJ30Ng-HAT9EDxAwaAqe40C6QwFpFEuDtLJgAIcAIhYCIHVxapjaBsC6HwMVsTRBdVDLq3KFTKAtDCFv1AVglOqE06P0IMI+hoiP7MK-oXL8dtF7GVkQw8BkDWGy1kZwzS3CapIL4bHDR8jxEG1TkPaR6CHDcDoH9bwlAIAAggO4EhYNJrkPnpQ08pcaEvzsQ4pxEAXFuK4RHCAAB1exwBJDONce3ExoUSJEFsbExxCS3GEEsZI6xeDCDeEQPAOIHilHeJUcXPx1CLJFJKe4Gskgt473HnvUcdS4g-EkAgoxvC1axw6e4Lp2CrGZiOAYyJkE8KsSSWrHMlwdypMglvSCr5UnoQeAJdsmy+jdgJhMtuUd
kG1nmU1HcNFBwjlHMs5hJAKBUFoMMcQqjcA-HHOQRApAylkJthQ8i-iXIfNIBLZh6UtpVEyhIdKUCLipJheQOFOiEUsJluQH2jkDp5SBYRP2PSpnQRmUcvhGLcCnJSSSnAkEyJ+3ELHbFetQXwsRai5l0DWUXEhQeW8f50oAAZIW+JcrkqxuDULvM+X9Tc3gc5c2kqQmeyjLIvJcgC-aQLJXeGlaEEFcqoDQqRey5FTKUXQPRTS1CJV6WUXNVAPFP5pmwSJXMhZKTYA-nWRfamEqpUyvphIlydF8ljhubqlg7AuDVk+LSbEwB5ACDxP4Z5RA+BvGrN8hVFSqLKqoqqlyKaI1IWQFvcF3s9o2pwCVfNnAg45WQKHGldqpwEsdcY51iy3VIA9agh6Vbqw-CLf6pm9sBJBuuT+LeYbU0fATQEQMIQwgOjDFEKGhBSiUHcMITQk9P4-LnhZbNajzVrrcZu8sNYi3MPhbtX25aoCVuEOumtdaSWNqglecoszyZkuBEs91s9BHHo3Vu-tA8SWAdPacAAJOe7RqKy3lsA-2l9ETwL4vfZ+pC37yi-s7f+zW4HgNQAHcKodi8pF4JDclTxW4TgVmuLcFd2gKx-Sxgo3V5TflZqqYemlTGLgsbAW7ZhtGLj0e4GAdKkEzW3r4+QJOyGDnVTQ7pJ1X6XU-o7YgLtAjNayYE0nEZllBWvJbKOyjoMv4iauDccTK6KjnAGOmrxnG-kqpqTFezi4YOXqRfBm1pBTgDCQ+a19DqP2qcw+p7DmntOCM80Fojhn3LDvI8G8dtzXgFvjUoUQK62alEQNuxRu7FIHr-iS-Lm7Tg1n4FvAAjBIAAzBIAArH581lXCs-H4ApyqqH7XNvC62tT7aqV4eap1m+UBauDtTilsz6XQ3sm0FGvLoQqtOb5nurjv8LK5sspNmrW8AAsEhju8okAANnazSyb3XetKwggN9DEWTlRZw1p8bD07vTaS6Rv+qWx1IAnZl942XlABBjXG1b01CCVGceUP6bBNiNB3RmlzPjFL7YsvD0JiPkdbBrAAcWaXXVpjcb3loJ84H4ROHuGOU4S4bkXRt-tjrjk0SOUe07+3NsjC3ge3Kh7iHLtJluclFyupAsgQSbe-i5MrVD-4xWlyCGsahYNsMpza1Xus1D08mc9lTzO3us9w7HXXPwNezckfN0VQPEATuF+D3LsPzjhgSOQOXirXOWWx1Rd3VBtA1mIBO5oq1JDIFpJ0dIAAtdetJ0gx-j7SS4MBVrNE6LSQwMBLg5O1+a93PxiAG-602l7JvwHvZi19qAgesiJygKHm3xE7ffUfuZzihBncw6Lr9GC3vM2+72+5wFvA5w6qo-Cm7ft4ZzhxQ2lDSmjdM76SN11Y26Xj9-HrFvp5jM4GPIDzvMldiaBWwmtbBWivsZKwrschAtAOAuIPjH9-dv7V7KFdKhBJuEEhcwqdgQOdhSlAPxKAV+E-hcDWIaFvPqiyiarCogWitehAVRCVFAXJnqiVIaPWrergKFoNhhqbhvmzp6lAJgT8LAXvgGqAf7n7JgTAXAfCplKgfgdhBgYgM-lgcVHlLgWgeIIQRXmvizqQebuQZQVANQSRn7KlvNOOENJAF7mjs5ttuaorp-rPD-n-gAbqkAVACAewWAaPuwYodAbAYygasgYaqwdJkYRwXlGYTwTgXgfYepEvkWIzi2iISQRppvuQU4VQbzjagfjSvQTak4TWAAKrMFIq2Ez5uF3qOEzAXA1pRGuFuFCHG4+FV5m6faxyBFQAxE0H7R0RY7cYmYjr24n6rhn4ci96VKEBIDlBFhZATyv5qGY5ubK4uTNGtF8DVYa7MItINwJHzR9FW6l7L7l7ZFkyiF+FkHdpQB9FUATxW7BGt787VGLZUaEBKiJq4AHiECLiQBtE35UYcadGVIf48Z+wnGrHVabzMIlpsG3oBwDBgA1rrwZExRZGr5zG+HRb+FLH3FnG3wlE4ChFH4C6O63Jzp
2guBuCbDOgfzMDwnBjhCRBZAHE4BHGgkDEdGlYVFK4kr4lrxPG6pXp2E2rvGfHYF5TfFTGeEr7eEAm5FiH5HkFklTabwQkMxt6MQ1FED7ErqUARrwC5y37o5XGngaHiDhFfhik8ASmT6gxgqezbSpIlrxEF40q0k1owA-EuR-GslnyAkfaxaaxKnMT7K3h8lQmmbbGC6hoimw7AicAgB-STiEnv6hHykmEWTumemTiuyk67wU5oGTjyYhYeFPYzH-FmnskLHiFLFBlekIAGb2kCleRCl7EmC0jokLqOjIk+ComWDWA2jzohhLrYl5aFYFg+lKrEnVI9EHb1mcBHbPEalVBjExRsDtn3YxmKbMnxmmnHJJlAmLE6YTYDm-ZZlbHt4UY7GgyBAVlBhFlImeClkFlrlVmYke44nGGP6jwFjuIqFbZEk3Hlbmo8A4ycANIWG6ovHUnmoYEnnVr0k5T8FDl9bTFvqzGJlYYWm163mnlBHzkA4wkTp7AJCWh971KwXpmjlsTnny5NlXkknmoIWJBIX-msTq7FrdmVC9mAo-iTE-mPZeFDY5FAU14W5xCIW4SDbrEQVK7H7Lld4wWsjwUMU4VMXvp-RgATDcBJCoU+5dF+4BlUTYVsC4VhaCXCVsAEXMKQRnA1glqpIy7ODlBcGNwkU0KKXkWL7Dlxl4VjnEr4FAVoFUpoH0X7CyX8X4QKU2hsAsUkoyG2785jLZi-lFg6QJnjm0U6Rbw6RrL-qbJtgdi7LAj7K+W9JslYYYTnIDgYSjjBW3JuD1Co4xC8VwW4nJoTCVChCNnXEOmtlfjjDkBFXxQXq6rpSeyyAQCQoEDqlezNV6rGqdVWEsH6VfiVqFWhDBa-Gxn+XmVqwkpWXmo6TUo0qxyVXVUgaWEIFdXLVxHGFFr9p8oCrGQeWQlHiOmLljjpVLZ1ANBi65WHlHHzWhAapaqyoXF37oV+mVFgYDX0I+rapCZ1UrVsrWHIEvl6l5QfW6yGBMkQCjXUUJXV7TW17XXvWaq+pQDJy7X77ZmPzHW7GtD5iXVEA4yVAVAlU7bPWH5SXYR40VA1iXBwENVNVQralQo-VIGM1oq9XoF5Tk0ERQCXBg0Q3EHISWXvZoHTW2XkEc0-BU1LW-XM02HGFU3i1bX8nmoo32Ro14IY0rlY0dAroc23UyqE3D45qk2oQ63A2U2xGrUW1S1xGs0OE5TA3i080sRjXr4aYw2xwm0I2DVc0bGo0LmMTq1d6a00grrLj60SUj7lVUTLgbzU2DS00tX00tXS1-U9W6l+wBzoiHyO0AXjloGTW3rC34GxzLi3yS1M3dUV2oqcpQCbyHwK3E2oTK24BlWA4B2n5B043HHoi63FViVD7h2G2R3YTLg93xQUlUbwFW2W3l1V020WpA2e26zrzZ0BV8JBVZCWmXzd32010+0q1+1eRt21HdC9Cw6BRh2lXlFD2oSBQ1idCa4XCvHlolSBQ1qdBGmWQ9K82vYTmpJu3kGv1QD32sUtmt1ZATqrTlBgCcyDBPJ94VDQP0J8C+R91v6ynNn+nX1fgIOcx-TIM1WEVexz0VX3hDW4ojVO2Q2AXQ0b2144NIOkOJYgOYNgPABh6VVTAzBigJCLBcMrpCUNDcBFUX3oMYUtkkoCPOBCMnKEPbTEOnhgDOCVAgQr3O3zHRb-1LGSPSMWLGQt2jpH3qAXVS65V-TWTzioMylJpiOYMkoyVmOUPKVPlEXyOiQRhQD64UWGLf2V7r3ACb0PT2PmOc3W5N17UH3o3gMZXGOw5BOUPOUiUiPWMN0KmnhxNOVCUuVONUaqXVYaVQBaU6UOB6Vp37SZMiVGXkMmU+M0U0P+O17pOsQJOuUeN73N3zbeVpBxUxJ0DxKhKJI-19D504An0TrpLBJZJnnMAaJBL0IhJhLuIzVoSUwRWYDbLUJ7Kxl81YSJVmSKTpSBSPRAM9CZRf5oO+IXKpWrpIySkPXSn7hVL+6izeaUm+alM
VU3NkOoQ9I9NxJZJ820Un0BNayfNMMkaA6rpvUiMaGpNw0vMT003tWtXbRIvJ3M0A0uT9VVVe3IDnSxm-N9NhLbPmrDP7RAskpzVvWLXfWV1T10scrrVQCbX8qK0xQo0Qtw2j3QvNmwtvWj2hll1Gq0sz2mryMlQ72g1hz4uxKEsDO+PV5Auw18sSs+0csPonpbrctiOwvqtAZnq1VT5vNga6tfMWQ-Myv-M-2As9DAsEblggZ74Qt-6WM-xlXGSHa1bMINYEDNYEBtbvNUT5aDnDUmUEuWvyvtqKuxw-Yzbsujr95AKAzAwusG22PzQ0xJsowP0wIBvYT4wL7GnSu9Phu1ORs2u14ZtIx0B+pxuZjNHxBQAZzNijgCRWSrhxXEGAQ5TBPjiZVnVdt7I5SYBAA
### 💻 Code
```ts
type RecoType =
| 'DirectHit'
| 'TemplateMatch'
| 'FeatureMatch'
| 'ColorMatch'
| 'OCR'
| 'NeuralNetworkClassify'
| 'NeuralNetworkDetect'
| 'Custom'
type ActType =
| 'DoNothing'
| 'Click'
| 'Swipe'
| 'Key'
| 'Text'
| 'StartApp'
| 'StopApp'
| 'StopTask'
| 'Custom'
type OrderByMap = {
TemplateMatch: 'Horizontal' | 'Vertical' | 'Score' | 'Random'
FeatureMatch: 'Horizontal' | 'Vertical' | 'Score' | 'Area' | 'Random'
ColorMatch: 'Horizontal' | 'Vertical' | 'Score' | 'Area' | 'Random'
OCR: 'Horizontal' | 'Vertical' | 'Area' | 'Length' | 'Random'
NeuralNetworkClassify: 'Horizontal' | 'Vertical' | 'Score' | 'Random'
NeuralNetworkDetect: 'Horizontal' | 'Vertical' | 'Score' | 'Area' | 'Random'
}
type PipelineBuilderState<Json = {}> = {
done: Json
} & ('recognition' extends keyof Json
? {}
: {
recognition<R extends RecoType>(
reco: R
): PipelineRecognitionBuilderState<
Json & {
recognition: R
},
R
>
}) &
('action' extends keyof Json
? {}
: {
action<A extends ActType>(
act: A
): PipelineActionBuilderState<
Json & {
action: A
},
A
>
}) &
('next' extends keyof Json
? {}
: {
next<N extends string[]>(...nxt: [...N]): PipelineBuilderState<Json & { next: N }>
}) &
('interrupt' extends keyof Json
? {}
: {
interrupt<I extends string[]>(
...int: [...I]
): PipelineBuilderState<Json & { interrupt: I }>
}) &
('rate_limit' extends keyof Json
? {}
: {
rate_limit<R extends number>(rate: R): PipelineBuilderState<Json & { rate_limit: R }>
}) &
('timeout' extends keyof Json
? {}
: {
timeout<R extends number>(time: R): PipelineBuilderState<Json & { timeout: R }>
}) &
('on_error' extends keyof Json
? {}
: {
on_error<O extends string[]>(
...err: [...O]
): PipelineBuilderState<Json & { on_error: O }>
}) &
('inverse' extends keyof Json
? {}
: {
inverse<I extends boolean>(inv: I): PipelineBuilderState<Json & { inverse: I }>
}) &
('enabled' extends keyof Json
? {}
: {
enabled<E extends boolean>(en: E): PipelineBuilderState<Json & { enabled: E }>
}) &
('pre_delay' extends keyof Json
? {}
: {
pre_delay<P extends number>(pre: P): PipelineBuilderState<Json & { pre_delay: P }>
}) &
('post_delay' extends keyof Json // <--- here two post_delay are provided, which is the root cause
? {}
: {
post_delay<P extends number>(post: P): PipelineBuilderState<Json & { post_delay: P }>
}) &
('post_delay' extends keyof Json // <---
? {}
: {
post_delay<P extends number>(post: P): PipelineBuilderState<Json & { post_delay: P }>
}) &
('pre_wait_freezes' extends keyof Json
? {}
: {
pre_wait_freezes: PipelineWaitFreezeBuilderState<Json, 'pre_wait_freezes'>
}) &
('focus' extends keyof Json
? {}
: {
focus<F extends boolean>(focus: F): PipelineBuilderState<Json & { focus: F }>
})
type PipelineRecognitionBuilderState<PBJson, Reco extends RecoType, Json = {}> = {
done: PipelineBuilderState<PBJson & Json>
} & (Reco extends 'DirectHit'
? {}
: ('roi' extends keyof Json
? {}
: {
roi<R extends [string] | [number, number, number, number]>(
...roi: R
): PipelineRecognitionBuilderState<
PBJson,
Reco,
Json & { roi: R extends [number, number, number, number] ? R : R[0] }
>
}) &
('roi_offset' extends keyof Json
? {}
: {
roi_offset<R extends [number, number, number, number]>(
...roi: R
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { roi_offset: R }>
})) &
(Reco extends 'TemplateMatch' | 'FeatureMatch'
? 'template' extends keyof Json
? {}
: {
template<T extends string[]>(
...templ: [...T]
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { template: T }>
}
: {}) &
(Reco extends 'TemplateMatch' | 'NeuralNetworkDetect'
? 'threshold' extends keyof Json
? {}
: {
threshold<T extends number[]>(
...thres: [...T]
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { threshold: T }>
threshold$<T extends number>(
thres: T
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { threshold: T }>
}
: {}) &
(Reco extends keyof OrderByMap
? 'order_by' extends keyof Json
? {}
: {
order_by<O extends OrderByMap[Reco]>(
order: O
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { order_by: O }>
}
: {}) &
(Reco extends keyof OrderByMap
? 'index' extends keyof Json
? {}
: {
index<T extends number>(
idx: T
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { index: T }>
}
: {}) &
(Reco extends 'TemplateMatch'
? 'method' extends keyof Json
? {}
: {
method<M extends 1 | 3 | 5>(
method: M
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { method: M }>
}
: {}) &
(Reco extends 'ColorMatch'
? 'method' extends keyof Json
? {}
: {
method<M extends 4 | 40 | 6>(
method: M
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { method: M }>
}
: {}) &
(Reco extends 'TemplateMatch' | 'FeatureMatch'
? 'green_mask' extends keyof Json
? {}
: {
green_mask<G extends boolean>(
mask: G
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { green_mask: G }>
}
: {}) &
(Reco extends 'FeatureMatch' | 'ColorMatch'
? 'count' extends keyof Json
? {}
: {
count<C extends number>(
count: C
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { count: C }>
}
: {}) &
(Reco extends 'FeatureMatch'
? 'detector' extends keyof Json
? {}
: {
detector<D extends 'SIFT' | 'KAZE' | 'AKAZE' | 'BRISK' | 'ORB'>(
det: D
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { detector: D }>
}
: {}) &
(Reco extends 'FeatureMatch'
? 'ratio' extends keyof Json
? {}
: {
ratio<R extends number>(
ratio: R
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { ratio: R }>
}
: {}) &
(Reco extends 'ColorMatch'
? 'method' extends keyof Json
? ('lower' extends keyof Json
? {}
: Json['method'] extends 4 | 40
? {
lower<L extends [number, number, number][]>(
...lower: [...L]
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { lower: L }>
}
: {
lower<L extends [number][]>(
...lower: [...L]
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { lower: L }>
}) &
('upper' extends keyof Json
? {}
: Json['method'] extends 4 | 40
? {
upper<L extends [number, number, number][]>(
...upper: [...L]
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { upper: L }>
}
: {
upper<U extends [number][]>(
...upper: [...U]
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { upper: U }>
})
: {}
: {}) &
(Reco extends 'ColorMatch'
? 'connected' extends keyof Json
? {}
: {
connected<C extends boolean>(
conn: C
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { connected: C }>
}
: {}) &
(Reco extends 'OCR'
? 'expected' extends keyof Json
? {}
: {
expected<E extends string[]>(
...exp: [...E]
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { expected: E }>
}
: {}) &
(Reco extends 'NeuralNetworkClassify' | 'NeuralNetworkDetect'
? 'expected' extends keyof Json
? {}
: {
expected<E extends number[]>(
...exp: [...E]
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { expected: E }>
}
: {}) &
(Reco extends 'OCR'
? 'replace' extends keyof Json
? {}
: {
replace<R extends [string, string][]>(
...exp: [...R]
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { replace: R }>
}
: {}) &
(Reco extends 'OCR'
? 'only_rec' extends keyof Json
? {}
: {
only_rec<O extends boolean>(
rec: O
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { only_rec: O }>
}
: {}) &
(Reco extends 'OCR' | 'NeuralNetworkClassify' | 'NeuralNetworkDetect'
? 'model' extends keyof Json
? {}
: {
model<M extends string>(
model: M
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { model: M }>
}
: {}) &
(Reco extends 'NeuralNetworkClassify' | 'NeuralNetworkDetect'
? 'labels' extends keyof Json
? {}
: {
labels<L extends string[]>(
...label: [...L]
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { labels: L }>
}
: {}) &
(Reco extends 'Custom'
? 'custom_recognition' extends keyof Json
? {}
: {
custom_recognition<C extends string>(
reco: C
): PipelineRecognitionBuilderState<PBJson, Reco, Json & { custom_recognition: C }>
}
: {}) &
(Reco extends 'Custom'
? 'custom_recognition_param' extends keyof Json
? {}
: {
custom_recognition_param<C extends Record<string, unknown>>(
param: C
): PipelineRecognitionBuilderState<
PBJson,
Reco,
Json & { custom_recognition_param: C }
>
}
: {})
type PipelineActionBuilderState<PBJson, Act extends ActType, Json = {}> = {
done: PipelineBuilderState<PBJson & Json>
} & (Act extends 'Click' | 'Custom'
? 'target' extends keyof Json
? {}
: {
target<T extends [true] | [string] | [number, number, number, number]>(
...target: T
): PipelineActionBuilderState<
PBJson,
Act,
Json & { target: T extends [number, number, number, number] ? T : T[0] }
>
}
: {}) &
(Act extends 'Click' | 'Custom'
? 'target_offset' extends keyof Json
? {}
: {
target_offset<O extends [number, number, number, number]>(
...offset: O
): PipelineActionBuilderState<PBJson, Act, Json & { target_offset: O }>
}
: {}) &
(Act extends 'Swipe'
? 'begin' extends keyof Json
? {}
: {
begin<B extends [true] | [string] | [number, number, number, number]>(
...begin: B
): PipelineActionBuilderState<
PBJson,
Act,
Json & { begin: B extends [number, number, number, number] ? B : B[0] }
>
}
: {}) &
(Act extends 'Swipe'
? 'begin_offset' extends keyof Json
? {}
: {
begin_offset<B extends [number, number, number, number]>(
...offset: B
): PipelineActionBuilderState<PBJson, Act, Json & { begin_offset: B }>
}
: {}) &
(Act extends 'Swipe'
? 'end' extends keyof Json
? {}
: {
end<E extends [true] | [string] | [number, number, number, number]>(
...end: E
): PipelineActionBuilderState<
PBJson,
Act,
Json & { end: E extends [number, number, number, number] ? E : E[0] }
>
}
: {}) &
(Act extends 'Swipe'
? 'end_offset' extends keyof Json
? {}
: {
end_offset<E extends [number, number, number, number]>(
...offset: E
): PipelineActionBuilderState<PBJson, Act, Json & { end_offset: E }>
}
: {}) &
(Act extends 'Key'
? 'key' extends keyof Json
? {}
: {
key<K extends number[]>(
...key: [...K]
): PipelineActionBuilderState<PBJson, Act, Json & { key: K }>
}
: {}) &
(Act extends 'InputText'
? 'input_text' extends keyof Json
? {}
: {
input_text<T extends string>(
text: T
): PipelineActionBuilderState<PBJson, Act, Json & { input_text: T }>
}
: {}) &
(Act extends 'StartApp' | 'StopApp'
? 'package' extends keyof Json
? {}
: {
package<P extends string>(
pkg: P
): PipelineActionBuilderState<PBJson, Act, Json & { package: P }>
}
: {}) &
(Act extends 'Custom'
? 'custom_action' extends keyof Json
? {}
: {
custom_action<C extends string>(
act: C
): PipelineActionBuilderState<PBJson, Act, Json & { custom_action: C }>
}
: {}) &
(Act extends 'Custom'
? 'custom_action_param' extends keyof Json
? {}
: {
custom_action_param<C extends Record<string, unknown>>(
param: C
): PipelineActionBuilderState<PBJson, Act, Json & { custom_action_param: C }>
}
: {})
type PipelineWaitFreezeBuilderState<
PBJson,
Key extends 'pre_wait_freezes' | 'post_wait_freezes',
Json = {}
> = {
done: PipelineBuilderState<
PBJson & {
[key in Key]: Json
}
>
} & ('time' extends keyof Json
? {}
: {
time<T extends number>(
time: T
): PipelineWaitFreezeBuilderState<PBJson, Key, Json & { time: T }>
}) &
('target' extends keyof Json
? {}
: {
target<T extends [true] | [string] | [number, number, number, number]>(
...target: T
): PipelineWaitFreezeBuilderState<
PBJson,
Key,
Json & { target: T extends [number, number, number, number] ? T : T[0] }
>
}) &
('target_offset' extends keyof Json
? {}
: {
target_offset<O extends [number, number, number, number]>(
...offset: O
): PipelineWaitFreezeBuilderState<PBJson, Key, Json & { target_offset: O }>
}) &
('threshold' extends keyof Json
? {}
: {
threshold<T extends number>(
thres: T
): PipelineWaitFreezeBuilderState<PBJson, Key, Json & { threshold: T }>
}) &
('method' extends keyof Json
? {}
: {
method<M extends 1 | 3 | 5>(
met: M
): PipelineWaitFreezeBuilderState<PBJson, Key, Json & { method: M }>
}) &
('rate_limit' extends keyof Json
? {}
: {
rate_limit<R extends number>(
rate: R
): PipelineWaitFreezeBuilderState<PBJson, Key, Json & { rate_limit: R }>
})
const v = ({} as PipelineBuilderState).action('Click').done.
```
### 🙁 Actual behavior
When requesting auto-completion on the last row (after the dot), heap usage rises quickly.
I've found that it is caused by the mistaken duplication of the `post_delay` property state-transition edge.


### 🙂 Expected behavior
The completion should either fail or succeed quickly. The duplicate edge shouldn't affect completion.
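As a rough illustration of why a single duplicated edge can be so expensive (this is an analogy, not how the TypeScript checker actually works): if completion has to consider every distinct ordering of builder calls, duplicating one edge multiplies the number of paths to explore.

```python
from itertools import permutations

def count_orders(keys):
    # each distinct ordering of setter calls is one path to explore;
    # permutations treats the duplicated key as a distinct element
    return sum(1 for _ in permutations(keys))

base = ["recognition", "action", "next", "post_delay"]
print(count_orders(base))                   # 24 paths
print(count_orders(base + ["post_delay"]))  # 120 paths with the duplicate edge
```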
### Additional information about the issue
_No response_ | Needs Investigation | low | Critical |
2,533,539,916 | PowerToys | Everything freezes - PresentationFramework issue | ### Microsoft PowerToys version
0.81.1.0
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
General
### Steps to reproduce
[2024-09-18.txt](https://github.com/user-attachments/files/17043491/2024-09-18.txt)
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
Everything freezes (two external display setup connected to closed Lenovo laptop with USB-C docking station) until I open the laptop.
### Other Software
Google Chrome version 128.0.6613.138 | Issue-Bug,Needs-Triage | low | Minor |
2,533,543,686 | langchain | OpenAI Chat isn't forward compatible with OpenAI API | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
requirements.txt:
```
langchain==0.3.0
langchain-core==0.3.1
langchain-openai==0.2.0
openai==1.46.0
httpx==0.27.2
```
test.py:
```python
from langchain_openai import ChatOpenAI
from langchain_openai.chat_models.base import _convert_dict_to_message, _convert_message_to_dict
from langchain_core.messages import HumanMessage, BaseMessage
import httpx
from pydantic import SecretStr
class MockClient(httpx.Client):
def send(self, request, **kwargs):
# !!! No extra_request field
print(f"Request: {request.content.decode()}")
message = {
"role": "assistant",
"content": "answer",
"extra_response": "extra_response",
}
# !!! No extra_response field
print(f"Response message dict: {_convert_dict_to_message(message)}")
return httpx.Response(
request=request,
status_code=200,
headers={"Content-Type": "application/json"},
json={"choices": [{"index": 0, "message": message}]},
)
llm = ChatOpenAI(api_key=SecretStr("-"), http_client=MockClient())
request_message = HumanMessage(content="question", extra_request="extra_request")
# !!! No extra_request field
print(f"Request message dict: {_convert_message_to_dict(request_message)}")
output = llm.generate(messages=[[request_message]])
response_message: BaseMessage = output.generations[0][0].message
# !!! No extra_response field
print(f"Response message: {response_message}")
```
```sh
> python test.py
Request message dict: {'content': 'question', 'role': 'user'}
Request: {"messages": [{"content": "question", "role": "user"}], "model": "gpt-3.5-turbo", "n": 1, "stream": false, "temperature": 0.7}
Response message dict: content='answer' additional_kwargs={} response_metadata={}
Response message: content='answer' additional_kwargs={'refusal': None} response_metadata={'token_usage': None, 'model_name': None, 'system_fingerprint': None, 'finish_reason': None, 'logprobs': None} id='run-7ba969eb-8a92-4c6d-9fc6-6a4a2f6d4bf6-0'
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The core of the problem is that the OpenAI adapter ignores additional fields in request messages and response messages.
It happens in the [_convert_*](https://github.com/langchain-ai/langchain/blob/master/libs/partners/openai/langchain_openai/chat_models/base.py#L96-L332) family of methods, where only a certain subset of fields from the input type is converted.
There are two implications of this:
1. The forward compatibility of `langchain_openai` with future versions of the OpenAI API is undermined. Suppose OpenAI introduces a new field in the **response** `assistant` message, e.g. one that reflects the thought process in the latest GPT-4 o1. `langchain_openai` users would have to wait for the library to catch up with the API change and then migrate their apps to the new version of the library. The same goes for the **request** messages. Curiously, forward compatibility is supported for the **top-level request parameters**, which can be provided via the `extra_body` parameter of `ChatOpenAI`.
2. Any custom extensions of the OpenAI API are impossible, since the library strips all such extensions.
Note that the `openai` library itself is designed to be forward compatible. Additional fields not declared in the request and response schemas are passed through unperturbed. The request types are defined via `TypedDict` and the response ones as pydantic `BaseModel`s with extra fields allowed.
It would be great to achieve the same in `langchain_openai` as well.
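To make the distinction concrete, here is a minimal sketch in plain Python (hypothetical helper names, not the actual `langchain_openai` converters): a fixed-field converter drops unknown keys, while a pass-through converter keeps them, which is what forward compatibility requires.

```python
# Known fields of the message schema; anything else is an "extra" field
# that a forward-compatible converter should preserve.
KNOWN_KEYS = {"role", "content", "name"}

def convert_fixed(message: dict) -> dict:
    # Mirrors the reported behavior: only a known subset survives.
    return {k: v for k, v in message.items() if k in KNOWN_KEYS}

def convert_passthrough(message: dict) -> dict:
    # Keeps unknown fields unperturbed, like the openai client itself.
    known = {k: v for k, v in message.items() if k in KNOWN_KEYS}
    extra = {k: v for k, v in message.items() if k not in KNOWN_KEYS}
    return {**known, **extra}

msg = {"role": "assistant", "content": "answer", "extra_response": "x"}
assert "extra_response" not in convert_fixed(msg)
assert convert_passthrough(msg)["extra_response"] == "x"
```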
### System Info
```
aiohappyeyeballs==2.4.0
aiohttp==3.10.5
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.4.0
attrs==24.2.0
certifi==2024.8.30
charset-normalizer==3.3.2
distro==1.9.0
frozenlist==1.4.1
h11==0.14.0
httpcore==1.0.5
httpx==0.27.2
idna==3.10
jiter==0.5.0
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.3.0
langchain-core==0.3.1
langchain-openai==0.2.0
langchain-text-splitters==0.3.0
langsmith==0.1.121
multidict==6.1.0
numpy==1.26.4
openai==1.46.0
orjson==3.10.7
packaging==24.1
pydantic==2.9.2
pydantic_core==2.23.4
PyYAML==6.0.2
regex==2024.9.11
requests==2.32.3
sniffio==1.3.1
SQLAlchemy==2.0.35
tenacity==8.5.0
tiktoken==0.7.0
tqdm==4.66.5
typing_extensions==4.12.2
urllib3==2.2.3
yarl==1.11.1
``` | 🤖:bug,stale,investigate | low | Critical |
2,533,593,224 | neovim | ansi color code highlight util | # Problem
Nvim is a pretty good pager for "man", but can't easily replace `less -R` for other (non-manpage) terminal output containing ANSI color codes.
Programs like the kitty terminal allow configuring any program, such as `less -R` or `nvim`, for displaying terminal "scrollback". `less -R` colorizes ANSI color codes, but Nvim doesn't.
Currently, this is the `kitty.config` setting I use:
```
scrollback_pager nvim --clean --cmd 'set eventignore=FileType' +'%s/\e\[[0-9;]*m//g' +'set nomodified readonly' +'$' -
```
but that removes the escape codes instead of colorizing them.
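As a sketch of what such a wrapper could do instead of stripping, here is a small stdlib-only Python parser (an illustration, not Nvim's API) that splits a line into (text, active SGR codes) segments, which a colorizer could then map to highlight groups:

```python
import re

# Matches SGR escape sequences like "\x1b[31m" or "\x1b[0m".
SGR = re.compile(r"\x1b\[([0-9;]*)m")

def parse_sgr(line):
    """Split a line into (text, active_codes) segments instead of
    discarding the escape sequences."""
    segments, codes, pos = [], [], 0
    for m in SGR.finditer(line):
        if m.start() > pos:
            segments.append((line[pos:m.start()], tuple(codes)))
        params = m.group(1)
        # "" and "0" both mean "reset all attributes"
        codes = [] if params in ("", "0") else params.split(";")
        pos = m.end()
    if pos < len(line):
        segments.append((line[pos:], tuple(codes)))
    return segments

print(parse_sgr("plain \x1b[31mred\x1b[0m done"))
# -> [('plain ', ()), ('red', ('31',)), (' done', ())]
```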
# Expected behavior
- ✅ document `nvim_open_term` snippet
- provide a `nvim_open_term` wrapper as command/function that makes it easy to colorize a buffer containing ANSI escape codes.
- https://github.com/folke/dot/blob/39602b7edc7222213bce762080d8f46352167434/nvim/lua/util/init.lua#L68-L93
- https://github.com/neovim/neovim/issues/30415#issuecomment-2368519968
- expose `:terminal highlights` as highlight groups. https://github.com/neovim/neovim/pull/7406#issuecomment-337363444
## Reference
previous implementations:
- https://github.com/neovim/neovim/issues/5054 and other posts mention ye olde AnsiEsc.vim
- https://github.com/lucc/nvimpager
| enhancement,defaults,input,complexity:low,highlight,pager | low | Major |
2,533,647,335 | PowerToys | Color Picker closes automatically | ### Description of the new feature / enhancement
The color picker closes after a short time. Why? This is very annoying.
### Scenario when this would be used?
Whenever anyone uses the color picker.
### Supporting information
Please don't say use the shortcut. Eff the shortcut. When I open the damn editor, I want it to stay open until I close it. If you're obsessed with keeping it open, then at least provide a setting somewhere that will allow users to override this "feature". | Needs-Triage,Product-Color Picker,Status-Reproducible | low | Minor |
2,533,738,432 | svelte | Update of $state variable overwrites it instead of update | ### Describe the bug
Updating a $state variable overwrites it instead of updating it when the state variable was returned by a function.
As you can see in the second log line, `count2` is not a state proxy anymore.
```
<script>
//App.svelte
import { createStore } from './Stores.svelte.js';
let count = $state({});
let count2 = createStore();
console.log('stores', count, count2);
count = { some: 'value' };
count2 = { some: 'value' };
console.log('stores after', count, count2);
</script>
```
```
//Stores.svelte.js
export function createStore() {
let store = $state({});
return store;
}
```
### Reproduction
https://svelte-5-preview.vercel.app/#H4sIAAAAAAAAA3WRTWrDMBCFrzKIghwINnhpp4WcIcu6C1cZFwVZMtIobRG6e-UfEps0aCFmnt684VNgnVToWPUemG57ZBU7DgPbM_odxsJdURGm2hlvxdg5OGHlQG-NTodkPxhLEEBYbAlPZCxChM6aHnheTLXL5yn5xfF6tikkEMZrgld4cZScWYi7eiOVSVtNzXaLVxjtjMJcma-MuymA72fLcpX3p3NEAGd6rIBfW-WRQ6xvavlM_j8J2o7QPssrihMifFrz7dDCYm_0oVgxSyx7c5adxDOryHqM-xv6Da77J1zc-gPwZ0LeeS1IGr1lBKHRACPDad9HvBbJWz2rqREf1_mIf_bmXW4VAgAA
### Logs
```shell
- stores Proxy(Object) {} Proxy(Object) {}
- stores after Proxy(Object) {some: 'value'} {some: 'value'}
```
### System Info
```shell
Svelte 5.0.0-next.251
```
### Severity
blocking an upgrade | documentation | medium | Critical |
2,533,756,450 | vscode | Walkthrough doesn't truncate steps properly | See "Command Pale."

It should say "Command Pal..." or "Command..." (to align padding with when you click it.

| polish,getting-started | low | Minor |
2,533,767,167 | vscode | "No accounts requested yet..." disabled menu item? | On a new install I see this entry at the bottom of the accounts menu:

On my main instance I don't see a menu item at the bottom, so why do we need/want a disabled item there? | bug,polish,authentication | low | Minor |
2,533,768,014 | flutter | [video_player_android] Regression in version 2.7.2 on Amazon Fire TV Stick 4K Max (2nd Gen 2023) | ### Steps to reproduce
Related to https://github.com/flutter/flutter/issues/155355
Same app same versions of dependencies like in the above issue , upgrading video_player_android from version 2.7.1 to 2.7.2 or 2.7.3 causes following player regression:
Display resolution is 4k (2160p)
Video with resolution less then 1920x1080 displays in it's original resolution with green backdound filling the rest of the screen.
Video with resolution 1920x1080 and up fills the screen.
### Expected results
video_player_android versions prior to 2.7.2 work well, video scales to fill the display.
### Actual results
Video resultion smaller then FHD (1920x1080) shows video in it's original resolution filling the rest of the display with green background, see attached images.
### Code sample
<details open><summary>Code sample</summary>
Very simple implementation, only portion of our app just for reference...
In initState:
```dart
_videoPlayerController = VideoPlayerController.networkUrl(uri);
```
In build:
```dart
Center(
child: AspectRatio(
aspectRatio: _videoPlayerController.value.aspectRatio,
child: VideoPlayer(_videoPlayerController),
),
),
```
In playerTickListener:
```dart
if (!_initialized && _videoPlayerController.value.isInitialized) {
setState(() {
_initialized = true;
_videoPlayerController.play();
});
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>


</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
❯ flutter doctor -v
[✓] Flutter (Channel stable, 3.24.3, on macOS 14.6.1 23G93 darwin-arm64, locale en-US)
• Flutter version 3.24.3 on channel stable at /opt/homebrew/Caskroom/flutter/3.13.6/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (7 days ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/nedim/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• ANDROID_HOME = /Users/nedim/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2023.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
[✓] VS Code (version 1.93.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.96.0
[✓] Connected device (5 available)
• Chromecast (mobile) • 192.168.86.100:5555 • android-arm • Android 12 (API 31)
• AFTKRT (mobile) • 192.168.86.101:5555 • android-arm • Android 11 (API 30)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.6.1 23G93 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.6.1 23G93 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 128.0.6613.138
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| c: regression,e: device-specific,platform-android,p: video_player,package,P3,team-android,triaged-android | low | Major |
2,533,815,344 | react | [React 19] [bug] SVG with dangerouslySetInnerHTML content does not trigger first click | ## Summary
Hi all,
Here is the scenario that we found out while testing with both the latest rc and the beta that works correctly with React 18.
We have an SVG element that does not trigger its first click if it is focusable (or positioned inside a focusable element) that changes some state on focus.
**Steps to reproduce:**
Open the Stackblitz example and open its console
Click 1 time on the triangle svg element
**Expected**:
'svg click' message is logged
**Current**:
no message is logged
(On the second and all subsequent clicks the message is shown as expected - only the first click is not triggered.)
**Here are the stackblitz examples where the issue can be observed:**
rc: https://stackblitz.com/edit/react-vsxt51-w3ktmp?file=app%2Fapp.tsx - not working
beta: https://stackblitz.com/edit/react-vsxt51-ssqptj?file=app%2Fapp.tsx - not working
**And here is how it is working in React 18:**
React 18: https://stackblitz.com/edit/react-vsxt51-xsg1yu?file=app%2Fapp.tsx - working
**Code**:
```
const App = () => {
const [focused, setFocused] = React.useState(false);
const handleFocus = () => {
setFocused(true);
};
return (
<svg
onFocus={handleFocus}
tabIndex={1}
onClick={() => {
console.log('svg click');
}}
viewBox="0 0 512 512"
dangerouslySetInnerHTML={{
__html: '<path d="M256 352 128 160h256z" />',
}}
></svg>
);
};
```
| Type: Bug,React 19 | medium | Critical |
2,533,819,080 | pytorch | torch.compile 100x slower than eager mode for torch.cumprod backward pass | ### 🐛 Describe the bug
When using torch.compile on a model that includes the backward pass of torch.cumprod, the compiled version is significantly slower than the eager execution, especially for large input dimensions. The performance degradation appears to scale with the size of the dimension over which cumprod is computed.
## To Reproduce
```python
#!/usr/bin/env python3
# the dimention over which we compute cumprod
# larger number results in much slower compilation and execution
N_DIM = 8
print(f"Running with N_DIM={N_DIM}")
import torch
import torch.nn as nn
import time
if torch.cuda.is_available():
torch.set_default_device('cuda')
device = torch.device('cuda')
else:
torch.set_default_device('cpu')
device = torch.device('cpu')
N_REPEAT = 10
N_WARMUP = 2
class CumProdModule(nn.Module):
def __init__(self):
super().__init__()
def forward(self, x):
a = torch.cumprod(x, dim=1)
y = a.sum()
return y
# e.g. (batch_size, sequence_length, features)
x = torch.rand(8, N_DIM, 5000, requires_grad=True)
model = CumProdModule()
def benchmark(model, x, n_repeat, n_warmup):
# Warmup iterations
for _ in range(n_warmup):
y = model(x)
y.backward()
if torch.cuda.is_available():
torch.cuda.synchronize()
start_time = time.time()
for _ in range(n_repeat):
y = model(x)
y.backward()
if torch.cuda.is_available():
torch.cuda.synchronize()
end_time = time.time()
return (end_time - start_time) / n_repeat
print(f"Running on {device}")
eager_time = benchmark(model, x, N_REPEAT, N_WARMUP)
print(f"Eager average time: {eager_time:.6f} seconds")
compiled_model = torch.compile(model, fullgraph=True)
compiled_time = benchmark(compiled_model, x, N_REPEAT, N_WARMUP)
print(f"Compiled average time: {compiled_time:.6f} seconds")
print(f"Compiled is {compiled_time / eager_time :.2f}x slower than eager")
```
Output:
```
Running with N_DIM=2048
Running on cuda
Eager average time: 0.032862 seconds
Compiled average time: 1.431347 seconds
Compiled is 43.56x slower than eager
```
## Details
The issue seems to be due to decomposition of `cumprod_backward`.
for example a snippet of inductor post_grad graph looks like this
```python
class GraphModule(torch.nn.Module):
def forward(self, primals_1: "f32[8, 32, 5000][160000, 5000, 1]cuda:0", tangents_1: "f32[][]cuda:0"):
# File: /home/pi/usr/dev/python/tmp/torch_tests/slow_cumprod/./time_cumprod.py:26 in forward, code: y = a.sum()
expand: "f32[8, 32, 5000][0, 0, 0]cuda:0" = torch.ops.aten.expand.default(tangents_1, [8, 32, 5000]); tangents_1 = None
# File: /home/pi/usr/dev/python/tmp/torch_tests/slow_cumprod/./time_cumprod.py:25 in forward, code: a = torch.cumprod(x, dim=1)
full: "f32[1][1]cuda:0" = torch.ops.aten.full.default([1], 1, dtype = torch.float32, layout = torch.strided, device = device(type='cuda', index=0), pin_memory = False)
expand_1: "f32[8, 1, 5000][0, 1, 0]cuda:0" = torch.ops.aten.expand.default(full, [8, 1, 5000]); full = None
slice_1: "f32[8, 31, 5000][160000, 5000, 1]cuda:0" = torch.ops.aten.slice.Tensor(primals_1, 1, 1)
cumprod_1: "f32[8, 31, 5000][155000, 5000, 1]cuda:0" = torch.ops.aten.cumprod.default(slice_1, 1); slice_1 = None
cat: "f32[8, 32, 5000][160000, 5000, 1]cuda:0" = torch.ops.aten.cat.default([expand_1, cumprod_1], 1); expand_1 = cumprod_1 = None
slice_2: "f32[8, 32, 5000][0, 0, 0]cuda:0" = torch.ops.aten.slice.Tensor(expand, 1, 0)
mul_1: "f32[8, 32, 5000][160000, 5000, 1]cuda:0" = torch.ops.aten.mul.Tensor(slice_2, cat); slice_2 = cat = None
sum_2: "f32[8, 5000][5000, 1]cuda:0" = torch.ops.aten.sum.dim_IntList(mul_1, [1]); mul_1 = None
slice_3: "f32[8, 1, 5000][160000, 5000, 1]cuda:0" = torch.ops.aten.slice.Tensor(primals_1, 1, 0, 1)
prod: "f32[8, 1, 5000][5000, 5000, 1]cuda:0" = torch.ops.aten.prod.dim_int(slice_3, 1, True); slice_3 = None
slice_4: "f32[8, 30, 5000][160000, 5000, 1]cuda:0" = torch.ops.aten.slice.Tensor(primals_1, 1, 2)
cumprod_2: "f32[8, 30, 5000][150000, 5000, 1]cuda:0" = torch.ops.aten.cumprod.default(slice_4, 1); slice_4 = None
expand_2: "f32[8, 30, 5000][5000, 0, 1]cuda:0" = torch.ops.aten.expand.default(prod, [8, 30, 5000])
mul_2: "f32[8, 30, 5000][150000, 5000, 1]cuda:0" = torch.ops.aten.mul.Tensor(expand_2, cumprod_2); expand_2 = cumprod_2 = None
cat_1: "f32[8, 31, 5000][155000, 5000, 1]cuda:0" = torch.ops.aten.cat.default([prod, mul_2], 1); prod = mul_2 = None
slice_5: "f32[8, 31, 5000][0, 0, 0]cuda:0" = torch.ops.aten.slice.Tensor(expand, 1, 1)
mul_3: "f32[8, 31, 5000][155000, 5000, 1]cuda:0" = torch.ops.aten.mul.Tensor(slice_5, cat_1); slice_5 = cat_1 = None
sum_3: "f32[8, 5000][5000, 1]cuda:0" = torch.ops.aten.sum.dim_IntList(mul_3, [1]); mul_3 = None
slice_6: "f32[8, 2, 5000][160000, 5000, 1]cuda:0" = torch.ops.aten.slice.Tensor(primals_1, 1, 0, 2)
prod_1: "f32[8, 1, 5000][5000, 5000, 1]cuda:0" = torch.ops.aten.prod.dim_int(slice_6, 1, True); slice_6 = None
slice_7: "f32[8, 29, 5000][160000, 5000, 1]cuda:0" = torch.ops.aten.slice.Tensor(primals_1, 1, 3)
cumprod_3: "f32[8, 29, 5000][145000, 5000, 1]cuda:0" = torch.ops.aten.cumprod.default(slice_7, 1); slice_7 = None
expand_3: "f32[8, 29, 5000][5000, 0, 1]cuda:0" = torch.ops.aten.expand.default(prod_1, [8, 29, 5000])
mul_4: "f32[8, 29, 5000][145000, 5000, 1]cuda:0" = torch.ops.aten.mul.Tensor(expand_3, cumprod_3); expand_3 = cumprod_3 = None
cat_2: "f32[8, 30, 5000][150000, 5000, 1]cuda:0" = torch.ops.aten.cat.default([prod_1, mul_4], 1); prod_1 = mul_4 = None
slice_8: "f32[8, 30, 5000][0, 0, 0]cuda:0" = torch.ops.aten.slice.Tensor(expand, 1, 2)
mul_5: "f32[8, 30, 5000][150000, 5000, 1]cuda:0" = torch.ops.aten.mul.Tensor(slice_8, cat_2); slice_8 = cat_2 = None
sum_4: "f32[8, 5000][5000, 1]cuda:0" = torch.ops.aten.sum.dim_IntList(mul_5, [1]); mul_5 = None
slice_9: "f32[8, 3, 5000][160000, 5000, 1]cuda:0" = torch.ops.aten.slice.Tensor(primals_1, 1, 0, 3)
prod_2: "f32[8, 1, 5000][5000, 5000, 1]cuda:0" = torch.ops.aten.prod.dim_int(slice_9, 1, True); slice_9 = None
slice_10: "f32[8, 28, 5000][160000, 5000, 1]cuda:0" = torch.ops.aten.slice.Tensor(primals_1, 1, 4)
cumprod_4: "f32[8, 28, 5000][140000, 5000, 1]cuda:0" = torch.ops.aten.cumprod.default(slice_10, 1); slice_10 = None
# The pattern repeats, generating a block for each step in the cumprod.
```
- The generated graph seems to create a block for each element along the cumprod dimension (dim=1 in this case)
- This unrolling leads to increased compilation times and slower execution in the compiled model.
- One way to work around this is to introduce a graph break with torch._dynamo.disable. This works, but it is of course not ideal for performance. Any other workaround ideas would be much appreciated.
## Request
Is there a way to prevent the decomposition of cumprod_backward during compilation so that the backward pass can remain efficient in the compiled model? Is it possible for torch.compile to automatically (or heuristically) decide on whether to decompose or not?
Attachments: [torch_trace_slow_cumprod.log](https://github.com/user-attachments/files/17045003/dedicated_log_torch_trace_twzmnd3p.log)
### Versions
```
Collecting environment information...
PyTorch version: 2.4.0
Is debug build: False
CUDA used to build PyTorch: 12.2
ROCM used to build PyTorch: N/A
OS: Gentoo Linux (x86_64)
GCC version: (GCC) 13.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.11.9 (main, Apr 2 2024, 08:25:04) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.50-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 550.78
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 7950X3D 16-Core Processor
CPU family: 25
Model: 97
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 28%
CPU max MHz: 5714.0000
CPU min MHz: 400.0000
BogoMIPS: 8399.91
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d amd_lbr_pmc_freeze
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 16 MiB (16 instances)
L3 cache: 128 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.0
[pip3] triton==3.0.0.post1
[conda] Could not collect
```
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @zou3519 @bdhirsh | module: autograd,triaged,actionable,oncall: pt2,module: inductor,module: pt2-dispatcher | low | Critical |
2,533,820,006 | opencv | DNN config files (Caffe, Darknet) are not downloaded with download_models.py | ### System Information
OpenCV 5.x, dnn samples
### Detailed description
The current download_models.py implementation, combined with the alias strategy in the dnn samples, makes it possible to download models and run them automatically without touching the model files. This does not work for yolov4, for example: download_models.py downloads just the weights, not the config. I propose that download_models.py download the config as well.
### Steps to reproduce
-
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: samples | low | Minor |
2,533,821,983 | pytorch | Compatibility with numpy 2 __array__ interface | ### 🐛 Describe the bug
PyTorch Tensors do not implement all features of the `__array__` interface introduced in NumPy 2, resulting in deprecation warnings for simple operations. For example, running the below code:
```python
import numpy as np
import torch
x = torch.tensor([1, 2, 3])
y = np.array([1, 2, 3])
z = np.array(x)
x + y
```
results in:
```console
> python test.py
test.py:6: DeprecationWarning: __array__ implementation doesn't accept a copy keyword, so passing copy=False failed. __array__ must implement 'dtype' and 'copy' keyword arguments.
z = np.array(x)
test.py:7: DeprecationWarning: __array_wrap__ must accept context and return_scalar arguments (positionally) in the future. (Deprecated NumPy 2.0)
x + y
```
These kinds of interactions are required for other libraries like matplotlib to smoothly plot tensors, among other things.
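For reference, the signature NumPy 2 expects from `__array__` can be sketched with a small illustrative array-like (`MyContainer` is a made-up example, not PyTorch code):

```python
import numpy as np

class MyContainer:
    """Illustrative array-like (hypothetical; not part of PyTorch)."""

    def __init__(self, data):
        self._data = list(data)

    def __array__(self, dtype=None, copy=None):
        # NumPy 2 passes dtype and copy keywords; accepting them (and
        # honoring copy) avoids the DeprecationWarning shown above.
        arr = np.asarray(self._data, dtype=dtype)
        if copy:
            arr = arr.copy()
        return arr
```

An analogous `dtype`/`copy`-aware `__array__` on `torch.Tensor` (plus an `__array_wrap__` accepting `context` and `return_scalar`) would silence both warnings.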
@rgommers
### Versions
* torch: 2.4.1
* numpy: 2.1.1
cc @mruberry @rgommers | triaged,module: numpy | low | Critical |
2,533,833,088 | tauri | [bug] Titlebar shows when it should not | ### Describe the bug
tauri v2 with Nuxt v3
so I want to create full monitor confetti:
somewhere in the code:
```ts
const webview = new WebviewWindow('confetti', {
url: '/confetti',
transparent: true,
decorations: false,
alwaysOnTop: true,
focus: false,
fullscreen: true,
shadow: false,
skipTaskbar: true,
})
```
confetti.vue:
```vue
<template>
</template>
<script lang="ts" setup>
import { getCurrentWindow } from "@tauri-apps/api/window";
definePageMeta({
layout: false,
});
const { proxy } = useScriptNpm({
packageName: 'js-confetti',
file: 'dist/js-confetti.browser.js',
version: '0.12.0',
scriptOptions: {
//@ts-ignore
use: () => typeof window.JSConfetti !== 'undefined' && new window.JSConfetti()
},
})
getCurrentWindow().setIgnoreCursorEvents(true)
onMounted(async () => {
proxy.addConfetti({
confettiRadius: 6,
confettiNumber: 500,
})
setTimeout(() => {
getCurrentWindow().close()
}, 3000);
})
</script>
```
but as soon as I "interact" with the window, this happens (clicks obviously pass through — I tested this — and it even triggers when something on another monitor is interacted with):
https://github.com/user-attachments/assets/788f4d39-3e44-4f99-a9f3-815e009af870
the "titlebar" appeared as soon as I clicked (titlebar is not interactable)
### Reproduction
Create a tauri app with nuxt and paste in the code from above (`@nuxt/scripts` package is required)
### Expected behavior
No titlebar it should just stay invisible
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.22635 x86_64 (X64)
✔ WebView2: 128.0.2739.79
✔ MSVC:
- Visual Studio Build Tools 2022
- Visual Studio Community 2022
✔ rustc: 1.80.1 (3f5fd8dd4 2024-08-06)
✔ cargo: 1.80.1 (376290515 2024-07-16)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 20.10.0
- pnpm: 9.0.2
- npm: 10.8.3
- bun: 1.1.27
[-] Packages
- tauri 🦀: 2.0.0-rc.15
- tauri-build 🦀: 2.0.0-rc.12
- wry 🦀: 0.43.1
- tao 🦀: 0.30.0
[-] Plugins
- tauri-plugin-sql 🦀: 2.0.0-rc.2
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:3000/
```
### Stack trace
_No response_
### Additional context
support on discord said I should open a ticket | type: bug,platform: Windows,status: needs triage | low | Critical |
2,533,880,441 | PowerToys | Can't extract text from Snip and Sketch screengrab | ### Microsoft PowerToys version
0.84.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
TextExtractor
### Steps to reproduce
Open a Snip and Sketch screengrab
Press Win+Shift+T and make a selection to extract the text; the popup menu does not go away, and you can't extract the text from the screengrab
### ✔️ Expected Behavior
Pop up menu disappears and extracted text is in the clipboard
### ❌ Actual Behavior
Pop up menu stays
### Other Software
Snip and Sketch 10.2008.3001.0 | Issue-Bug,Needs-Triage,Product-Text Extractor | low | Minor |
2,533,926,190 | PowerToys | PowerToys Window Masking | ### Description of the new feature / enhancement
This feature would allow users to mask specific windows from being captured by screen recording, screenshot, and screen sharing tools.
### Scenario when this would be used?
PowerToys Window Masking would enable users to selectively hide windows from capturing tools by:
- Window Name Matching: Users can define specific window names or parts of names to mask. This allows for flexibility in masking windows with dynamic names (e.g., "Untitled - Notepad" or "Discord - [Username]").
- Hotkeys: Users can assign hotkeys to instantly mask or unmask specific windows. This allows for quick and easy control during live screen sharing or recording.
### Supporting information
PowerToys Window Masking provides users with increased privacy and control over their on-screen activities:
- Protecting Sensitive Information: Mask windows containing personal data, passwords, or other sensitive information during screen recordings or screen shares.
- Hiding Distractions: Mask irrelevant windows during live screen sharing to focus on the important content.
- Maintaining Privacy during Collaboration: Mask windows containing private projects or communications during screen sharing with colleagues.
- Controlling Personal Information: Mask windows displaying personal messages or chats during screen recording.
This feature allows users to confidently share their screen while ensuring privacy and maintaining control over what is visible to others. | Needs-Triage | low | Minor |
2,533,934,699 | langchain | Can ChatHuggingFace model perform Function Calling? | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I was using the following code to demonstrate the tool-calling feature of a HuggingFace model, but I got an error.
```python
from langchain_huggingface import ChatHuggingFace, HuggingFacePipeline
pipeline = HuggingFacePipeline.from_model_id(
model_id="NousResearch/Hermes-2-Pro-Llama-3-8B",
task="text-generation",
pipeline_kwargs={
"max_new_tokens": 500,
"top_k": 50,
"temperature": 0.1,
"do_sample": True,
"return_full_text": False
},
device=0
)
model = ChatHuggingFace(llm=pipeline)
from pydantic import BaseModel, Field
from langchain_core.tools import tool
class GetWeather(BaseModel):
'''Get the current weather in a given location'''
location: str = Field(description="The city and state,e.g. San Francisco, CA")
class GetPopulation(BaseModel):
'''Get the current population in a given location'''
location: str = Field(description="The city and state, e.g. San Francisco, CA")
model_with_tools = model.bind_tools([GetWeather, GetPopulation])
model_with_tools.invoke(
"Which city is hotter today and which is bigger: LA or NY?",
)
model_with_tools.tool_calls
```
### Error Message and Stack Trace (if applicable)
```
{
"name": "AttributeError",
"message": "'ChatHuggingFace' object has no attribute 'tool_calls'",
"stack": "---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[7], line 1
----> 1 model_with_tools.tool_calls
File /dccstor/kirushikesh/.conda/testenv/lib/python3.10/site-packages/langchain_core/runnables/base.py:5704, in RunnableBinding.__getattr__(self, name)
5703 def __getattr__(self, name: str) -> Any:
-> 5704 attr = getattr(self.bound, name)
5706 if callable(attr) and (
5707 config_param := inspect.signature(attr).parameters.get(\"config\")
5708 ):
5709 if config_param.kind == inspect.Parameter.KEYWORD_ONLY:
File /dccstor/kirushikesh/.conda/testenv/lib/python3.10/site-packages/pydantic/main.py:856, in BaseModel.__getattr__(self, item)
853 return super().__getattribute__(item) # Raises AttributeError if appropriate
854 else:
855 # this is the current error
--> 856 raise AttributeError(f'{type(self).__name__!r} object has no attribute {item!r}')
AttributeError: 'ChatHuggingFace' object has no attribute 'tool_calls'"
}
```
### Description
OK, I am trying to use the LangChain library to play with the tool-calling capability of HuggingFace models. Recently, I got to know about the Hermes LLM tool-calling capability from [this](https://huggingface.co/blog/unified-tool-use) blog. I am a big fan of LangChain and I thought of doing the function calling with that model in LangChain. First, I am not sure if it's possible yet in LangChain, but when I checked this [class definition](https://github.com/langchain-ai/langchain/blob/0a177ec2cc39213eb8fc8c185b5afb73c3f1d028/libs/partners/huggingface/langchain_huggingface/chat_models/huggingface.py#L266) I thought it was possible, and I tried to reproduce the same code.
The .bind_tools() call works on ChatHuggingFace, but when I tried to read the tool_calls from the response I got the error above.
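For reference — and this may turn out to be a usage issue rather than a LangChain bug — `tool_calls` is an attribute of the `AIMessage` that `.invoke()` returns, not of the bound runnable itself. A pseudocode sketch of that access pattern (not runnable without the model setup above):

```python
# Pseudocode: tool_calls lives on the returned message, not the runnable.
response = model_with_tools.invoke(
    "Which city is hotter today and which is bigger: LA or NY?",
)
response.tool_calls  # a list of tool-call dicts, if the model emitted any
```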
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Dec 7 03:06:13 EST 2023
> Python Version: 3.10.4 (main, Mar 31 2022, 08:41:55) [GCC 7.5.0]
Package Information
-------------------
> langchain_core: 0.3.0
> langchain: 0.3.0
> langchain_community: 0.3.0
> langsmith: 0.1.121
> langchain_huggingface: 0.1.0
> langchain_ibm: 0.2.0
> langchain_openai: 0.2.0
> langchain_text_splitters: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.5
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> huggingface-hub: 0.24.7
> ibm-watsonx-ai: 1.1.9
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.45.1
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.5.2
> PyYAML: 6.0.2
> requests: 2.32.2
> sentence-transformers: 3.1.0
> SQLAlchemy: 2.0.35
> tenacity: 8.5.0
> tiktoken: 0.7.0
> tokenizers: 0.19.1
> transformers: 4.44.2
> typing-extensions: 4.12.2
| 🤖:bug | low | Critical |
2,533,962,972 | rust | Tracking issue for unsafe binder types | This is a tracking issue for unsafe binder types. See https://hackmd.io/@compiler-errors/HkXwoBPaR for an initial design proposal.
The feature gate for the issue is `#![feature(unsafe_binders)]`.
### About tracking issues
Tracking issues are used to record the overall progress of implementation. They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions. A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature. Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
### Steps
- [x] Approve as lang experiment.
- We accepted this in the lang triage meeting on 2024-09-18.
- [ ] Accept an RFC.
- [ ] Implement in nightly.
- https://github.com/rust-lang/rust/pull/130514
- [ ] Add documentation to the [dev guide][].
- See the [instructions][doc-guide].
- [ ] Add documentation to the [reference][].
- See the [instructions][reference-instructions].
- [ ] Add formatting for new syntax to the [style guide][].
- See the [nightly style procedure][].
- [ ] Stabilize.
- See the [instructions][stabilization-instructions].
[dev guide]: https://github.com/rust-lang/rustc-dev-guide
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
[edition guide]: https://github.com/rust-lang/edition-guide
[nightly style procedure]: https://github.com/rust-lang/style-team/blob/master/nightly-style-procedure.md
[reference]: https://github.com/rust-lang/reference
[reference-instructions]: https://github.com/rust-lang/reference/blob/master/CONTRIBUTING.md
[stabilization-instructions]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[style guide]: https://github.com/rust-lang/rust/tree/master/src/doc/style-guide
### Unresolved Questions
TODO.
### Related
TODO.
cc @compiler-errors @rust-lang/lang
| T-lang,C-tracking-issue,B-experimental | low | Critical |
2,534,035,331 | stable-diffusion-webui | [Bug]: Some malicious extension is getting installed automatically after making 10K+ calls to Stable diffusion model through the API. |
[sd.txt](https://github.com/user-attachments/files/17046338/sd.txt)
### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
When we made 10K calls to generate different images we observed that a new extension with URL "http://77.90.22.129:3000/WCZMKQKVIQ/na8672" is getting installed.

### Steps to reproduce the problem
1. Install Stable Diffusion.
2. Install following Extensions:
A. https://github.com/Mikubill/sd-webui-controlnet
B. https://github.com/AUTOMATIC1111/stable-diffusion-webui-nsfw-censor
C. https://github.com/w-e-w/sd-webui-nudenet-nsfw-censor
3. Try making 10K calls to Stable diffusion using the endpoint: sdapi/v1/txt2img
### What should have happened?
The malicious extension shouldn't have been installed automatically.
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo-2024-09-18-15-03.json](https://github.com/user-attachments/files/17046063/sysinfo-2024-09-18-15-03.json)
### Console logs
```Shell
Attached in files section.
```
### Additional information
We have deployed it on K8s on a pod using a Dockerfile. | bug-report | low | Critical |
2,534,070,495 | pytorch | Training (backward) crashes when using `torch.narrow`, nested tensors, and `scaled_dot_product_attention` | ### 🐛 Describe the bug
Using nested tensors generated with `torch.narrow` as inputs to `torch.nn.functional.scaled_dot_product_attention` works fine in the forward pass of the model. However, both the math and Flash backends crash when training a model.
When using `SDPBackend.MATH`, I encounter the following error:
```
RuntimeError: split_with_sizes expects split_sizes to sum exactly to 13107200 (input tensor's size at dimension 0), but got split_sizes=[131072, 131072, 131072, 131072, 131072, 131072, 131072, 131072, 131072, 131072]
```
The returned sizes sum up to the original size of the tensor (before `torch.narrow`).
When using `SDPBackend.FLASH_ATTENTION`, the problem seems to be that there is no available backward implementation:
```
RuntimeError: derivative for aten::narrow is not implemented
```
I would add that although the different nested layouts are not documented, using `torch.jagged` as the layout for `torch.nested.narrow` is not possible since only `dim=1` is allowed in this case:
```
RuntimeError: jagged layout only supports dim=1
```
Ideally, slicing the tensors would be the most intuitive. However, the ability to use slice/narrow at least for the 0 dimension and batching purposes seems essential for model training. In my case, it is not easy/efficient to do the slicing before batching.
Perhaps @jbschlosser @cpuhrsch know more about this based on previous contributions.
A minimal example to reproduce these issues:
```python
import torch
from torch import nn
from torch.nn import functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel
from tqdm.auto import tqdm
torch.manual_seed(0)
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
self.fc_q = nn.Linear(256, 256)
self.fc_k = nn.Linear(256, 256)
self.fc_v = nn.Linear(256, 256)
self.fc_o = nn.Linear(256, 5)
def forward(self, x):
q = self.fc_q(x)
k = self.fc_k(x)
v = self.fc_v(x)
q_nested = torch.nested.as_nested_tensor(q)
k_nested = torch.nested.as_nested_tensor(k)
v_nested = torch.nested.as_nested_tensor(v)
q = q_nested.reshape(100, 512, 16, 16).transpose(1, 2).contiguous()
k = k_nested.reshape(100, 512, 16, 16).transpose(1, 2).contiguous()
v = v_nested.reshape(100, 512, 16, 16).transpose(1, 2).contiguous()
q_narrow = torch.narrow(q, dim=0, start=0, length=10)
k_narrow = torch.narrow(k, dim=0, start=0, length=10)
v_narrow = torch.narrow(v, dim=0, start=0, length=10)
# with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
with sdpa_kernel(SDPBackend.MATH):
out = F.scaled_dot_product_attention(
q_narrow, k_narrow, v_narrow
)
out = out.transpose(1, 2).contiguous()
out = torch.nested.to_padded_tensor(out, padding=0)
out = out.reshape(10, 512, 256)
return out
def main():
x = torch.randn((100, 512, 256), requires_grad=True).cuda()
y = torch.randint(0, 5, size=(10,)).cuda()
model = Model().cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scaler = torch.amp.GradScaler()
for epoch in tqdm(range(100)):
optimizer.zero_grad()
with torch.autocast(device_type='cuda', dtype=torch.bfloat16):
predictions = model(x).squeeze().sum(dim=1).sum(dim=-1)
loss = F.cross_entropy(predictions, y.float())
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
if __name__ == "__main__":
main()
```
### Versions
```
python collect_env.py
Collecting environment information...
PyTorch version: 2.5.0.dev20240909
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:36:13) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-121-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 9 5950X 16-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 3400,0000
CPU min MHz: 2200,0000
BogoMIPS: 6800.18
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 512 KiB (16 instances)
L1i cache: 512 KiB (16 instances)
L2 cache: 8 MiB (16 instances)
L3 cache: 64 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] admin-torch==0.1.0
[pip3] numpy==2.1.1
[pip3] performer-pytorch==1.1.4
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.5.0.dev20240909
[pip3] torch_cluster==1.6.3
[pip3] torch_scatter==2.1.2
[pip3] torch_sparse==0.6.18
[pip3] torch_spline_conv==1.2.2
[pip3] torchaudio==2.5.0.dev20240909
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==1.4.1
[pip3] torchscale==0.2.0
[pip3] torchvision==0.20.0.dev20240909
[pip3] triton==3.0.0
[conda] admin-torch 0.1.0 pypi_0 pypi
[conda] blas 1.0 mkl conda-forge
[conda] brotlipy 0.7.0 py311h9bf148f_1002 pytorch-nightly
[conda] cffi 1.15.1 py311h9bf148f_3 pytorch-nightly
[conda] cryptography 38.0.4 py311h46ebde7_0 pytorch-nightly
[conda] filelock 3.9.0 py311_0 pytorch-nightly
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] mkl 2022.1.0 hc2b9512_224
[conda] mpmath 1.2.1 py311_0 pytorch-nightly
[conda] numpy 2.1.1 py311h71ddf71_0 conda-forge
[conda] performer-pytorch 1.1.4 pypi_0 pypi
[conda] pysocks 1.7.1 py311_0 pytorch-nightly
[conda] pytorch 2.5.0.dev20240909 py3.11_cuda12.1_cudnn9.1.0_0 pytorch-nightly
[conda] pytorch-cuda 12.1 ha16c6d3_6 pytorch-nightly
[conda] pytorch-lightning 2.4.0 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch-nightly
[conda] torch-cluster 1.6.3 pypi_0 pypi
[conda] torch-scatter 2.1.2 pypi_0 pypi
[conda] torch-sparse 0.6.18 pypi_0 pypi
[conda] torch-spline-conv 1.2.2 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20240909 py311_cu121 pytorch-nightly
[conda] torchinfo 1.8.0 pypi_0 pypi
[conda] torchmetrics 1.4.1 pypi_0 pypi
[conda] torchscale 0.2.0 dev_0 <develop>
[conda] torchtriton 3.0.0+757b6a61e7 py311 pytorch-nightly
[conda] torchvision 0.20.0.dev20240909 py311_cu121 pytorch-nightly
[conda] urllib3 1.26.14 py311_0 pytorch-nightly
```
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ @erichan1 @mikaylagawarecki | triaged,module: nestedtensor,module: sdpa | low | Critical |
2,534,134,251 | deno | deno run + tab does not offer file autocompletion using zsh | Related https://github.com/denoland/deno/issues/13593
Version: Deno 1.46.3
In a shell type:
`deno run`, then hit tab, and you will see something like this:

| bug,needs info | low | Major |
2,534,159,169 | pytorch | [triton x pt2] Dynamo should trace data_ptr accesses | internal xref: https://fb.workplace.com/groups/1075192433118967/permalink/1505159356788937/
The motivation is that people commonly pack data ptrs into a tensor and then pass the tensor to a triton kernel. This pattern is used to implement triton kernels that take in variable-length lists.
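The packing pattern looks roughly like this (a minimal sketch; the helper name is illustrative):

```python
import torch

def pack_data_ptrs(tensors):
    # Pack raw data pointers into a single int64 tensor so a Triton
    # kernel can receive a variable-length list of buffers through one
    # argument. Inside torch.compile, the t.data_ptr() calls are what
    # currently trigger the graph break.
    return torch.tensor([t.data_ptr() for t in tensors], dtype=torch.int64)
```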
Dynamo graph breaks on data_ptr calls and FakeTensor chokes on it if it gets past Dynamo.
Our options are:
1) add a get_data_ptrs operator to construct said variable-length list
2) get Dynamo to trace code involving data_ptr access
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec @oulgen @aakhundov | triaged,oncall: pt2,module: dynamo,module: user triton | low | Minor |
2,534,161,756 | deno | Adopt GB18030-2022 | If you implement gb18030/GBK, see https://github.com/whatwg/encoding/pull/336. | feat,web,fix available | low | Minor |
2,534,161,843 | node | Adopt GB18030-2022 | If you implement gb18030/GBK, see https://github.com/whatwg/encoding/pull/336. | icu,web-standards | low | Minor |
2,534,183,769 | pytorch | AOTDispatcher debug mode | (from discussion with @bdhirsh and @ezyang)
Motivation: AOTDispatcher (and it's interaction with HOPs) is generally difficult to understand. The claim is that the complexity is around the PyTorch dispatcher applying successive transforms: we apply Autograd, subclass handling, Functionalization, in that order, but "all at the same time": it materializes one FX graph where these transforms have been applied. Developers who work with AOTDispatcher need to understand the sequence of dispatch and keep it around in their heads.
Pitch: Instead, we should offer a debug mode that applies each transform to a graph. For example:
- we would trace out a joint graph that operates on Tensor subclasses
- then we would re-trace that graph to produce another joint graph that doesn't involve Tensor subclasses
- next we would re-trace the previous graph to produce a joint graph that is functionalized.
- finally we would apply the decomposition table.
The presence of the intermediate graphs lets developers see how the successive transforms work, improving debuggability. An analogy is that in TORCH_LOGS, we let people see the FX graph at different stages of torch.compile; it would be difficult to debug if the only graph we materialized was Inductor's output_code.
cc @ezyang @chauhang @penguinwu @bdhirsh | triaged,oncall: pt2,module: aotdispatch,module: pt2-dispatcher | low | Critical |
2,534,238,766 | pytorch | Please offer `<version>+cpu`-versioned packages for MacOS and linux ARM CPU-only builds | ### 🚀 The feature, motivation and pitch
This is a duplicate of this issue: https://github.com/pytorch/pytorch/issues/110004 , which was closed as implemented w/ a note that it should be reopened if we continue to see issues.
Apologies if there is already an open issue for this, I searched around but wasn't able to find one.
Currently the behavior when trying to use the CPU-only builds hosted at https://download.pytorch.org/whl/cpu with Poetry is as follows:
If I declare a torch dependency in my `pyproject.toml` as follows:
```toml
[tool.poetry.dependencies]
torch = { version = "2.4.1", source = "torch-cpu" }
[[tool.poetry.source]]
name = "torch-cpu"
url = "https://download.pytorch.org/whl/cpu"
priority = "explicit"
```
Running `poetry install` will fail on MacOS and Linux machines using an ARM processor saying it cannot find an installation candidate. What happens is that when poetry resolves dependencies with `poetry lock`, it will only resolve one version. When it looks at the pytorch CPU builds index, it finds `2.4.1+cpu` and treats it as the highest version, and does not consider `2.4.1` as the same version. However, when I look at the actual files in the index, they look like:
```
torch-2.4.1+cpu-cp310-cp310-linux_x86_64.whl
torch-2.4.1+cpu-cp310-cp310-win_amd64.whl
...
torch-2.4.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
torch-2.4.1-cp310-none-macosx_11_0_arm64.whl
...
```
So the wheels are available for these two platforms, they just don't have the `+cpu` portion appended to the version number and so poetry treats them as different, and in fact cannot resolve the mac and arm linux ones because it always finds `*+cpu` to be the highest version number and doesn't consider the other files to be part of that version.
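The ordering Poetry relies on can be checked directly with the `packaging` library, which implements PEP 440: the local-version wheel sorts strictly above the bare release even though both share the same base version (a small illustration, not part of the original report):

```python
from packaging.version import Version

a = Version("2.4.1+cpu")
b = Version("2.4.1")

# PEP 440: a local version label sorts above the same release without one.
print(a > b)                             # True
# The two versions are not considered equal...
print(a == b)                            # False
# ...even though they share the same base version.
print(a.base_version == b.base_version)  # True
```

This is why a resolver that locks exactly one version string ends up with `2.4.1+cpu` and then finds no matching file for the macOS and ARM Linux wheels.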
Obviously part of this is a few pieces of less than ideal behavior from poetry, but there have been a bunch of discussions in their repo related to this and it doesn't sound like any behavior change is forthcoming that will help.
It's also possible that I'm not totally understanding the initial resolution in the issue I linked to above. That indicates this should work after torch 2.1, but looking back at the CPU wheels I don't see any point at which the macOS and ARM Linux wheels were published with `+cpu`, so it's possible I'm missing an existing way of making this work (though it's definitely not for lack of trying).
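As a stopgap until the versioning is unified, Poetry's multiple-constraints syntax with environment markers can select the `+cpu` wheels only on the platforms where they are published (an untested sketch — the marker expressions and the `torch-cpu` source name are assumptions carried over from the example above):

```toml
[tool.poetry.dependencies]
torch = [
    { version = "2.4.1+cpu", source = "torch-cpu", markers = "sys_platform == 'linux' and platform_machine == 'x86_64'" },
    { version = "2.4.1+cpu", source = "torch-cpu", markers = "sys_platform == 'win32'" },
    { version = "2.4.1", source = "torch-cpu", markers = "sys_platform == 'darwin' or (sys_platform == 'linux' and platform_machine == 'aarch64')" },
]
```

This still needs updating in several places on upgrade, which is the pain point the feature request is about.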
### Alternatives
The alternative is to use direct URLs to the wheels for each platform needed. This isn't too bad, but it would be really nice to be able to upgrade without needing to go copy and paste several URLs (especially when using multiple packages e.g. `torch` and `torchvision`)
### Additional context
_No response_ | oncall: releng,triaged | low | Major |
2,534,242,424 | flutter | [path_provider] Re-evaluate storage scope of getDownloadsDirectory and similar methods | ### What package does this bug report belong to?
path_provider
### What target platforms are you seeing this bug on?
Android, iOS
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
name: inspectogo
description: "A multiplatform app to manage companies and inspections."
# The following line prevents the package from being accidentally published to
# pub.dev using `flutter pub publish`. This is preferred for private packages.
publish_to: 'none' # Remove this line if you wish to publish to pub.dev
# The following defines the version and build number for your application.
# A version number is three numbers separated by dots, like 1.2.43
# followed by an optional build number separated by a +.
# Both the version and the builder number may be overridden in flutter
# build by specifying --build-name and --build-number, respectively.
# In Android, build-name is used as versionName while build-number used as versionCode.
# Read more about Android versioning at https://developer.android.com/studio/publish/versioning
# In iOS, build-name is used as CFBundleShortVersionString while build-number is used as CFBundleVersion.
# Read more about iOS versioning at
# https://developer.apple.com/library/archive/documentation/General/Reference/InfoPlistKeyReference/Articles/CoreFoundationKeys.html
# In Windows, build-name is used as the major, minor, and patch parts
# of the product and file versions while build-number is used as the build suffix.
version: 1.0.0+1
environment:
sdk: '>=3.2.6 <4.0.0'
# Dependencies specify other packages that your package needs in order to work.
# To automatically upgrade your package dependencies to the latest versions
# consider running `flutter pub upgrade --major-versions`. Alternatively,
# dependencies can be manually updated by changing the version numbers below to
# the latest version available on pub.dev. To see which dependencies have newer
# versions available, run `flutter pub outdated`.
dependencies:
flutter:
sdk: flutter
# The following adds the Cupertino Icons font to your application.
# Use with the CupertinoIcons class for iOS style icons.
cupertino_icons: ^1.0.2
firebase_core: ^3.0.0
firebase_auth: ^5.2.0
cloud_firestore: ^5.4.0
flutter_riverpod: ^2.4.10
google_fonts: ^6.1.0
pin_code_fields: ^8.0.1
logger: ^2.4.0
accordion: ^2.6.0
flutter_lorem: ^2.0.0
intl_phone_number_input: ^0.7.4
email_validator: ^3.0.0
fancy_password_field: ^2.0.7
flutter_localizations:
sdk: flutter
intl: ^0.19.0
mocktail: ^1.0.4
google_sign_in: ^6.2.1
fake_cloud_firestore: ^3.0.2
im_stepper: ^1.0.1+1
google_maps_flutter: ^2.7.1
geocoding: ^3.0.0
google_maps_flutter_android: ^2.14.5
google_maps_flutter_platform_interface: ^2.8.0
throttling: ^2.0.1
freezed_annotation: ^2.4.4
json_annotation: ^4.8.1
fluster: ^1.2.0
super_context_menu: ^0.8.18
hold_to_confirm_button: ^0.0.2
collection: ^1.18.0
firebase_storage: ^12.2.0
image_picker: ^1.1.2
uuid: ^4.5.0
firebase_performance: ^0.10.0
path_provider: ^2.1.4
firebase_analytics: ^11.3.0
rxdart: ^0.27.7
firebase_app_check: ^0.3.1
get_thumbnail_video:
git:
url: https://github.com/kdhfred/video_thumbnail
lazy_load_indexed_stack: ^1.1.0
firebase_crashlytics: ^4.1.0
http: ^1.2.2
shared_preferences: ^2.3.2
cloud_functions: ^5.1.0
flutter_widget_from_html: ^0.15.2
image_picker_platform_interface: ^2.10.0
image_picker_android: ^0.8.12+4
image_picker_ios: ^0.8.12
fast_immutable_collections: ^10.2.4
go_router: ^14.2.7
path: ^1.9.0
flutter_flavorizr: ^2.2.3
in_app_purchase: ^3.2.0
crypto: ^3.0.3
url_launcher: ^6.3.0
in_app_purchase_android: ^0.3.6+6
scroll_snap_list: ^0.9.1
photo_view: ^0.15.0
image: ^4.2.0
vector_math: ^2.1.4
defer_pointer: ^0.0.2
html: ^0.15.4
universal_html: ^2.2.4
open_file_manager: ^1.0.2
downloadsfolder: ^1.1.0
dev_dependencies:
flutter_test:
sdk: flutter
# The "flutter_lints" package below contains a set of recommended lints to
# encourage good coding practices. The lint set provided by the package is
# activated in the `analysis_options.yaml` file located at the root of your
# package. See that file for information about deactivating specific lint
# rules and activating additional ones.
flutter_lints: ^4.0.0
custom_lint: ^0.6.4
riverpod_lint: ^2.3.9
integration_test:
sdk: flutter
freezed: ^2.4.7
build_runner: ^2.4.12
json_serializable: ^6.7.1
cxpress_lint:
git:
url: https://github.com/DevXpressInc/cxpress_lint
ref: ea9ee3556518a4f3765f14901d7d53be72075410
path: packages/pyramid_lint
mocktail_image_network: ^1.2.0
go_router_builder: ^2.7.0
# For information on the generic Dart part of this file, see the
# following page: https://dart.dev/tools/pub/pubspec
# The following section is specific to Flutter packages.
flutter:
generate: true
# The following line ensures that the Material Icons font is
# included with your application, so that you can use the icons in
# the material Icons class.
uses-material-design: true
# To add assets to your application, add an assets section, like this:
# assets:
# - images/a_dot_burr.jpeg
# - images/a_dot_ham.jpeg
# An image asset can refer to one or more resolution-specific "variants", see
# https://flutter.dev/assets-and-images/#resolution-aware
# For details regarding adding assets from package dependencies, see
# https://flutter.dev/assets-and-images/#from-packages
# To add custom fonts to your application, add a fonts section here,
# in this "flutter" section. Each entry in this list should have a
# "family" key with the font family name, and a "fonts" key with a
# list giving the asset and other descriptors for the font. For
# example:
# fonts:
# - family: Schyler
# fonts:
# - asset: fonts/Schyler-Regular.ttf
# - asset: fonts/Schyler-Italic.ttf
# style: italic
# - family: Trajan Pro
# fonts:
# - asset: fonts/TrajanPro.ttf
# - asset: fonts/TrajanPro_Bold.ttf
# weight: 700
#
# For details regarding fonts from package dependencies,
# see https://flutter.dev/custom-fonts/#from-packages
assets:
- assets/images/
- assets/html/fr/
- assets/html/en/
shaders:
- assets/shaders/devxpress-logo.frag
flavorizr:
app:
android:
flavorDimensions: "app"
ios:
buildConfiguration:
debug: "Debug"
profile: "Profile"
release: "Release"
flavors:
examples:
app:
name: "Inspectogo Examples"
android:
applicationId: "com.inspectogo.app.examples"
ios:
bundleId: "com.inspectogo.app.demo"
demo:
app:
name: "Inspectogo Demo"
android:
applicationId: "com.inspectogo.app.demo"
ios:
bundleId: "com.inspectogo.app.demo"
firebase:
config: "ios/Runner/demo/GoogleService-Info.plist"
firebase:
config:
ios: "ios/Runner/demo/GoogleService-Info.plist"
android: "android/app/src/demo/google-services.json"
prod:
app:
name: "Inspectogo"
android:
applicationId: "com.inspectogo.app"
ios:
bundleId: "com.inspectogo.app"
firebase:
config: "ios/Runner/prod/GoogleService-Info.plist"
firebase:
config:
ios: "ios/Runner/prod/GoogleService-Info.plist"
android: "android/app/src/prod/google-services.json"
```
</details>
### Steps to reproduce
Log the getDownloadsDirectory() path result on an Android or iOS emulator
### Expected results
On Android emulator, should return ```/storage/emulated/0/Download```
On iOS emulator, should return ```/Users/{user}/Library/Developer/CoreSimulator/Devices/{device_uid}/data/Containers/Data/Application/{app_uid}/Documents```
### Actual results
On Android emulator, it instead returns ```/storage/emulated/0/Android/data/{package_name}/files/downloads```, which doesn't lead to the public downloads folder.
On iOS emulator, it instead returns ```/Users/{user}/Library/Developer/CoreSimulator/Devices/{device_uid}/data/Containers/Data/Application/{app_uid}/Downloads```
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:downloadsfolder/downloadsfolder.dart' show getDownloadDirectory;
import 'package:path_provider/path_provider.dart' as pathprovider;
Future(() async {
final downloadsDir = await getDownloadDirectory();
final errorDir = await pathprovider.getDownloadsDirectory();
print('downloadsfolder package: ${downloadsDir.path}');
print('path_provider package: ${errorDir!.path}');
});
```
</details>
### Screenshots or Videos
_No response_
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel stable, 3.24.3, on Microsoft Windows [Version 10.0.22631.4169], locale en-CA)
• Flutter version 3.24.3 on channel stable at W:\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (7 days ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at W:\Android
• Platform android-35, build-tools 34.0.0
• ANDROID_HOME = W:\Android
• Java binary at: W:\Android\AndroidStudio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.7+0-b2043.56-10550314)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.10.5)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.10.35122.118
• Windows 10 SDK version 10.0.22621.0
[√] Android Studio (version 2023.1)
• Android Studio at W:\Android\AndroidStudio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.7+0-b2043.56-10550314)
[√] VS Code (version 1.93.1)
• VS Code at C:\Users\alexa\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.94.0
[√] Connected device (4 available)
• sdk gphone64 x86 64 (mobile) • emulator-5554 • android-x64 • Android 14 (API 34) (emulator)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4169]
• Chrome (web) • chrome • web-javascript • Google Chrome 128.0.6613.138
• Edge (web) • edge • web-javascript • Microsoft Edge 128.0.2739.42
[√] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| p: path_provider,package,team-ecosystem,has reproducible steps,P2,triaged-ecosystem,found in release: 3.24,found in release: 3.26 | low | Critical |
2,534,243,020 | PowerToys | Mouse Without Borders loses 'wrap mouse' state after disable/enable | ### Microsoft PowerToys version
0.84.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
Config MWB with 2 machines
Disable Wrap mouse feature
Disable MWB using On/Off toggle
Re-enable MWB
Wrap mouse will functionally be enabled however will display as disabled
(toggling 'Wrap mouse' works around the issue)
### ✔️ Expected Behavior
Wrap mouse will functionally match the displayed state, and match the state it was in before toggling MWB off and on
### ❌ Actual Behavior
Wrap mouse will functionally be enabled however will display as disabled
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,534,243,560 | langchain | LLMGraphTransformer not working with Gemini | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import asyncio
import json
from typing import Any, Dict, List, Optional, Sequence, Tuple, Type, Union, cast
from langchain_community.graphs.graph_document import GraphDocument, Node, Relationship
from langchain_core.documents import Document
from langchain_core.language_models import BaseLanguageModel
from langchain_core.messages import SystemMessage
from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
PromptTemplate,
)
from langchain_core.runnables import RunnableConfig
from pydantic import BaseModel, Field, create_model
examples = [
{
"text": (
"Adam is a software engineer in Microsoft since 2009, "
"and last year he got an award as the Best Talent"
),
"head": "Adam",
"head_type": "Person",
"relation": "WORKS_FOR",
"tail": "Microsoft",
"tail_type": "Company",
},
{
"text": (
"Adam is a software engineer in Microsoft since 2009, "
"and last year he got an award as the Best Talent"
),
"head": "Adam",
"head_type": "Person",
"relation": "HAS_AWARD",
"tail": "Best Talent",
"tail_type": "Award",
},
{
"text": (
"Microsoft is a tech company that provide "
"several products such as Microsoft Word"
),
"head": "Microsoft Word",
"head_type": "Product",
"relation": "PRODUCED_BY",
"tail": "Microsoft",
"tail_type": "Company",
},
{
"text": "Microsoft Word is a lightweight app that accessible offline",
"head": "Microsoft Word",
"head_type": "Product",
"relation": "HAS_CHARACTERISTIC",
"tail": "lightweight app",
"tail_type": "Characteristic",
},
{
"text": "Microsoft Word is a lightweight app that accessible offline",
"head": "Microsoft Word",
"head_type": "Product",
"relation": "HAS_CHARACTERISTIC",
"tail": "accessible offline",
"tail_type": "Characteristic",
},
]
system_prompt = (
"# Knowledge Graph Instructions for GPT-4\n"
"## 1. Overview\n"
"You are a top-tier algorithm designed for extracting information in structured "
"formats to build a knowledge graph.\n"
"Try to capture as much information from the text as possible without "
"sacrificing accuracy. Do not add any information that is not explicitly "
"mentioned in the text.\n"
"- **Nodes** represent entities and concepts.\n"
"- The aim is to achieve simplicity and clarity in the knowledge graph, making it\n"
"accessible for a vast audience.\n"
"## 2. Labeling Nodes\n"
"- **Consistency**: Ensure you use available types for node labels.\n"
"Ensure you use basic or elementary types for node labels.\n"
"- For example, when you identify an entity representing a person, "
"always label it as **'person'**. Avoid using more specific terms "
"like 'mathematician' or 'scientist'."
"- **Node IDs**: Never utilize integers as node IDs. Node IDs should be "
"names or human-readable identifiers found in the text.\n"
"- **Relationships** represent connections between entities or concepts.\n"
"Ensure consistency and generality in relationship types when constructing "
"knowledge graphs. Instead of using specific and momentary types "
"such as 'BECAME_PROFESSOR', use more general and timeless relationship types "
"like 'PROFESSOR'. Make sure to use general and timeless relationship types!\n"
"## 3. Coreference Resolution\n"
"- **Maintain Entity Consistency**: When extracting entities, it's vital to "
"ensure consistency.\n"
'If an entity, such as "John Doe", is mentioned multiple times in the text '
'but is referred to by different names or pronouns (e.g., "Joe", "he"),'
"always use the most complete identifier for that entity throughout the "
'knowledge graph. In this example, use "John Doe" as the entity ID.\n'
"Remember, the knowledge graph should be coherent and easily understandable, "
"so maintaining consistency in entity references is crucial.\n"
"## 4. Strict Compliance\n"
"Adhere to the rules strictly. Non-compliance will result in termination."
)
default_prompt = ChatPromptTemplate.from_messages(
[
(
"system",
system_prompt,
),
(
"human",
(
"Tip: Make sure to answer in the correct format and do "
"not include any explanations. "
"Use the given format to extract information from the "
"following input: {input}"
),
),
]
)
def _get_additional_info(input_type: str) -> str:
# Check if the input_type is one of the allowed values
if input_type not in ["node", "relationship", "property"]:
raise ValueError("input_type must be 'node', 'relationship', or 'property'")
# Perform actions based on the input_type
if input_type == "node":
return (
"Ensure you use basic or elementary types for node labels.\n"
"For example, when you identify an entity representing a person, "
"always label it as **'Person'**. Avoid using more specific terms "
"like 'Mathematician' or 'Scientist'"
)
elif input_type == "relationship":
return (
"Instead of using specific and momentary types such as "
"'BECAME_PROFESSOR', use more general and timeless relationship types "
"like 'PROFESSOR'. However, do not sacrifice any accuracy for generality"
)
elif input_type == "property":
return ""
return ""
def optional_enum_field(
enum_values: Optional[List[str]] = None,
description: str = "",
input_type: str = "node",
llm_type: Optional[str] = None,
**field_kwargs: Any,
) -> Any:
"""Utility function to conditionally create a field with an enum constraint."""
# Only openai supports enum param
if enum_values and llm_type == "openai-chat":
return Field(
...,
enum=enum_values, # type: ignore[call-arg]
description=f"{description}. Available options are {enum_values}",
**field_kwargs,
)
elif enum_values:
return Field(
...,
description=f"{description}. Available options are {enum_values}",
**field_kwargs,
)
else:
additional_info = _get_additional_info(input_type)
return Field(..., description=description + additional_info, **field_kwargs)
class _Graph(BaseModel):
nodes: Optional[List]
relationships: Optional[List]
class UnstructuredRelation(BaseModel):
head: str = Field(
description=(
"extracted head entity like Microsoft, Apple, John. "
"Must use human-readable unique identifier."
)
)
head_type: str = Field(
description="type of the extracted head entity like Person, Company, etc"
)
relation: str = Field(description="relation between the head and the tail entities")
tail: str = Field(
description=(
"extracted tail entity like Microsoft, Apple, John. "
"Must use human-readable unique identifier."
)
)
tail_type: str = Field(
description="type of the extracted tail entity like Person, Company, etc"
)
def create_unstructured_prompt(
node_labels: Optional[List[str]] = None, rel_types: Optional[List[str]] = None
) -> ChatPromptTemplate:
node_labels_str = str(node_labels) if node_labels else ""
rel_types_str = str(rel_types) if rel_types else ""
base_string_parts = [
"You are a top-tier algorithm designed for extracting information in "
"structured formats to build a knowledge graph. Your task is to identify "
"the entities and relations requested with the user prompt from a given "
"text. You must generate the output in a JSON format containing a list "
'with JSON objects. Each object should have the keys: "head", '
'"head_type", "relation", "tail", and "tail_type". The "head" '
"key must contain the text of the extracted entity with one of the types "
"from the provided list in the user prompt.",
f'The "head_type" key must contain the type of the extracted head entity, '
f"which must be one of the types from {node_labels_str}."
if node_labels
else "",
f'The "relation" key must contain the type of relation between the "head" '
f'and the "tail", which must be one of the relations from {rel_types_str}.'
if rel_types
else "",
f'The "tail" key must represent the text of an extracted entity which is '
f'the tail of the relation, and the "tail_type" key must contain the type '
f"of the tail entity from {node_labels_str}."
if node_labels
else "",
"Attempt to extract as many entities and relations as you can. Maintain "
"Entity Consistency: When extracting entities, it's vital to ensure "
'consistency. If an entity, such as "John Doe", is mentioned multiple '
"times in the text but is referred to by different names or pronouns "
'(e.g., "Joe", "he"), always use the most complete identifier for '
"that entity. The knowledge graph should be coherent and easily "
"understandable, so maintaining consistency in entity references is "
"crucial.",
"IMPORTANT NOTES:\n- Don't add any explanation and text.",
]
system_prompt = "\n".join(filter(None, base_string_parts))
system_message = SystemMessage(content=system_prompt)
parser = JsonOutputParser(pydantic_object=UnstructuredRelation)
human_string_parts = [
"Based on the following example, extract entities and "
"relations from the provided text.\n\n",
"Use the following entity types, don't use other entity "
"that is not defined below:"
"# ENTITY TYPES:"
"{node_labels}"
if node_labels
else "",
"Use the following relation types, don't use other relation "
"that is not defined below:"
"# RELATION TYPES:"
"{rel_types}"
if rel_types
else "",
"Below are a number of examples of text and their extracted "
"entities and relationships."
"{examples}\n"
"For the following text, extract entities and relations as "
"in the provided example."
"{format_instructions}\nText: {input}",
]
human_prompt_string = "\n".join(filter(None, human_string_parts))
human_prompt = PromptTemplate(
template=human_prompt_string,
input_variables=["input"],
partial_variables={
"format_instructions": parser.get_format_instructions(),
"node_labels": node_labels,
"rel_types": rel_types,
"examples": examples,
},
)
human_message_prompt = HumanMessagePromptTemplate(prompt=human_prompt)
chat_prompt = ChatPromptTemplate.from_messages(
[system_message, human_message_prompt]
)
return chat_prompt
def create_simple_model(
node_labels: Optional[List[str]] = None,
rel_types: Optional[List[str]] = None,
node_properties: Union[bool, List[str]] = False,
llm_type: Optional[str] = None,
relationship_properties: Union[bool, List[str]] = False,
) -> Type[_Graph]:
"""
Create a simple graph model with optional constraints on node
and relationship types.
Args:
node_labels (Optional[List[str]]): Specifies the allowed node types.
Defaults to None, allowing all node types.
rel_types (Optional[List[str]]): Specifies the allowed relationship types.
Defaults to None, allowing all relationship types.
node_properties (Union[bool, List[str]]): Specifies if node properties should
be included. If a list is provided, only properties with keys in the list
will be included. If True, all properties are included. Defaults to False.
relationship_properties (Union[bool, List[str]]): Specifies if relationship
properties should be included. If a list is provided, only properties with
keys in the list will be included. If True, all properties are included.
Defaults to False.
llm_type (Optional[str]): The type of the language model. Defaults to None.
Only openai supports enum param: openai-chat.
Returns:
Type[_Graph]: A graph model with the specified constraints.
Raises:
ValueError: If 'id' is included in the node or relationship properties list.
"""
node_fields: Dict[str, Tuple[Any, Any]] = {
"id": (
str,
Field(..., description="Name or human-readable unique identifier."),
),
"type": (
str,
optional_enum_field(
node_labels,
description="The type or label of the node.",
input_type="node",
llm_type=llm_type,
),
),
}
if node_properties:
if isinstance(node_properties, list) and "id" in node_properties:
raise ValueError("The node property 'id' is reserved and cannot be used.")
# Map True to empty array
node_properties_mapped: List[str] = (
[] if node_properties is True else node_properties
)
class Property(BaseModel):
"""A single property consisting of key and value"""
key: str = optional_enum_field(
node_properties_mapped,
description="Property key.",
input_type="property",
llm_type=llm_type,
)
value: str = Field(..., description="value")
node_fields["properties"] = (
Optional[List[Property]],
Field(None, description="List of node properties"),
)
SimpleNode = create_model("SimpleNode", **node_fields) # type: ignore
relationship_fields: Dict[str, Tuple[Any, Any]] = {
"source_node_id": (
str,
Field(
...,
description="Name or human-readable unique identifier of source node",
),
),
"source_node_type": (
str,
optional_enum_field(
node_labels,
description="The type or label of the source node.",
input_type="node",
llm_type=llm_type,
),
),
"target_node_id": (
str,
Field(
...,
description="Name or human-readable unique identifier of target node",
),
),
"target_node_type": (
str,
optional_enum_field(
node_labels,
description="The type or label of the target node.",
input_type="node",
llm_type=llm_type,
),
),
"type": (
str,
optional_enum_field(
rel_types,
description="The type of the relationship.",
input_type="relationship",
llm_type=llm_type,
),
),
}
if relationship_properties:
if (
isinstance(relationship_properties, list)
and "id" in relationship_properties
):
raise ValueError(
"The relationship property 'id' is reserved and cannot be used."
)
# Map True to empty array
relationship_properties_mapped: List[str] = (
[] if relationship_properties is True else relationship_properties
)
class RelationshipProperty(BaseModel):
"""A single property consisting of key and value"""
key: str = optional_enum_field(
relationship_properties_mapped,
description="Property key.",
input_type="property",
llm_type=llm_type,
)
value: str = Field(..., description="value")
relationship_fields["properties"] = (
Optional[List[RelationshipProperty]],
Field(None, description="List of relationship properties"),
)
SimpleRelationship = create_model("SimpleRelationship", **relationship_fields) # type: ignore
class DynamicGraph(_Graph):
"""Represents a graph document consisting of nodes and relationships."""
nodes: Optional[List[SimpleNode]] = Field(description="List of nodes") # type: ignore
relationships: Optional[List[SimpleRelationship]] = Field( # type: ignore
description="List of relationships"
)
return DynamicGraph
def map_to_base_node(node: Any) -> Node:
"""Map the SimpleNode to the base Node."""
properties = {}
if hasattr(node, "properties") and node.properties:
for p in node.properties:
properties[format_property_key(p.key)] = p.value
return Node(id=node.id, type=node.type, properties=properties)
def map_to_base_relationship(rel: Any) -> Relationship:
"""Map the SimpleRelationship to the base Relationship."""
source = Node(id=rel.source_node_id, type=rel.source_node_type)
target = Node(id=rel.target_node_id, type=rel.target_node_type)
properties = {}
if hasattr(rel, "properties") and rel.properties:
for p in rel.properties:
properties[format_property_key(p.key)] = p.value
return Relationship(
source=source, target=target, type=rel.type, properties=properties
)
def _parse_and_clean_json(
argument_json: Dict[str, Any],
) -> Tuple[List[Node], List[Relationship]]:
nodes = []
for node in argument_json["nodes"]:
if not node.get("id"): # Id is mandatory, skip this node
continue
node_properties = {}
if "properties" in node and node["properties"]:
for p in node["properties"]:
node_properties[format_property_key(p["key"])] = p["value"]
nodes.append(
Node(
id=node["id"],
type=node.get("type", "Node"),
properties=node_properties,
)
)
relationships = []
for rel in argument_json["relationships"]:
# Mandatory props
if (
not rel.get("source_node_id")
or not rel.get("target_node_id")
or not rel.get("type")
):
continue
# Node type copying if needed from node list
if not rel.get("source_node_type"):
try:
rel["source_node_type"] = [
el.get("type")
for el in argument_json["nodes"]
if el["id"] == rel["source_node_id"]
][0]
except IndexError:
rel["source_node_type"] = None
if not rel.get("target_node_type"):
try:
rel["target_node_type"] = [
el.get("type")
for el in argument_json["nodes"]
if el["id"] == rel["target_node_id"]
][0]
except IndexError:
rel["target_node_type"] = None
rel_properties = {}
if "properties" in rel and rel["properties"]:
for p in rel["properties"]:
rel_properties[format_property_key(p["key"])] = p["value"]
source_node = Node(
id=rel["source_node_id"],
type=rel["source_node_type"],
)
target_node = Node(
id=rel["target_node_id"],
type=rel["target_node_type"],
)
relationships.append(
Relationship(
source=source_node,
target=target_node,
type=rel["type"],
properties=rel_properties,
)
)
return nodes, relationships
def _format_nodes(nodes: List[Node]) -> List[Node]:
return [
Node(
id=el.id.title() if isinstance(el.id, str) else el.id,
type=el.type.capitalize() # type: ignore[arg-type]
if el.type
else None, # handle empty strings # type: ignore[arg-type]
properties=el.properties,
)
for el in nodes
]
def _format_relationships(rels: List[Relationship]) -> List[Relationship]:
return [
Relationship(
source=_format_nodes([el.source])[0],
target=_format_nodes([el.target])[0],
type=el.type.replace(" ", "_").upper(),
properties=el.properties,
)
for el in rels
]
def format_property_key(s: str) -> str:
words = s.split()
if not words:
return s
first_word = words[0].lower()
capitalized_words = [word.capitalize() for word in words[1:]]
return "".join([first_word] + capitalized_words)
def _convert_to_graph_document(
raw_schema: Dict[Any, Any],
) -> Tuple[List[Node], List[Relationship]]:
# If there are validation errors
if not raw_schema["parsed"]:
try:
try: # OpenAI type response
argument_json = json.loads(
raw_schema["raw"].additional_kwargs["tool_calls"][0]["function"][
"arguments"
]
)
except Exception: # Google type response
try:
argument_json = json.loads(
raw_schema["raw"].additional_kwargs["function_call"][
"arguments"
]
)
except Exception: # Ollama type response
argument_json = raw_schema["raw"].tool_calls[0]["args"]
if isinstance(argument_json["nodes"], str):
argument_json["nodes"] = json.loads(argument_json["nodes"])
if isinstance(argument_json["relationships"], str):
argument_json["relationships"] = json.loads(
argument_json["relationships"]
)
nodes, relationships = _parse_and_clean_json(argument_json)
except Exception: # If we can't parse JSON
return ([], [])
else: # If there are no validation errors use parsed pydantic object
parsed_schema: _Graph = raw_schema["parsed"]
nodes = (
[map_to_base_node(node) for node in parsed_schema.nodes if node.id]
if parsed_schema.nodes
else []
)
relationships = (
[
map_to_base_relationship(rel)
for rel in parsed_schema.relationships
if rel.type and rel.source_node_id and rel.target_node_id
]
if parsed_schema.relationships
else []
)
# Title / Capitalize
return _format_nodes(nodes), _format_relationships(relationships)
class LLMGraphTransformer:
"""Transform documents into graph-based documents using a LLM.
It allows specifying constraints on the types of nodes and relationships to include
in the output graph. The class supports extracting properties for both nodes and
relationships.
Args:
llm (BaseLanguageModel): An instance of a language model supporting structured
output.
allowed_nodes (List[str], optional): Specifies which node types are
allowed in the graph. Defaults to an empty list, allowing all node types.
allowed_relationships (List[str], optional): Specifies which relationship types
are allowed in the graph. Defaults to an empty list, allowing all relationship
types.
prompt (Optional[ChatPromptTemplate], optional): The prompt to pass to
the LLM with additional instructions.
strict_mode (bool, optional): Determines whether the transformer should apply
filtering to strictly adhere to `allowed_nodes` and `allowed_relationships`.
Defaults to True.
node_properties (Union[bool, List[str]]): If True, the LLM can extract any
node properties from text. Alternatively, a list of valid properties can
be provided for the LLM to extract, restricting extraction to those specified.
relationship_properties (Union[bool, List[str]]): If True, the LLM can extract
any relationship properties from text. Alternatively, a list of valid
properties can be provided for the LLM to extract, restricting extraction to
those specified.
ignore_tool_usage (bool): Indicates whether the transformer should
bypass the use of structured output functionality of the language model.
If set to True, the transformer will not use the language model's native
function calling capabilities to handle structured output. Defaults to False.
Example:
.. code-block:: python
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain_core.documents import Document
from langchain_openai import ChatOpenAI
llm=ChatOpenAI(temperature=0)
transformer = LLMGraphTransformer(
llm=llm,
allowed_nodes=["Person", "Organization"])
doc = Document(page_content="Elon Musk is suing OpenAI")
graph_documents = transformer.convert_to_graph_documents([doc])
"""
def __init__(
self,
llm: BaseLanguageModel,
allowed_nodes: List[str] = [],
allowed_relationships: List[str] = [],
prompt: Optional[ChatPromptTemplate] = None,
strict_mode: bool = True,
node_properties: Union[bool, List[str]] = False,
relationship_properties: Union[bool, List[str]] = False,
ignore_tool_usage: bool = False,
) -> None:
self.allowed_nodes = allowed_nodes
self.allowed_relationships = allowed_relationships
self.strict_mode = strict_mode
self._function_call = not ignore_tool_usage
# Check if the LLM really supports structured output
if self._function_call:
try:
llm.with_structured_output(_Graph)
except NotImplementedError:
self._function_call = False
if not self._function_call:
if node_properties or relationship_properties:
raise ValueError(
"The 'node_properties' and 'relationship_properties' parameters "
"cannot be used in combination with a LLM that doesn't support "
"native function calling."
)
try:
import json_repair # type: ignore
self.json_repair = json_repair
except ImportError:
raise ImportError(
"Could not import json_repair python package. "
"Please install it with `pip install json-repair`."
)
prompt = prompt or create_unstructured_prompt(
allowed_nodes, allowed_relationships
)
self.chain = prompt | llm
else:
# Define chain
try:
llm_type = llm._llm_type # type: ignore
except AttributeError:
llm_type = None
schema = create_simple_model(
allowed_nodes,
allowed_relationships,
node_properties,
llm_type,
relationship_properties,
)
structured_llm = llm.with_structured_output(schema, include_raw=True)
prompt = prompt or default_prompt
self.chain = prompt | structured_llm
def process_response(
self, document: Document, config: Optional[RunnableConfig] = None
) -> GraphDocument:
"""
Processes a single document, transforming it into a graph document using
an LLM based on the model's schema and constraints.
"""
text = document.page_content
raw_schema = self.chain.invoke({"input": text}, config=config)
print(raw_schema)
if self._function_call:
raw_schema = cast(Dict[Any, Any], raw_schema)
nodes, relationships = _convert_to_graph_document(raw_schema)
else:
nodes_set = set()
relationships = []
if not isinstance(raw_schema, str):
raw_schema = raw_schema.content
parsed_json = self.json_repair.loads(raw_schema)
if isinstance(parsed_json, dict):
parsed_json = [parsed_json]
for rel in parsed_json:
# Nodes need to be deduplicated using a set
nodes_set.add((rel["head"], rel["head_type"]))
nodes_set.add((rel["tail"], rel["tail_type"]))
source_node = Node(id=rel["head"], type=rel["head_type"])
target_node = Node(id=rel["tail"], type=rel["tail_type"])
relationships.append(
Relationship(
source=source_node, target=target_node, type=rel["relation"]
)
)
# Create nodes list
nodes = [Node(id=el[0], type=el[1]) for el in list(nodes_set)]
# Strict mode filtering
if self.strict_mode and (self.allowed_nodes or self.allowed_relationships):
if self.allowed_nodes:
lower_allowed_nodes = [el.lower() for el in self.allowed_nodes]
nodes = [
node for node in nodes if node.type.lower() in lower_allowed_nodes
]
relationships = [
rel
for rel in relationships
if rel.source.type.lower() in lower_allowed_nodes
and rel.target.type.lower() in lower_allowed_nodes
]
if self.allowed_relationships:
relationships = [
rel
for rel in relationships
if rel.type.lower()
in [el.lower() for el in self.allowed_relationships]
]
return GraphDocument(nodes=nodes, relationships=relationships, source=document)
def convert_to_graph_documents(
self, documents: Sequence[Document], config: Optional[RunnableConfig] = None
) -> List[GraphDocument]:
"""Convert a sequence of documents into graph documents.
Args:
documents (Sequence[Document]): The original documents.
kwargs: Additional keyword arguments.
Returns:
Sequence[GraphDocument]: The transformed documents as graphs.
"""
return [self.process_response(document, config) for document in documents]
async def aprocess_response(
self, document: Document, config: Optional[RunnableConfig] = None
) -> GraphDocument:
"""
Asynchronously processes a single document, transforming it into a
graph document.
"""
text = document.page_content
raw_schema = await self.chain.ainvoke({"input": text}, config=config)
raw_schema = cast(Dict[Any, Any], raw_schema)
nodes, relationships = _convert_to_graph_document(raw_schema)
if self.strict_mode and (self.allowed_nodes or self.allowed_relationships):
if self.allowed_nodes:
lower_allowed_nodes = [el.lower() for el in self.allowed_nodes]
nodes = [
node for node in nodes if node.type.lower() in lower_allowed_nodes
]
relationships = [
rel
for rel in relationships
if rel.source.type.lower() in lower_allowed_nodes
and rel.target.type.lower() in lower_allowed_nodes
]
if self.allowed_relationships:
relationships = [
rel
for rel in relationships
if rel.type.lower()
in [el.lower() for el in self.allowed_relationships]
]
return GraphDocument(nodes=nodes, relationships=relationships, source=document)
async def aconvert_to_graph_documents(
self, documents: Sequence[Document], config: Optional[RunnableConfig] = None
) -> List[GraphDocument]:
"""
Asynchronously convert a sequence of documents into graph documents.
"""
tasks = [
asyncio.create_task(self.aprocess_response(document, config))
for document in documents
]
results = await asyncio.gather(*tasks)
return results
from langchain_core.documents import Document
from langchain_google_genai import ChatGoogleGenerativeAI
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash",temperature=0, api_key=API_KEY)
llm_transformer = LLMGraphTransformer(llm=llm)
document = [Document(page_content='Anna was born in Australia.')]
graph_document = llm_transformer.convert_to_graph_documents(document)
```
### Error Message and Stack Trace (if applicable)
```
{'raw': AIMessage(content='', additional_kwargs={'function_call': {'name': 'DynamicGraph', 'arguments': '{"nodes": "\\n \\"Anna\\" \\"person\\"\\n \\"Australia\\" \\"country\\"\\n", "relationships": "\\n \\"Anna\\" \\"BORN_IN\\" \\"Australia\\"\\n"}'}}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability': 'NEGLIGIBLE', 'blocked': False}]}, id='run-fa87adb4-c6d8-4213-b0e3-117dadc72b37-0', tool_calls=[{'name': 'DynamicGraph', 'args': {'nodes': '\n "Anna" "person"\n "Australia" "country"\n', 'relationships': '\n "Anna" "BORN_IN" "Australia"\n'}, 'id': '729bbfac-419c-47b9-a35b-fb33d6fda0f8', 'type': 'tool_call'}], usage_metadata={'input_tokens': 491, 'output_tokens': 52, 'total_tokens': 543}), 'parsing_error': 2 validation errors for DynamicGraph
nodes
Input should be a valid list [type=list_type, input_value='\n "Anna" "person"\n "Australia" "country"\n', input_type=str]
For further information visit https://errors.pydantic.dev/2.9/v/list_type
relationships
Input should be a valid list [type=list_type, input_value='\n "Anna" "BORN_IN" "Australia"\n', input_type=str]
For further information visit https://errors.pydantic.dev/2.9/v/list_type, 'parsed': None}
```
### Description
Hello, I've been trying to use LLMGraphTransformer with a model that is not from OpenAI, so I tried Google Gemini. However, I noticed that the result is always empty for nodes and relationships.
So I added a print statement for the raw_schema to check the reply from the LLM, and I received the error above. It looks like no Node or Relationship is created because the arguments to the DynamicGraph function are expected to be lists, but they arrive as strings.
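The failure mode can be reproduced without an LLM. Gemini fills the `nodes`/`relationships` tool-call arguments with plain strings that are not valid JSON, so the `json.loads` fallback inside `_convert_to_graph_document` raises, the surrounding `except Exception` swallows the error, and the function returns empty lists. A minimal sketch, using the exact string from the tool call above:

```python
import json

# The value Gemini put in the "nodes" argument of the tool call: a plain
# string rather than a JSON-encoded list of node objects.
nodes_arg = '\n "Anna" "person"\n "Australia" "country"\n'

try:
    # This is what _convert_to_graph_document attempts when the argument
    # is a string instead of a list.
    json.loads(nodes_arg)
except json.JSONDecodeError as e:
    # Parsing fails ("Extra data"), the outer `except Exception` swallows
    # it, and the transformer returns ([], []) -- hence the empty output.
    print(f"json.loads failed: {e}")
```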
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.1.0: Mon Oct 9 21:28:45 PDT 2023; root:xnu-10002.41.9~6/RELEASE_ARM64_T6020
> Python Version: 3.10.6 (main, Sep 1 2024, 16:19:04) [Clang 15.0.0 (clang-1500.0.40.1)]
Package Information
-------------------
> langchain_core: 0.3.0
> langchain: 0.3.0
> langchain_community: 0.3.0
> langsmith: 0.1.120
> langchain_experimental: 0.3.0
> langchain_google_genai: 2.0.0
> langchain_ollama: 0.2.0
> langchain_openai: 0.2.0
> langchain_text_splitters: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.5
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> google-generativeai: 0.7.2
> httpx: 0.27.2
> jsonpatch: 1.33
> numpy: 1.26.4
> ollama: 0.3.3
> openai: 1.45.0
> orjson: 3.10.7
> packaging: 24.1
> pillow: 10.4.0
> pydantic: 2.9.1
> pydantic-settings: 2.5.2
> PyYAML: 6.0.2
> requests: 2.32.3
> SQLAlchemy: 2.0.34
> tenacity: 8.5.0
> tiktoken: 0.7.0
> typing-extensions: 4.12.2 | 🤖:bug,Ɑ: core | low | Critical |
2,534,256,309 | tensorflow | Code error when feature name has multiple `_` | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
tf 2.17.0
### Custom code
Yes
### OS platform and distribution
5.15.149-99.162.amzn2.x86_64
### Mobile device
_No response_
### Python version
Python 3.10.14
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
There shouldn't be an error caused merely by changing the feature names.
### Standalone code to reproduce the issue
This is a code sample that will work normally
```
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Concatenate
from tensorflow.keras import Model
import pandas as pd
import numpy as np
df = pd.DataFrame()
numeric_feature_name = 'a' * 27
categorical_feature_name = 'b' * 11
df[numeric_feature_name] = range(1000)
df[categorical_feature_name] = 'a'
df['label'] = 1
numeric_feature_layer = tf.keras.Input(shape=(1,), name=numeric_feature_name, dtype='float32')
categorical_feature_layer = tf.keras.Input(shape=(1,), name=categorical_feature_name, dtype="string")
encoding_layer = get_category_encoding_layer(vocab=['a'])
encoded_categorical_feature = encoding_layer(categorical_feature_layer)
all_inputs = [numeric_feature_layer, categorical_feature_layer]
encoded_features = [numeric_feature_layer, encoded_categorical_feature]
concat_features = Concatenate()(encoded_features)
output = Dense(units=1, activation='sigmoid')(concat_features)
model = Model(inputs=all_inputs, outputs=output)
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
dataframe_x = df[[numeric_feature_name, categorical_feature_name]]
dataframe_y = df['label']
df2 = ((dict(dataframe_x), dataframe_y))
ds = tf.data.Dataset.from_tensor_slices(df2)
ds = ds.batch(32)
ds_train = ds
model.fit(
ds_train,
epochs=10,
batch_size=300,
verbose=1
)
```
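Both samples call `get_category_encoding_layer`, which is not defined in the snippets. A minimal sketch of it, assuming the helper from the TensorFlow structured-data tutorial (a `StringLookup` feeding a `CategoryEncoding`):

```python
import tensorflow as tf

def get_category_encoding_layer(vocab):
    # Map raw strings to integer indices; index 0 is reserved for OOV tokens.
    lookup = tf.keras.layers.StringLookup(vocabulary=vocab)
    # Multi-hot encode the indices so the output can be concatenated with
    # the numeric features.
    encoder = tf.keras.layers.CategoryEncoding(num_tokens=lookup.vocabulary_size())
    return lambda feature: encoder(lookup(feature))
```

With `vocab=['a']` this yields a two-token encoding (OOV plus `'a'`), matching what the `Concatenate` layer in the repro expects.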
However, if I change the feature name, the same code will throw error
```
df = pd.DataFrame()
## Just change the feature name here
numeric_feature_name = 'a_b_c_d_e_f_g'
categorical_feature_name = 'a_b_c_d_e_f'
df[numeric_feature_name] = range(1000)
df[categorical_feature_name] = 'a'
df['label'] = 1
numeric_feature_layer = tf.keras.Input(shape=(1,), name=numeric_feature_name, dtype='float32')
categorical_feature_layer = tf.keras.Input(shape=(1,), name=categorical_feature_name, dtype="string")
encoding_layer = get_category_encoding_layer(vocab=['a'])
encoded_categorical_feature = encoding_layer(categorical_feature_layer)
all_inputs = [numeric_feature_layer, categorical_feature_layer]
encoded_features = [numeric_feature_layer, encoded_categorical_feature]
concat_features = Concatenate()(encoded_features)
output = Dense(units=1, activation='sigmoid')(concat_features)
model = Model(inputs=all_inputs, outputs=output)
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
dataframe_x = df[[numeric_feature_name, categorical_feature_name]]
dataframe_y = df['label']
df2 = ((dict(dataframe_x), dataframe_y))
ds = tf.data.Dataset.from_tensor_slices(df2)
ds = ds.batch(32)
ds_train = ds
model.fit(
ds_train,
epochs=10,
batch_size=300,
verbose=1
)
```
```
We have tested this on TensorFlow 2.17.0, where the error occurs; with TensorFlow 2.15, both code samples run smoothly.
### Relevant log output
```shell
Epoch 1/10
2024-09-18 05:28:05.240962: W tensorflow/core/framework/op_kernel.cc:1817] OP_REQUIRES failed at cast_op.cc:122 : UNIMPLEMENTED: Cast string to float is not supported
---------------------------------------------------------------------------
UnimplementedError Traceback (most recent call last)
Cell In[14], line 8
5 ds = ds.batch(32)
6 ds_train = ds
----> 8 model.fit(
9 ds_train,
10 epochs=10,
11 batch_size=300,
12 verbose=1
13 )
File ~/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py:122, in filter_traceback.<locals>.error_handler(*args, **kwargs)
119 filtered_tb = _process_traceback_frames(e.__traceback__)
120 # To get the full stack trace, call:
121 # `keras.config.disable_traceback_filtering()`
--> 122 raise e.with_traceback(filtered_tb) from None
123 finally:
124 del filtered_tb
File ~/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/site-packages/tensorflow/python/eager/execute.py:53, in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
51 try:
52 ctx.ensure_initialized()
---> 53 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
54 inputs, attrs, num_outputs)
55 except core._NotOkStatusException as e:
56 if name is not None:
UnimplementedError: Graph execution error:
Detected at node functional_5_1/Cast defined at (most recent call last):
File "/home/jinqi_shen/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/runpy.py", line 196, in _run_module_as_main
File "/home/jinqi_shen/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/runpy.py", line 86, in _run_code
File "/home/jinqi_shen/.local/lib/python3.10/site-packages/ipykernel_launcher.py", line 18, in <module>
File "/home/jinqi_shen/.local/lib/python3.10/site-packages/traitlets/config/application.py", line 1075, in launch_instance
File "/home/jinqi_shen/.local/lib/python3.10/site-packages/ipykernel/kernelapp.py", line 739, in start
File "/home/jinqi_shen/.local/lib/python3.10/site-packages/tornado/platform/asyncio.py", line 205, in start
File "/home/jinqi_shen/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/asyncio/base_events.py", line 603, in run_forever
File "/home/jinqi_shen/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/asyncio/base_events.py", line 1909, in _run_once
File "/home/jinqi_shen/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/asyncio/events.py", line 80, in _run
File "/home/jinqi_shen/.local/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 545, in dispatch_queue
File "/home/jinqi_shen/.local/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 534, in process_one
File "/home/jinqi_shen/.local/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 437, in dispatch_shell
File "/home/jinqi_shen/.local/lib/python3.10/site-packages/ipykernel/ipkernel.py", line 362, in execute_request
File "/home/jinqi_shen/.local/lib/python3.10/site-packages/ipykernel/kernelbase.py", line 778, in execute_request
File "/home/jinqi_shen/.local/lib/python3.10/site-packages/ipykernel/ipkernel.py", line 449, in do_execute
File "/home/jinqi_shen/.local/lib/python3.10/site-packages/ipykernel/zmqshell.py", line 549, in run_cell
File "/home/jinqi_shen/.local/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3075, in run_cell
File "/home/jinqi_shen/.local/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3130, in _run_cell
File "/home/jinqi_shen/.local/lib/python3.10/site-packages/IPython/core/async_helpers.py", line 128, in _pseudo_sync_runner
File "/home/jinqi_shen/.local/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3334, in run_cell_async
File "/home/jinqi_shen/.local/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3517, in run_ast_nodes
File "/home/jinqi_shen/.local/lib/python3.10/site-packages/IPython/core/interactiveshell.py", line 3577, in run_code
File "/tmp/ipykernel_37779/4021243845.py", line 8, in <module>
File "/home/jinqi_shen/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 117, in error_handler
File "/home/jinqi_shen/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/site-packages/keras/src/backend/tensorflow/trainer.py", line 320, in fit
File "/home/jinqi_shen/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/site-packages/keras/src/backend/tensorflow/trainer.py", line 121, in one_step_on_iterator
File "/home/jinqi_shen/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/site-packages/keras/src/backend/tensorflow/trainer.py", line 108, in one_step_on_data
File "/home/jinqi_shen/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/site-packages/keras/src/backend/tensorflow/trainer.py", line 51, in train_step
File "/home/jinqi_shen/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 117, in error_handler
File "/home/jinqi_shen/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/site-packages/keras/src/layers/layer.py", line 901, in __call__
File "/home/jinqi_shen/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 117, in error_handler
File "/home/jinqi_shen/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/site-packages/keras/src/ops/operation.py", line 46, in __call__
File "/home/jinqi_shen/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 156, in error_handler
File "/home/jinqi_shen/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/site-packages/keras/src/models/functional.py", line 167, in call
File "/home/jinqi_shen/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/site-packages/keras/src/models/functional.py", line 258, in _standardize_inputs
File "/home/jinqi_shen/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/site-packages/keras/src/models/functional.py", line 218, in _convert_inputs_to_tensors
File "/home/jinqi_shen/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/site-packages/keras/src/ops/core.py", line 822, in convert_to_tensor
File "/home/jinqi_shen/.airconda-environments/production--payments--tensorflow--ray_tf215--v0.0.1/lib/python3.10/site-packages/keras/src/backend/tensorflow/core.py", line 132, in convert_to_tensor
Cast string to float is not supported
[[{{node functional_5_1/Cast}}]] [Op:__inference_one_step_on_iterator_6125]
```
| stat:awaiting tensorflower,type:bug,comp:keras,2.17 | low | Critical |
2,534,279,235 | excalidraw | Quick switch between shapes (rect <-> circle, etc.) | Add the ability to switch between different shapes quickly, keeping the original shape style and bound text size.
The current way of doing this involves several manual steps:
- turn on snapping and create a new shape of a similar size
- copy the styles from the original shape and paste them into the new shape
- unbind the text from the original shape and bind it to the new shape (the size of the bound text usually differs from the built-in size)
Instead, changing from rect <-> circle/diamond should be doable with a single user action, for all the selected elements.
Related discussion #8277
| UX/UI | low | Major |
2,534,291,934 | storybook | [Bug]: ToolbarMenuListItem.tsx is missing __suppressDeprecationWarning for <Icons> | ### Describe the bug
This line of code:
` const Icon = icon && <Icons style={{ opacity: 1 }} icon={icon} />;`
Should have been:
` const Icon = icon && <Icons style={{ opacity: 1 }} icon={icon} __suppressDeprecationWarning={true} />;`
Like in `ToolbarMenuButton.tsx`:
` {icon && <Icons icon={icon} __suppressDeprecationWarning={true} />} `
Otherwise I get a very annoying warning in my console, like this:

### Reproduction link
https://github.com/storybookjs/storybook/blob/next/code/addons/toolbars/src/components/ToolbarMenuListItem.tsx
### Reproduction steps
Go to this file: https://github.com/storybookjs/storybook/blob/next/code/addons/toolbars/src/components/ToolbarMenuButton.tsx
Read the comment:
```
// We can't remove the Icons component just yet because there's no way for now to import icons
// in the preview directly. Before having a better solution, we are going to keep the Icons component
// for now and remove the deprecated warning.
```
Read the code:
```
{icon && <Icons icon={icon} __suppressDeprecationWarning={true} />}
```
Go to this file:
https://github.com/storybookjs/storybook/blob/next/code/addons/toolbars/src/components/ToolbarMenuListItem.tsx
There is no comment to read.
And the code is:
```
const Icon = icon && <Icons style={{ opacity: 1 }} icon={icon} />;
```
It's missing the `__suppressDeprecationWarning`.
### System
```bash
Storybook Environment Info:
System:
OS: Windows 10 10.0.19045
CPU: (16) x64 13th Gen Intel(R) Core(TM) i7-1360P
Binaries:
Node: 20.13.1 - C:\Program Files\nodejs\node.EXE
npm: 10.8.3 - C:\Program Files\nodejs\npm.CMD <----- active
Browsers:
Edge: Chromium (127.0.2651.74)
npmPackages:
@storybook/addon-a11y: ^8.3.1 => 8.3.1
@storybook/addon-essentials: ^8.3.1 => 8.3.1
@storybook/addon-toolbars: ^8.3.1 => 8.3.1
@storybook/react-vite: ^8.3.1 => 8.3.1
@storybook/theming: ^8.3.0 => 8.3.1
storybook: ^8.3.0 => 8.3.1
```
### Additional context
_No response_ | bug,needs triage | low | Critical |
2,534,338,792 | rust | ICE: broken MIR: NoSolution on HRTB over GAT in trait object | <!--
Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
### Code
MWE below, also tested on nightly (via the playground). Interestingly, the error goes away if I manually inline the `Node::new` calls. Minimised from a filter graph that uses trait objects for type erasure.
```Rust
pub trait Transform {
type Output<'a>;
}
pub trait Propagate<Input> {}
type Child<T> = Box<dyn for<'a> Propagate<<T as Transform>::Output<'a>>>;
pub struct Node<T>
where
T: Transform,
{
transform: T,
children: Vec<Child<T>>,
}
impl<T> Node<T>
where
T: Transform,
{
pub fn new(transform: T, children: Vec<Child<T>>) -> Self {
Node {
transform,
children,
}
}
}
impl<Input, T> Propagate<Input> for Node<T> where T: Transform {}
pub fn main() {
struct Noop;
impl Transform for Noop {
type Output<'a> = ();
}
let node = Box::new(Node::new(Noop, vec![Box::new(Node::new(Noop, vec![]))]));
}
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-unknown-linux-gnu
release: 1.81.0
LLVM version: 18.1.7
```
### Error output
```
error: internal compiler error: broken MIR in DefId(0:31 ~ playground[6f8d]::test::run#1) ([move _11]): std::boxed::Box<dyn [Binder { value: Trait(Propagate<()>), bound_vars: [Region(BrNamed(DefId(0:10 ~ playground[6f8d]::Child::'a), 'a))] }] + '?14, std::alloc::Global> is not a subtype of std::boxed::Box<dyn [Binder { value: Trait(Propagate<<test::Noop as Transform>::Output<'a>>), bound_vars: [Region(BrNamed(DefId(0:10 ~ playground[6f8d]::Child::'a), 'a))] }] + '?7, std::alloc::Global>: NoSolution
--> src/lib.rs:43:45
|
43 | let node = Box::new(Node::new(Noop, vec![Box::new(Node::new(Noop, vec![]))]));
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
note: delayed at compiler/rustc_borrowck/src/type_check/mod.rs:2570:17 - disabled backtrace
--> src/lib.rs:43:45
|
43 | let node = Box::new(Node::new(Noop, vec![Box::new(Node::new(Noop, vec![]))]));
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
= note: this error: internal compiler error originates in the macro `vec` (in Nightly builds, run with -Z macro-backtrace for more info)
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.81.0 (eeb90cda1 2024-09-04) running on x86_64-unknown-linux-gnu
note: compiler flags: -C embed-bitcode=no -C codegen-units=1 -C debuginfo=2
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
end of query stack
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Backtrace</strong></summary>
<p>
```
Compiling my_test v0.1.0 (/tmp/my_test)
warning: unused variable: `node`
--> src/main.rs:38:9
|
38 | let node = Box::new(Node::new(Noop, vec![Box::new(Node::new(Noop, vec![]))]));
| ^^^^ help: if this is intentional, prefix it with an underscore: `_node`
|
= note: `#[warn(unused_variables)]` on by default
warning: fields `transform` and `children` are never read
--> src/main.rs:13:5
|
9 | pub struct Node<T>
| ---- fields in this struct
...
13 | transform: T,
| ^^^^^^^^^
14 | children: Vec<Child<T>>,
| ^^^^^^^^
|
= note: `#[warn(dead_code)]` on by default
note: no errors encountered even though delayed bugs were created
note: those delayed bugs will now be shown as internal compiler errors
error: internal compiler error: broken MIR in DefId(0:21 ~ my_test[db07]::main) ([move _11]): std::boxed::Box<dyn [Binder { value: Trait(Propagate<()>), bound_vars: [Region(BrNamed(DefId(0:10 ~ my_test[db07]::Child::'a), 'a))] }] + '?14, std::alloc::Global> is not a subtype of std::boxed::Box<dyn [Binder { value: Trait(Propagate<<main::Noop as Transform>::Output<'a>>), bound_vars: [Region(BrNamed(DefId(0:10 ~ my_test[db07]::Child::'a), 'a))] }] + '?7, std::alloc::Global>: NoSolution
--> src/main.rs:38:41
|
38 | let node = Box::new(Node::new(Noop, vec![Box::new(Node::new(Noop, vec![]))]));
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
note: delayed at compiler/rustc_borrowck/src/type_check/mod.rs:2570:17
0: <rustc_errors::DiagCtxtInner>::emit_diagnostic
1: <rustc_errors::DiagCtxtHandle>::emit_diagnostic
2: <rustc_span::ErrorGuaranteed as rustc_errors::diagnostic::EmissionGuarantee>::emit_producing_guarantee
3: <rustc_errors::DiagCtxtHandle>::span_delayed_bug::<rustc_span::span_encoding::Span, alloc::string::String>
4: <rustc_borrowck::type_check::TypeChecker>::typeck_mir
5: rustc_borrowck::type_check::type_check
6: rustc_borrowck::nll::compute_regions
7: rustc_borrowck::do_mir_borrowck
8: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::mir_borrowck::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 8]>>
9: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::VecCache<rustc_span::def_id::LocalDefId, rustc_middle::query::erase::Erased<[u8; 8]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, true>
10: rustc_query_impl::query_impl::mir_borrowck::get_query_incr::__rust_end_short_backtrace
11: rustc_interface::passes::analysis
12: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 1]>>
13: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::SingleCache<rustc_middle::query::erase::Erased<[u8; 1]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, true>
14: rustc_query_impl::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
15: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}
16: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>
17: <<std::thread::Builder>::spawn_unchecked_<rustc_interface::util::run_in_thread_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
18: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/alloc/src/boxed.rs:2070:9
19: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/alloc/src/boxed.rs:2070:9
20: std::sys::pal::unix::thread::Thread::new::thread_start
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library/std/src/sys/pal/unix/thread.rs:108:17
21: <unknown>
22: <unknown>
--> src/main.rs:38:41
|
38 | let node = Box::new(Node::new(Noop, vec![Box::new(Node::new(Noop, vec![]))]));
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
= note: this error: internal compiler error originates in the macro `vec` (in Nightly builds, run with -Z macro-backtrace for more info)
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.81.0 (eeb90cda1 2024-09-04) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
end of query stack
warning: `my_test` (bin "my_test") generated 2 warnings
error: could not compile `my_test` (bin "my_test"); 2 warnings emitted
```
</p>
</details>
| A-lifetimes,I-ICE,A-trait-system,T-compiler,C-bug,S-bug-has-test,fixed-by-next-solver,A-trait-objects,A-GATs,A-higher-ranked | low | Critical |
2,534,364,060 | rust | wrapping an erroring type in a tuple causes E0277 to be shown where it otherwise wouldn't be | ### Code
```rust
use bevy::prelude::*;
#[derive(Component)]
struct Comp1(NonZeroU8);
fn main() {}
fn setup(mut commands: Commands) {
commands.spawn((Comp1(1.try_into().unwrap()),));
}
```
### Current output
```
Compiling game v0.1.0 (/playground)
error[E0412]: cannot find type `NonZeroU8` in this scope
--> src/main.rs:4:14
|
4 | struct Comp1(NonZeroU8);
| ^^^^^^^^^ not found in this scope
|
help: consider importing this type alias
|
1 + use std::num::NonZeroU8;
|
error[E0277]: `(Comp1,)` is not a `Bundle`
--> src/main.rs:9:20
|
9 | commands.spawn((Comp1(1.try_into().unwrap()),));
| ----- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ invalid `Bundle`
| |
| required by a bound introduced by this call
|
= help: the trait `Bundle` is not implemented for `(Comp1,)`
= note: consider annotating `(Comp1,)` with `#[derive(Component)]` or `#[derive(Bundle)]`
= help: the following other types implement trait `Bundle`:
()
(B0, B1)
(B0, B1, B2)
(B0, B1, B2, B3)
(B0, B1, B2, B3, B4)
(B0, B1, B2, B3, B4, B5)
(B0, B1, B2, B3, B4, B5, B6)
(B0, B1, B2, B3, B4, B5, B6, B7)
and 8 others
note: required by a bound in `bevy::prelude::Commands::<'w, 's>::spawn`
--> /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bevy_ecs-0.14.2/src/system/commands/mod.rs:362:21
|
362 | pub fn spawn<T: Bundle>(&mut self, bundle: T) -> EntityCommands {
| ^^^^^^ required by this bound in `Commands::<'w, 's>::spawn`
Some errors have detailed explanations: E0277, E0412.
For more information about an error, try `rustc --explain E0277`.
error: could not compile `game` (bin "game") due to 2 previous errors
```
### Desired output
```
Compiling game v0.1.0 (/playground)
error[E0412]: cannot find type `NonZeroU8` in this scope
--> src/main.rs:4:14
|
4 | struct Comp1(NonZeroU8);
| ^^^^^^^^^ not found in this scope
|
help: consider importing this type alias
|
1 + use std::num::NonZeroU8;
|
Some errors have detailed explanations: E0277, E0412.
For more information about an error, try `rustc --explain E0277`.
error: could not compile `game` (bin "game") due to 2 previous errors
```
### Rationale and extra context
This is a bug: `E0277` is *not* shown when trying to directly use a trait on a type that contains an error, or when the trait is `Default`. It is only shown when a user-defined trait is used with a tuple.
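A bevy-free sketch of the same pattern (hypothetical names; `Bundle`, `Comp1`, and `MissingType` here stand in for the bevy trait, the component, and the unresolved `NonZeroU8`). This intentionally does not compile; under the desired behavior, only the `E0412` for `MissingType` would be reported, with no follow-on `E0277` for the tuple:

```rust
// Hypothetical minimal reproduction without bevy.
trait Bundle {}
impl Bundle for () {}

// E0412: `MissingType` is not in scope, so `Comp1` already contains an error.
struct Comp1(MissingType);

fn spawn<T: Bundle>(_bundle: T) {}

fn main() {
    // Today this additionally reports E0277: `(Comp1,)` is not `Bundle`,
    // even though the tuple's element type failed to resolve in the first place.
    spawn((Comp1(0),));
}
```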
### Other cases
_No response_
### Rust Version
rustc 1.83.0-nightly (04a318787 2024-09-15)
binary: rustc
commit-hash: 04a318787b39732e306faf5ef6dc584990f4f417
commit-date: 2024-09-15
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.0
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |