| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,642,511,418 | flutter | github-actions bot not closing issues. | ### Type of Request
bug
### Infrastructure Environment
github-actions bot
### What is happening?
I marked an issue as "waiting for customer response" 3 weeks ago, and after a period of inaction, the `github-actions` bot messaged that it was "reluctantly going to close this bug for now".
It didn't actually close the issue though (I guess it was feeling *very* reluctant!)
Here's the link to the issue:
* https://github.com/flutter/flutter/issues/155444
I'm leaving it open in case there's something fishy on it.
### Steps to reproduce
1. Add the `waiting for customer response` label to a flutter/flutter issue.
2. Wait 3 weeks.
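For context, triage automation of this kind is typically implemented with a scheduled workflow; a minimal `actions/stale` configuration (purely illustrative; the label name, time windows, and messages here are assumptions, not flutter's real setup) looks like:

```yaml
# Hypothetical sketch of a stale-close workflow using actions/stale.
# Values below are assumptions for illustration, not flutter's real config.
name: close-stale-issues
on:
  schedule:
    - cron: "0 0 * * *"  # run once a day
jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v9
        with:
          only-labels: "waiting for customer response"
          days-before-stale: 14
          days-before-close: 7  # the close step is what appears to be failing here
          stale-issue-message: "Without more information, we are reluctantly going to close this bug for now."
          close-issue-message: "Closed due to inactivity."
```

In this model, marking stale and closing are two separate actions, which would be consistent with a bot that posts the warning comment but never performs the close.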
### Expected results
I'd expect the issue to be *closed* after the inactivity period. The issue where I applied this label remained open. | team-infra,P2,triaged-infra | low | Critical |
2,642,536,934 | storybook | [Bug]: Vitest/Test Addon doesn't work if space in path | ### Describe the bug
If your project is in a folder that has a space in it, Vitest cannot find your tests. This may be a bug in Vitest itself, but since it surfaces as a Storybook-specific error I'll log it here. It could be Windows-only; I'm not sure.
Create any Storybook project with the test addon, as described here: https://storybook.js.org/docs/writing-tests/test-addon
Ensure that the path is something like `c:\my projects\storybook\src`, with a space somewhere in it.
When I execute npm run test, I get the following error for *all* stories. Note that I do not have any specific spec/test files, at this stage I'm just using it as a smoke test.
```
Error: No test suite found in file C:/my projects/storybook/src/stories/global/popover/popover.stories.tsx
```
If I remove the space in "my projects" the tests all run fine.
I'm not sure whether this should simply be documented, since it is technically "true" that those files contain no test suites (they are simply stories); at first I thought I might have to write separate spec files.
### Reproduction link
http://no.com
### Reproduction steps
_No response_
### System
Storybook Environment Info:

```
System:
  OS: Windows 10 10.0.19045
  CPU: (12) x64 12th Gen Intel(R) Core(TM) i5-1245U
Binaries:
  Node: 20.10.0 - C:\Node\node-v20.10.0-win-x64\node.EXE
  npm: 10.2.3 - C:\Node\node-v20.10.0-win-x64\npm.CMD <----- active
Browsers:
  Edge: Chromium (128.0.2739.42)
npmPackages:
  @storybook/addon-a11y: ^8.4.2 => 8.4.2
  @storybook/addon-essentials: ^8.4.2 => 8.4.2
  @storybook/addon-links: ^8.4.2 => 8.4.2
  @storybook/blocks: ^8.4.2 => 8.4.2
  @storybook/experimental-addon-test: ^8.4.2 => 8.4.2
  @storybook/react: ^8.4.2 => 8.4.2
  @storybook/react-vite: ^8.4.2 => 8.4.2
  @storybook/test: ^8.4.2 => 8.4.2
  @storybook/test-runner: ^0.19.1 => 0.19.1
  eslint-plugin-storybook: ^0.11.0 => 0.11.0
  storybook: ^8.4.2 => 8.4.2
```
### Additional context
_No response_ | bug,windows,addon: test | low | Critical |
2,642,560,947 | deno | deno fmt: different configuration per language type | I would like to format my yaml files different than my typescript code. For example, I'd like to use different indentation level.
In Prettier, it's possible by using:
```json
{
  "tabWidth": 4,
  "overrides": [
    {
      "files": [".yaml"],
      "options": {
        "tabWidth": 2
      }
    }
  ]
}
```
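The `overrides` mechanism Prettier describes can be sketched in a few lines; this is a simplified illustration of the lookup, not Prettier's actual implementation (note that Prettier matches glob patterns, so the `".yaml"` entry above would normally be written `"*.yaml"`):

```python
from fnmatch import fnmatch

def resolve_options(config: dict, filename: str) -> dict:
    """Start from the top-level options, then apply each matching
    override in order, so later overrides win."""
    options = {k: v for k, v in config.items() if k != "overrides"}
    for override in config.get("overrides", []):
        patterns = override["files"]
        if isinstance(patterns, str):
            patterns = [patterns]
        if any(fnmatch(filename, p) for p in patterns):
            options.update(override["options"])
    return options

config = {
    "tabWidth": 4,
    "overrides": [{"files": ["*.yaml"], "options": {"tabWidth": 2}}],
}
print(resolve_options(config, "deploy.yaml"))  # {'tabWidth': 2}
print(resolve_options(config, "main.ts"))      # {'tabWidth': 4}
```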
It seems this is currently not possible to achieve in a deno.json file. | deno fmt,config | low | Minor |
2,642,583,722 | ollama | Linux ollama 0.4.0, 0.4.2, 0.4.5, 0.5.0 custom compile for AMD ROCm fails missing ggml_rocm in go compile | ### What is the issue?
Report date: 2024-11-07
During a custom compile of ollama 0.4.0 on Linux (Pop!_OS 22.04) for AMD ROCm GPUs (AMD 6650 GPU), the initial compile works.
However, the subsequent Go compile fails after about two minutes with exit code 1, reporting that it is unable to find ggml_rocm.
ROCm 6.0
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.4.0 | bug,build | medium | Critical |
2,642,595,735 | flutter | [Impeller] Implement OpenGLES multisampling without render to texture extension. | Currently the OpenGL backend uses an ES multisampling extension. We should also support "regular" desktop OpenGL multisampling via creating multisampled textures/render buffers and performing the blit resolve ourselves. | P2,e: impeller,team-engine,triaged-engine,e: opengl | low | Major |
2,642,605,716 | deno | Short flag `-w` for `--watch` | Small quality-of-life improvement, but it'd be handy if the `--watch` option for `deno run`, `deno test`, etc. could be supplied as `-w`. At least for me, it's easily the single most common CLI option I use (certainly more common than `-c`, `-q`, or `-r`, which all get their own short flags). | suggestion,--watch | low | Minor |
2,642,633,821 | excalidraw | Canvas panning on the right mouse button | I need to move the canvas with the right mouse button; the middle mouse button is very inconvenient. Thanks! | enhancement | low | Minor |
2,642,639,286 | pytorch | Building PyTorch from source fails when magma.h is in /usr/include | ### 🐛 Describe the bug
Building PyTorch from source fails when `magma.h` from libmagma2 is in `/usr/include` instead of a non-standard directory.
Breakage is:
```
In file included from /usr/include/c++/10/bits/stl_algo.h:59,
                 from /usr/include/c++/10/functional:65,
                 from /builds/py3ps/pytorch/c10/core/Allocator.h:5,
                 from /builds/py3ps/pytorch/aten/src/ATen/detail/CUDAHooksInterface.h:3,
                 from /builds/py3ps/pytorch/aten/src/ATen/cuda/detail/CUDAHooks.h:3,
                 from /builds/py3ps/pytorch/aten/src/ATen/cuda/detail/CUDAHooks.cpp:1:
/usr/include/c++/10/cstdlib:75:15: fatal error: stdlib.h: No such file or directory
   75 | #include_next <stdlib.h>
      |               ^~~~~~~~~~
```
Commenting out these three lines seems to fix the issue:
https://github.com/pytorch/pytorch/blob/81d077cca2aacaf107afdefcd2e0292a95d2671b/caffe2/CMakeLists.txt#L986-L988
It seems those three lines are a workaround for an issue when `magma.h` lives in a non-standard directory, but they break the build when it lives in the standard include directory.
Can we have a fix for the cmake config that would take that into account? For now I'm patching the tree to comment out these lines, but it would be nice if I didn't have to maintain this downstream patch. Thanks!
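For illustration, the kind of guard being asked for might look like this in CMake (a sketch only; the variable name is an assumption and not necessarily the one used in caffe2/CMakeLists.txt):

```cmake
# Hypothetical guard: skip the include-path workaround when magma.h already
# lives in a default search path such as /usr/include.
if(NOT MAGMA_INCLUDE_DIR STREQUAL "/usr/include")
  include_directories(SYSTEM ${MAGMA_INCLUDE_DIR})
endif()
```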
### Versions
Collecting environment information...
PyTorch version: 2.2.1+cu12.xyz
Is debug build: False
CUDA used to build PyTorch: 12.2
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version: (Debian 10.2.1-6) 10.2.1 20210110
Clang version: Could not collect
CMake version: version 3.25.1
Libc version: glibc-2.31
Python version: 3.10.13 (main, Apr 4 2024, 19:12:22) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.6.43-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
Nvidia driver version: 555.42.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] gpytorch==1.13
[pip3] msgpack-numpy==0.4.8
[pip3] mypy==1.10.1
[pip3] mypy-extensions==1.0.0
[pip3] mypy-protobuf==3.2.0
[pip3] numpy==1.23.2
[pip3] numpyro==0.10.1
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.2.1+cu12.xyz
[pip3] torchmetrics==1.4.2
[pip3] triton==2.2.0
[conda] Could not collect
cc @malfet @seemethere | module: build,triaged,module: magma | low | Critical |
2,642,643,567 | pytorch | Add Android Support (Python Wheel) | ### 🚀 The feature, motivation and pitch
Now that Python 3.13 has official support for Android, it would be great to have an Android wheel for PyTorch.
There's a project called Chaquopy that has been manually patching libraries like PyTorch, and there's currently an outstanding request for an updated wheel (so I figured I should bring the request to the official devs): https://github.com/chaquo/chaquopy/issues/1215
### Alternatives
_No response_
### Additional context
_No response_
cc @seemethere @malfet @osalpekar @atalman | module: binaries,feature,triaged,module: android | low | Minor |
2,642,649,442 | pytorch | [Compiled_autograd] running nn.LayerNorm failed for torch.compile with compiled_autograd when deepspeed Zero3 | ### 🐛 Describe the bug
When running a simple model that includes torch.nn.LayerNorm under DeepSpeed ZeRO-3 with torch.compile and [compiled_autograd](https://github.com/pytorch/tutorials/blob/main/intermediate_source/compiled_autograd_tutorial.rst), an error occurs:
> site-packages/torch/_subclasses/fake_tensor.py:2017] RuntimeError: Attempting to broadcast a dimension of length 0 at -1! Mismatching argument at index 1 had torch.Size([0]); but expected shape should be broadcastable to [100, 120]
We first found this error in BERT model with deepspeed Zero3 with torch.compile and compiled_autograd.
- It's OK for DeepSpeed ZeRO-1/2 with torch.compile and compiled_autograd
- It's OK for DeepSpeed ZeRO-3 with torch.compile but without compiled_autograd
- There are a lot of graph breaks and recompiles in DeepSpeed ZeRO-3 with torch.compile, and ZeRO-3 partitions model parameters through [hooks](https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/runtime/zero/parameter_offload.py#L259).
- To simplify the issue, I made a small reproducer that isolates the failing op (torch.nn.LayerNorm)
**Investigation**
The error: "RuntimeError: Attempting to broadcast a dimension of length 0 at -1! Mismatching argument at index 1 had torch.Size([0]); but expected shape should be broadcastable to [100, 120]"
**It occurs when compiled autograd tries to trace the backward graph.**
It appears in the [LayerNorm backward decomposition](https://github.com/pytorch/pytorch/blob/main/torch/_decomp/decompositions.py#L1703). It tries to broadcast weight_cast (torch.Size([0])) to grad_out_cast's shape ([100, 120]) and fails.
```
if weight_cast is not None:
    grad_x_hat = grad_out_cast * weight_cast
```
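The broadcast failure itself follows the standard rule: aligning shapes from the right, two sizes are compatible only if they are equal or one of them is 1, so the size-[0] weight cannot broadcast against [100, 120]. A minimal re-implementation of that check (illustrative, not PyTorch's actual code):

```python
def broadcast_shapes(a, b):
    """NumPy/PyTorch-style broadcasting: align shapes from the right;
    each pair of sizes must be equal or contain a 1."""
    result = []
    for x, y in zip((1,) * (len(b) - len(a)) + tuple(a),
                    (1,) * (len(a) - len(b)) + tuple(b)):
        if x != y and x != 1 and y != 1:
            raise RuntimeError(f"cannot broadcast {x} against {y}")
        result.append(max(x, y))
    return tuple(result)

print(broadcast_shapes((100, 120), (120,)))  # (100, 120): a real weight broadcasts fine
# broadcast_shapes((100, 120), (0,)) raises, mirroring the error above
```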
If the LayerNorm weight is bypassed by setting `nn.LayerNorm(120, eps=1e-12, elementwise_affine=False)` instead of `elementwise_affine=True` in deepspeed_reproducer_cpu.py, everything runs fine.
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @chauhang @penguinwu @xmfan @yf225 @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7
### Error logs
```
[rank0]: Traceback (most recent call last):
[rank0]:   File "/home1/yitingw1/habana/deepspeed_demo/deepspeed_reproducer_cpu.py", line 83, in <module>
[rank0]:     model_engine.backward(loss)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 18, in wrapped_fn
[rank0]:     ret_val = func(*args, **kwargs)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 2020, in backward
[rank0]:     self.optimizer.backward(loss, retain_graph=retain_graph)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 18, in wrapped_fn
[rank0]:     ret_val = func(*args, **kwargs)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/deepspeed/runtime/zero/stage3.py", line 2250, in backward
[rank0]:     self.loss_scaler.backward(loss.float(), retain_graph=retain_graph)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/deepspeed/runtime/fp16/loss_scaler.py", line 63, in backward
[rank0]:     scaled_loss.backward(retain_graph=retain_graph)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/_tensor.py", line 581, in backward
[rank0]:     torch.autograd.backward(
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/autograd/__init__.py", line 347, in backward
[rank0]:     _engine_run_backward(
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/autograd/graph.py", line 825, in _engine_run_backward
[rank0]:     return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/utils/_stats.py", line 21, in wrapper
[rank0]:     return fn(*args, **kwargs)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 1308, in __torch_dispatch__
[rank0]:     return proxy_call(self, func, self.pre_dispatch, args, kwargs)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/fx/experimental/proxy_tensor.py", line 906, in proxy_call
[rank0]:     out = func(*args, **kwargs)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/_ops.py", line 716, in __call__
[rank0]:     return self._op(*args, **kwargs)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/utils/_stats.py", line 21, in wrapper
[rank0]:     return fn(*args, **kwargs)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1238, in __torch_dispatch__
[rank0]:     return self.dispatch(func, types, args, kwargs)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1692, in dispatch
[rank0]:     return self._cached_dispatch_impl(func, types, args, kwargs)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1348, in _cached_dispatch_impl
[rank0]:     output = self._dispatch_impl(func, types, args, kwargs)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1943, in _dispatch_impl
[rank0]:     return decomposition_table[func](*args, **kwargs)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/_decomp/decompositions.py", line 1729, in native_layer_norm_backward
[rank0]:     grad_x_hat = grad_out_cast * weight_cast
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/utils/_stats.py", line 21, in wrapper
[rank0]:     return fn(*args, **kwargs)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1238, in __torch_dispatch__
[rank0]:     return self.dispatch(func, types, args, kwargs)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1692, in dispatch
[rank0]:     return self._cached_dispatch_impl(func, types, args, kwargs)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 1339, in _cached_dispatch_impl
[rank0]:     output = self._dispatch_impl(func, types, args, kwargs)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/_subclasses/fake_tensor.py", line 2013, in _dispatch_impl
[rank0]:     r = func(*args, **kwargs)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/_ops.py", line 716, in __call__
[rank0]:     return self._op(*args, **kwargs)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 273, in _fn
[rank0]:     result = fn(*args, **kwargs)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/_prims_common/wrappers.py", line 141, in _fn
[rank0]:     result = fn(**bound.arguments)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/_refs/__init__.py", line 1049, in _ref
[rank0]:     a, b = _maybe_broadcast(a, b)
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/_refs/__init__.py", line 422, in _maybe_broadcast
[rank0]:     common_shape = _broadcast_shapes(
[rank0]:   File "/home1/yitingw1/mambaforge/envs/wyt_pt/lib/python3.10/site-packages/torch/_refs/__init__.py", line 411, in _broadcast_shapes
[rank0]:     raise RuntimeError(
[rank0]: RuntimeError: Attempting to broadcast a dimension of length 0 at -1! Mismatching argument at index 1 had torch.Size([0]); but expected shape should be broadcastable to [100, 120]
```
### Minified repro
Running script:
`TORCHDYNAMO_EXTENDED_DEBUG_CPP=1 TORCH_LOGS="+dynamo,graph,graph_code,graph_breaks,recompiles,aot_graphs,aot_joint_graph,compiled_autograd_verbose" deepspeed --num_nodes 1 --num_gpus 1 deepspeed_reproducer_cpu.py`
Below is deepspeed_reproducer_cpu.py
```python
import torch
import torchvision
import torchvision.transforms as transforms
import torch.distributed as dist
import deepspeed
from deepspeed.accelerator import get_accelerator
from tqdm import tqdm
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(32 * 32 * 3, 120)
        self.fc2 = nn.Linear(120, 10)
        self.LayerNorm1 = nn.LayerNorm(120, eps=1e-12, elementwise_affine=True)

    def forward(self, x):
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = F.relu(self.fc1(x))
        x = self.LayerNorm1(x)
        x = self.fc2(x)
        return x


compile_kwargs = {"dynamic": False}
device = torch.device('cpu')
model = Net()
model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

model_engine, optimizer, *_ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    optimizer=optimizer,
    config="./deepspeed_config.json",
)

# torch_compile
model_engine.compile(
    compile_kwargs=compile_kwargs,
)

# dataset
transform = transforms.Compose(
    [transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
)
batch_size = 100
trainset = torchvision.datasets.CIFAR10(
    root="./DATA/CIFAR10", train=True, download=True, transform=transform
)
# process dataset
trainloader = DataLoader(
    trainset,
    batch_size=batch_size,
    sampler=DistributedSampler(trainset, shuffle=True),
    num_workers=16,
    pin_memory=True,
)
progress_bar = tqdm(
    total=len(trainloader),
    desc=f"Training 1/1 epoch",
    position=0,
    leave=True,
    disable=dist.is_initialized() and dist.get_rank() != 0,
)

for epoch in range(100):
    with torch._dynamo.compiled_autograd.enable(
        torch.compile(backend=get_accelerator().get_compile_backend(), **compile_kwargs)
    ):
        running_loss = 0.0
        for i, data in enumerate(trainloader, 0):
            inputs, labels = data
            inputs, labels = inputs.to(device), labels.to(device)
            optimizer.zero_grad()
            # forward + backward + optimize
            outputs = model_engine(inputs)
            loss = criterion(outputs, labels)
            model_engine.backward(loss)
            model_engine.step()
            # print statistics
            running_loss += loss.item()
            if i % 2000 == 1999:  # print every 2000 mini-batches
                print(f"[{epoch + 1}, {i + 1:5d}] loss: {running_loss / 2000:.3f}")
                running_loss = 0.0
            progress_bar.update(1)
print("Finished Training")
```
Below is deepspeed_config.json
```json
{
  "train_batch_size": 32,
  "optimizer": {
    "type": "SGD",
    "params": {
      "lr": 0.001,
      "momentum": 0.9
    }
  },
  "zero_allow_untested_optimizer": true,
  "zero_optimization": {
    "stage": 3,
    "overlap_comm": false,
    "reduce_scatter": false,
    "contiguous_gradients": false
  }
}
```
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.4
Libc version: glibc-2.35
Python version: 3.10.0 | packaged by conda-forge | (default, Nov 20 2021, 02:24:10) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-102-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.26.3
[pip3] torch==2.5.1+cpu
[pip3] torchaudio==2.5.1+cpu
[pip3] torchvision==0.20.1+cpu
[pip3] deepspeed==0.15.3 | oncall: distributed,triaged,oncall: pt2,module: compiled autograd | low | Critical |
2,642,721,047 | godot | Reverb "Damp" parameter values don't match description. | ### Tested versions
Godot 4.3 .Net
### System information
Win10 x64 22H2
### Issue description
The "Damp" parameter's description says it measures how reflective the reverb is:
- 0 Damping should imply full reflectivity, i.e. no damping.
- 1 Damping should imply no reflectivity, i.e. full damping.
Right now the behavior is inverted: 0 Damping gives no reflectivity (full damping), and 1 Damping gives full reflectivity (no damping).
Solution: Invert values and keep current description OR keep current values and change Damping to Reflectiveness.
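In code form, the first proposed option is a one-line inversion (pseudocode for the suggested fix, not Godot source):

```python
def reflectivity_from_damp(damp: float) -> float:
    """Proposed mapping, matching the current description:
    damp = 0 -> fully reflective (no damping),
    damp = 1 -> no reflectivity (full damping)."""
    return 1.0 - damp

assert reflectivity_from_damp(0.0) == 1.0  # no damping: fully reflective
assert reflectivity_from_damp(1.0) == 0.0  # full damping: no reflection
```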
### Steps to reproduce
n/a
### Minimal reproduction project (MRP)
n/a | bug,documentation,topic:audio | low | Minor |
2,642,724,355 | pytorch | upsample_bilinear backward is super slow in bf16 as compared to fp32 | ### 🐛 Describe the bug
The backward pass of upsample_bilinear is much slower in bf16 than in fp32; could you please help check why? Thanks in advance.
The following is the test case:
```python
import torch
import torch.nn as nn
import time


class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv = nn.Conv2d(3, 1024, 3, padding=1)
        self.upsample = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)

    def forward(self, x):
        x = self.conv(x)
        x = self.upsample(x)
        return x


def profile_forward_backward(model, input_data, dtype):
    model = model.to('cuda')
    input_data = input_data.to('cuda').to(dtype)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(10):
        optimizer.zero_grad()
        with torch.autocast(device_type='cuda', dtype=dtype):
            output = model(input_data)
            loss = output.mean()
        loss.backward()
        optimizer.step()
    print("finish warm-up")
    torch.cuda.synchronize()  # drain warm-up kernels before starting the clock
    start_time = time.time()
    for _ in range(1000):
        optimizer.zero_grad()
        with torch.autocast(device_type='cuda', dtype=dtype):
            output = model(input_data)
            loss = output.mean()
        loss.backward()
        optimizer.step()
    torch.cuda.synchronize()  # wait for queued kernels before stopping the clock
    end_time = time.time()
    elapsed_time = end_time - start_time
    print(f"{dtype} - Forward and backward time: {elapsed_time} seconds")


model = MyModel()
input_data = torch.randn(1, 3, 32, 32)
profile_forward_backward(model, input_data, torch.bfloat16)
profile_forward_backward(model, input_data, torch.float32)
```
results:
```
finish warm-up
torch.bfloat16 - Forward and backward time: 5.045899152755737 seconds
finish warm-up
torch.float32 - Forward and backward time: 0.5555577278137207 seconds
```
After profiling further, we found that it is the backward pass that becomes slow; the forward times are quite close.
### Versions
pytorch2.3.0-py3.10-cudnn8-cuda12.3
cc @msaroufim @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @ptrblck | needs reproduction,module: performance,module: nn,module: cuda,triaged | low | Critical |
2,642,724,697 | react-native | [Bug]: maxFontSizeMultiplier is not being respected | ### Description
The maxFontSizeMultiplier prop is not being respected for Text and TextInput components. When accessibility settings are changed to increase the display text size, the font size increases beyond the multiplier set.
### Steps to reproduce
- Init a new project from the CLI template
- Add a few Text/TextInput components to the screen
- Prevent some from scaling and allow others to scale with the maxFontSizeMultiplier prop passed
- Change OS accessibility settings from display text size and observe bug
### React Native Version
0.76.1
### Affected Platforms
Runtime - Android, Runtime - iOS
### Areas
Fabric - The New Renderer
### Output of `npx react-native info`
```text
System:
  OS: macOS 15.0.1
  CPU: (12) arm64 Apple M3 Pro
  Memory: 107.84 MB / 36.00 GB
  Shell:
    version: "5.9"
    path: /bin/zsh
Binaries:
  Node:
    version: 23.1.0
    path: /opt/homebrew/bin/node
  Yarn:
    version: 4.5.1
    path: /opt/homebrew/bin/yarn
  npm:
    version: 10.9.0
    path: /opt/homebrew/bin/npm
  Watchman:
    version: 2024.10.28.00
    path: /opt/homebrew/bin/watchman
Managers:
  CocoaPods:
    version: 1.15.2
    path: /Users/craig/.rvm/gems/ruby-3.3.5/bin/pod
SDKs:
  iOS SDK:
    Platforms:
      - DriverKit 24.1
      - iOS 18.1
      - macOS 15.1
      - tvOS 18.1
      - visionOS 2.1
      - watchOS 11.1
  Android SDK:
    API Levels:
      - "23"
      - "24"
      - "25"
      - "26"
      - "27"
      - "28"
      - "29"
      - "30"
      - "31"
      - "33"
      - "34"
      - "35"
    Build Tools:
      - 28.0.3
      - 29.0.2
      - 29.0.3
      - 30.0.1
      - 30.0.2
      - 30.0.3
      - 31.0.0
      - 33.0.0
      - 33.0.1
      - 33.0.2
      - 34.0.0
      - 35.0.0
    System Images:
      - android-28 | Google Play Intel x86 Atom
      - android-29 | Google APIs Intel x86 Atom
      - android-29 | Google Play Intel x86 Atom
      - android-30 | Google APIs Intel x86 Atom
      - android-30 | Google APIs Intel x86_64 Atom
      - android-30 | Google Play Intel x86 Atom
      - android-30 | Google Play Intel x86 Atom_64
      - android-33 | Google APIs ARM 64 v8a
    Android NDK: Not Found
IDEs:
  Android Studio: 2024.2 AI-242.23339.11.2421.12550806
  Xcode:
    version: 16.1/16B40
    path: /usr/bin/xcodebuild
Languages:
  Java:
    version: 17.0.12
    path: /usr/bin/javac
  Ruby:
    version: 3.3.5
    path: /Users/craig/.rvm/rubies/ruby-3.3.5/bin/ruby
npmPackages:
  "@react-native-community/cli":
    installed: 15.0.0
    wanted: 15.0.0
  react:
    installed: 18.3.1
    wanted: 18.3.1
  react-native:
    installed: 0.76.1
    wanted: 0.76.1
  react-native-macos: Not Found
npmGlobalPackages:
  "*react-native*": Not Found
Android:
  hermesEnabled: true
  newArchEnabled: true
iOS:
  hermesEnabled: true
  newArchEnabled: true
```
### Stacktrace or Logs
```text
No logs
```
### Reproducer
https://github.com/mysport12/maxFontSizeMultiplierBug
### Screenshots and Videos
| Issue: Author Provided Repro,Impact: Regression,Resolution: PR Submitted,Component: TextInput,Component: Text,Needs: Attention,Type: New Architecture | medium | Critical |
2,642,730,245 | TypeScript | Proposal: Enhance String interface definition to support type inference for string literals | ### 🔍 Search Terms
string generic methods, string concat, checked domain literal types
Related issues [#44268](https://github.com/microsoft/TypeScript/issues/44268)
### ✅ Viability Checklist
- [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [x] This wouldn't change the runtime behavior of existing JavaScript code
- [x] This could be implemented without emitting different JS based on the types of the expressions
- [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
**Current Behavior:**
Currently, no string manipulation other than assignment of literals and use of `${template}` literals preserves literal types: even basic toString and valueOf calls lose the literal string type in the process.
**Desired Behavior:**
I propose enhancing the type definitions of the String interface so that exact types are inferred when methods are used with string literals and templates. This would improve type safety and the developer experience when working with string manipulation. At least valueOf, toString, toUpperCase, and toLowerCase can be implemented by changing nothing other than the definition of the String interface.
**Example of Current Issue:**
```typescript
const result = 'hello'.concat(' ', 'world'); // TypeScript infers 'string' instead of 'hello world'
```
**Proposed Solution:**
Introduce a new type definition for `concat` that uses variadic tuple types to infer the correct concatenated string literal type:
```typescript
type Join<S extends string[], D extends string> =
  S extends [] ? '' :
  S extends [infer First extends string, ...infer Rest extends string[]] ?
    `${First}${Rest extends [] ? '' : D}${Join<Rest, D>}` : string;

interface String {
  concat<This extends string, S extends string[]>(this: This, ...strings: S): `${This}${Join<S, ''>}`;
}

const c = 'qwery'.concat("123", 'abcd')
```
**Benefits:**
- **Improved Type Inference:** Developers will get precise types when concatenating string literals.
- **Better Code Completion and Error Detection:** IDEs can provide better suggestions and catch more errors at compile-time.
- **Consistency:** Aligns with TypeScript's goal of providing accurate and useful type information.
**Potential Drawbacks:**
- **Complexity:** This might increase the complexity of TypeScript's type system for string operations.
- **Performance:** There could be an impact on type-checking performance for very complex string concatenations.
- **Errors on reassignment:** There could be problems when concat is used to initialize a let variable:
```typescript
type Join<S extends string[], D extends string> =
  S extends [] ? '' :
  S extends [infer First extends string, ...infer Rest extends string[]] ?
    `${First}${Rest extends [] ? '' : D}${Join<Rest, D>}` : string;

interface String {
  concat<This extends string, S extends string[]>(this: This, ...strings: S): `${This}${Join<S, ''>}`;
  toString<This extends string>(this: This): This;
  toUpperCase<This extends string>(this: This): Uppercase<This>;
  toLowerCase<This extends string>(this: This): Lowercase<This>;
  valueOf<This extends string>(this: This): This;
}

let a = "123".concat("qwerty");
a = "something else"; // this would be an error under the interface modification, because a is inferred as "123qwerty"
```
**Additional Context:**
This change would particularly benefit scenarios where string templates or literal string concatenation are heavily used, enhancing the robustness of TypeScript's type system in string manipulation contexts.
[Playground](https://www.typescriptlang.org/play/?#code/C4TwDgpgBAUg9gSwHYB4DKUIA9gSQEwGcpDgAnZAcwG0BdAGigBFMc8iTyqA+KAXigAoKCKgZsuAsTpQA-FADkCqAC5ho8WylRqyAGYQyUAGIIypVpI6kKSSowB0T-YagAlCBYntiNqnVo5IVEQqAADABIAb1NzYABfaI8vLQ4ZeSVVZkSo+GQUZOBGJm54sKy-OwBuQUFQSDEAGwQAYwg0YABDMmB0Sx9OW3soADl+7SQAVwBbACNDRgBBcesuOxkBOl4BdRFIqMXqBUa8SmAACwVaMpXifZGb3dD5NCeQlTFb8OiXI1jSHK-dyeBLlN6hEQvZptDrdXqFRgjRjUf5FKBOByLWi8cEQj5oGp1cDQNDQiAAUQIfW82kqw0p+C+UzmCygyxpHE6SBAGx0DDZLRaXzp-EUCm2wVE+0Ox1OFyuNw5d2iDLKuKCi0F6vxX32QNRgKQBiMhTVEIhUNaFKp6vNogRtrtUAZ9EdduoGMWjFRDDd5ulgpyBrCfpE3D9H01LUJ9WgAGFzt1Fr1NFZfGthgBJJkzeZkJbCjO8rai9SpgZ6o2uA0-KsmkFm0TyGUnOzywJKqCZ8HyVHgj4JpPwkGMTPIz3esykbFPD4AIjnMeJoxmhlaAAU4IQEMAEAA3CAoAAqy87zLzEv2J8gitSdwAtNFz4YbvIkBAD0YPteIFURLVkFwMg9E6NoxAzKAoieFo4CQFpOl6I9zgQYhOzpRhy1pItsQACguFDv2QwhHCcOlCHxABKD4ryInI8lQNBGCUUowhqEIWkTMhk2PIjCyGRhBzITdULvKBnzIbg8KIwiUMYDjumEgdOOEqioEE7ikNktTlK3bg2NEBCeJQviqAEnSRLTMTc0MST8PIqBNOIqB5KErclIUrdVPUxCiLMjzCD0p5gDgDohiMiyBjpWzpIcojVMc-SRGCgBVMBIDIONOkIQ9HJMuxooI2KUNU1L0vg7LwsCkJgoAGTgAB3QxMoq3K0IzAr7Mc1S6sashypyoiqtEPdOkaSYIAAeT0cK8soDqZMIeKiMSkgyRmtr+PAuEcxZfNnQIHa81FABGCAAE4LvmoqnNIOF8S6HpGHYWQPgZVTSStWEenQMkGXCxgGW4DCHuAcN4lqE5gGc0UFAAR16kAFAcGC4IQnC52OgAmABmOcmMWAAhOMmAUCjCRRix6oAfRc0VYzgPRnORzjkxw46ycECmoYgGnONFFoHEIMkcMxxhsY5rnMGmMBQHnOd+eZoc2YABmViXYIsGCwBAY6FeC0KqBw9WkE1uBtcxhWRrGya9CNmpJcmNLXAEAWUqdjKsogI3OY1qHGga52mdqgOPey72fZNqG9A4AR33q8ChnRgAWXGKIcfWM3DyXOlFOcAB1JiYAAOAB2ckC6YONlaTucHDRtWI8IOATgcf3KBwzoKIhiBubIMg4CMAQ5xTzHa5R+DgHR465y7wRDH7weoDnJvph75C7CpiBGmyxdakbqOEC3mOdDnJBOlXvGl4QfBL7nGDSDnQIsuc33CT0SY4N3WCoEoPBDAQ9oA9dx2AAIrjTICASk5AQAoFMEfWajAJpkHwK4Tsc5FhoDjPLAAPkvJg5JMFzkknoQ+jRGQfDgWQxgA8UFkA+Eg2hFFIJPDID3SYZAkBQBIUfRWXFJ5q3TnAUqTVPZG2RrBCeOFuFkMFsLdmjA5x8EvjQwwHNwb7ygLgCwAhpFEAcNMToYAcJSNIfgJhfBeC6PEajSec4RjnwgDPMmUAAD0LioC-GILGDRWjgAWx0aYwg+jDHGN0eY3g+xdHxHsavMIzi3FQHwHATwSAFBQyBBcaA3iNGsNhuNbRP8-5kAAWgIBVAwGGEgUgaBJij6EGoMrfk6DCFdyAA)
---
### 📃 Motivating Example
In TypeScript, operations on string literals such as concatenation or transformations (e.g., toUpperCase, toLowerCase) typically lose the specific literal types, with results inferred as a general string. This loses valuable type information, weakening compile-time checks and forcing manual type assertions or annotations.
Consider the following example:
```typescript
const basePath = "/api";
const usersPath = "/users";
const fullPath = basePath.concat(usersPath); // Inferred as `string`
```
Here, despite knowing that `basePath` is `"/api"` and `usersPath` is `"/users"`, TypeScript loses the literal type information after concatenation, inferring `fullPath` as `string` rather than `"/api/users"`. This loss of precision means we can't rely on TypeScript to enforce strict types when building paths or identifiers, leading to potential runtime errors.
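Until such inference exists, one common workaround (my own sketch, not part of this proposal) is a small helper whose return type is a template literal type; the cast carries the literal type that `.concat` drops:

```typescript
// Sketch of a workaround: a typed concat helper. The template literal
// return type `${A}${B}` preserves the literal types; the `as` cast is
// needed because TypeScript widens the runtime template expression.
function concatLiteral<A extends string, B extends string>(a: A, b: B): `${A}${B}` {
  return `${a}${b}` as `${A}${B}`;
}

const basePath = "/api";
const usersPath = "/users";
const fullPath = concatLiteral(basePath, usersPath); // typed as "/api/users"
console.log(fullPath);
```

The downside, as noted in the use cases below, is that every built-in method needs its own hand-written typed wrapper.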
### ๐ป Use Cases
1. What do you want to use this for?
To get rid of boilerplate when transforming strings, and to ensure that the result of a transformation satisfies the declared constraints
2. What shortcomings exist with current approaches?
Explicit type declarations and casts after every manipulation
3. What workarounds are you using in the meantime?
Creating a set of utility functions that act as a Proxy for calling the built-in methods | Suggestion,Awaiting More Feedback | low | Critical |
2,642,777,848 | pytorch | Inconsistent gradients in TorchScript function | ### 🐛 Describe the bug
This code reproduce the bug:
```python
import torch
@torch.jit.script
def log_diff(s: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
loss = s.log() - t.log()
return loss.real
def log_diff_ref(s: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
loss = s.log() - t.log()
return loss.real
for i in range(2):
print()
print(f"#### {i=} ####")
torch.manual_seed(23333)
s = torch.rand([], dtype=torch.complex64)
t = torch.rand([], dtype=torch.complex64)
print(f"{s=}")
print(f"{t=}")
s = s.requires_grad_()
loss = log_diff(s, t)
loss_ref = log_diff_ref(s, t)
print(f"{loss=}")
print(f"{loss_ref=}")
grad, = torch.autograd.grad(loss, s, retain_graph=True)
grad_ref, = torch.autograd.grad(loss_ref, s, retain_graph=True)
print(f"{grad=}")
print(f"{grad_ref=}")
diff = grad - grad_ref
print(f"{diff=}")
```
The result is something like:
```
#### i=0 ####
s=tensor(0.9137+0.6452j)
t=tensor(0.6840+0.0498j)
loss=tensor(0.4892, grad_fn=<SelectBackward0>)
loss_ref=tensor(0.4892, grad_fn=<SelectBackward0>)
grad=tensor(0.7303+0.5157j)
grad_ref=tensor(0.7303+0.5157j)
diff=tensor(0.+0.j)
#### i=1 ####
s=tensor(0.9137+0.6452j)
t=tensor(0.6840+0.0498j)
loss=tensor(0.4892, grad_fn=<SelectBackward0>)
loss_ref=tensor(0.4892, grad_fn=<SelectBackward0>)
grad=tensor(0.7303-0.5157j)
grad_ref=tensor(0.7303+0.5157j)
diff=tensor(0.-1.0314j)
```
As we can see, invoking the same scripted function multiple times yields different gradients (`diff != 0` when `i = 1`, while `diff = 0` when `i = 0`).
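For reference, the eager gradient matches the analytic expectation under PyTorch's convention of returning $\partial L/\partial x + i\,\partial L/\partial y$ for a real-valued loss of a complex input $s = x + iy$ (this derivation is mine, not from the report above). With $L = \operatorname{Re}(\log s - \log t) = \log|s| - \log|t|$:

```latex
\nabla_s L
  = \frac{\partial L}{\partial x} + i\,\frac{\partial L}{\partial y}
  = \frac{x + iy}{x^2 + y^2}
  = \frac{s}{|s|^2}
  = \frac{1}{\bar{s}}
```

For $s = 0.9137 + 0.6452i$ this gives $\approx 0.7303 + 0.5157i$, matching `grad_ref` and the first scripted run; the second scripted run instead returns $1/s \approx 0.7303 - 0.5157i$, i.e. the conjugate appears to be dropped.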
### Versions
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 14.2.1 20240910
Clang version: 18.1.8
CMake version: version 3.31.0
Libc version: glibc-2.40
Python version: 3.12.7 (main, Oct 1 2024, 11:15:50) [GCC 14.2.1 20240910] (64-bit runtime)
Python platform: Linux-6.6.59-1-lts-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 565.57.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i5-12400F
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 5
CPU(s) scaling MHz: 49%
CPU max MHz: 4400.0000
CPU min MHz: 800.0000
BogoMIPS: 4993.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 288 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 7.5 MiB (6 instances)
L3 cache: 18 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit | low | Critical |
2,642,787,727 | godot | FogVolume incorrectly culls nearby faces when looking "inward". | ### Tested versions
Reproducible in 4.3 stable mono, 4.2.2 stable mono, 4.0 stable
It's been there the entire time.
### System information
Godot v4.3.stable.mono - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 (NVIDIA; 32.0.15.6094) - Intel(R) Core(TM) i5-10600KF CPU @ 4.10GHz (12 Threads)
### Issue description
If a camera is within the "bounds" of a face of a fog volume and looks "inward", away from that face, the area of fog between the camera and that face is not rendered, creating a very obvious discontinuity: https://www.youtube.com/watch?v=7L5AhYp65Ec
This effect only occurs if the camera is close to the face that will be skipped. For a cube this region is quite small (https://www.youtube.com/watch?v=f6VrGqzpAyo) and can be mitigated significantly with edge fade; however, the size of this region appears to be a function of the ratio between the lengths of the sides. For a very long or wide volume, this "error region" can extend all the way to the centre and fill the entire box, creating extreme artifacts: https://www.youtube.com/watch?v=Lu84S4dGf5Q
For the sake of being verbose: my real-world use case where I run into this bug is trying to create a fog layer with mountains peeking out of it. It looks beautiful almost right out of the box when viewed from outside; my compliments to the ~~chefs~~ technical artists.

However, when viewed from within, the top and bottom of the frame are heavily culled depending on whether you are looking down or up:


Currently, to get around this, I will have to make the fog volume extend as high as it is wide, or use a global volume, wasting performance every frame.
This issue occurs at both low (64x64) and high (512x512) sample resolutions, with procedural and shader fog materials (said shader directly sets the density to a uniform value), and with all volume shapes except world.
I have been using terms like culling and occlusion in this report to describe the visuals as I see them; I have no idea what process is actually causing the behaviour.
### Steps to reproduce
Open the reproduction project and position the editor camera near the inside boundary of the fog volume.
Oscillate between looking inwards and outwards to observe the cutoff. The camera already in the scene is in the right place and has a look-around script attached for testing in play mode. See the videos for specifics:
https://www.youtube.com/watch?v=Lu84S4dGf5Q
https://www.youtube.com/watch?v=7L5AhYp65Ec
https://www.youtube.com/watch?v=f6VrGqzpAyo
### Minimal reproduction project (MRP)
https://github.com/NatCracken/godot_fog_reproduction
The project was made in a Mono build, but does not use any C# | bug,topic:rendering | low | Critical |
2,642,963,066 | pytorch | Support Joinable loss functions with DDP | ### ๐ The feature, motivation and pitch
I'm working on a loss function that requires synchronizing a buffer across workers, and that **isn't** recoverable by a custom reduction of gradients across workers (which is already achievable through DDP's `register_comm_hook`).
I would like it to also work with the `Join` context manager for handling uneven dataloader lengths, i.e. make the loss function a `Joinable`.
As of today, DDP only has a single `JoinHook` that shadows any operations in **both its forward and backward passes**. Note that DDP does indeed perform some distributed operations in its forward pass, namely `_check_global_requires_backward_grad_sync` in `def _pre_forward`.
Since the loss function's forward must operate _after_ the model's forward pass and _before_ the model's backward pass, there is no ordering of joinables to `Join` (either `[ddp_model, loss]` or `[loss, ddp_model]`) that is able to produce correct behavior: the workers will always hang.
Potential resolutions from looking at the source code:
- separate DDP's join hooks into forward and backward join hooks that can allow the user to insert their `Joinable` in between, or,
- avoid any distributed operations in the forward, so that the current DDP's join hook can be assumed to only shadow the backward pass.
Please let me know if the use case is not clear, or if I can help by providing a concrete example.
### Alternatives
For now, my alternative is to split my loss function into two steps, where I synchronize the buffer in question prior to the ddp_model's forward call, and perform the loss function's forward call using the pre-computed buffer.
This happens to work in the case of my loss function, but is not necessarily doable for a loss function whose synchronized buffer depends on the output of the model's forward.
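As a rough illustration, the split described above can be sketched as follows (plain Python with the collective stubbed out; `sync_buffer`, `BufferedLoss`, and the scalar arithmetic are all illustrative stand-ins, not the actual loss in question — in real code `sync_buffer` would wrap `torch.distributed.all_reduce`):

```python
def sync_buffer(local_buffer):
    # Stand-in for an all_reduce across workers; a no-op on a single rank.
    return local_buffer


class BufferedLoss:
    """Loss split into (1) buffer sync before the model forward,
    (2) a purely local forward that uses the pre-synced buffer."""

    def __init__(self):
        self.buffer = 0.0

    def pre_forward_sync(self, batch_stat):
        # Step 1: synchronize the buffer BEFORE the DDP model's forward,
        # so no collective runs between the model forward and backward.
        self.buffer = sync_buffer(batch_stat)

    def __call__(self, model_out, target):
        # Step 2: purely local computation using the pre-synced buffer.
        return (model_out - target) ** 2 + 0.1 * self.buffer


loss_fn = BufferedLoss()
loss_fn.pre_forward_sync(batch_stat=2.0)   # before ddp_model(...)
loss = loss_fn(model_out=1.5, target=1.0)  # after ddp_model(...)
print(loss)  # ~0.45
```

The limitation is exactly as stated above: this only works when the synchronized buffer can be computed from inputs available before the model's forward.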
### Additional context
_No response_
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed | low | Minor |
2,643,001,484 | transformers | Add functionality for deleting adapter layers in PEFT integration | ### Feature request
This request aims to introduce functionality to delete specific adapter layers integrated with PEFT (Parameter-Efficient Fine-Tuning) within the Hugging Face Transformers library. This would enable users to manage memory and computational resources more efficiently by unloading adapters that are no longer needed during model inference or fine-tuning.
### Motivation
This feature request addresses scenarios where users load multiple adapters during fine-tuning or inference but need the ability to selectively unload adapters without reloading the entire model. This enhancement is crucial for optimizing performance in memory-constrained environments and enhancing the flexibility of adapter management within Transformer models.
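As a pseudocode sketch of the kind of API this would enable (method and adapter names are hypothetical until the linked PR is merged):

```python
# Pseudocode — illustrative only, not the final transformers API.
model = AutoModelForCausalLM.from_pretrained("base-model")
model.load_adapter("path/to/adapter_a", adapter_name="adapter_a")
model.load_adapter("path/to/adapter_b", adapter_name="adapter_b")

# ... run inference with adapter_a ...

# Free the memory held by an adapter that is no longer needed,
# without reloading the base model:
model.delete_adapter("adapter_a")
```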
### Your contribution
Yes, I have submitted a PR #34650 | Feature request | low | Major |
2,643,002,968 | deno | Preserve double quotes in JSX with deno fmt | **Description:**
When formatting code with `deno fmt`, it currently converts double quotes to single quotes in JSX (HTML-in-JS) components. This behavior differs from Prettier, which preserves double quotes in JSX even when the `singleQuote` option is set to `true`.
**Example:**
In Prettier, with `singleQuote: true`, double quotes are preserved in JSX, as shown below:
```javascript
import React from 'react';
const SimpleComponent = () => {
return (
<div className="app">
Hello, React!
</div>
);
};
```
However, `deno fmt` changes the quotes in JSX attributes to single quotes:
```javascript
import React from 'react';
const SimpleComponent = () => {
return (
<div className='app'>
Hello, React!
</div>
);
};
```
**Feature Request:**
Could Deno provide an option to preserve or enforce double quotes specifically in JSX, similar to how Prettier handles this with `singleQuote: true`?
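For example, a hypothetical `deno.json` shape for such an option might look like this (the `jsxQuoteStyle` key is invented for illustration, mirroring Prettier's `singleQuote`/`jsxSingleQuote` pair):

```jsonc
{
  "fmt": {
    // existing option: prefer single quotes outside JSX
    "singleQuote": true,
    // hypothetical option: keep double quotes inside JSX attributes
    "jsxQuoteStyle": "double"
  }
}
```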
This feature would help users maintain consistency with double quotes in JSX without affecting other parts of the code where single quotes might be preferred. | suggestion,deno fmt | low | Minor |
2,643,013,627 | transformers | Add EXAONE | ### Model description
EXAONE is a large language model developed by LG AI Research. We released [EXAONE 3.0](https://github.com/LG-AI-EXAONE/EXAONE-3.0) in August, but we've since updated the model code and are working on integrating our implementation with the Huggingface transformers library.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
- Model URL : https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct
- GitHub repo for EXAONE : https://github.com/LG-AI-EXAONE/EXAONE-3.0 | New model | low | Minor |
2,643,091,348 | deno | [node compatibility] `npm:cdk8s` generates wrong JSON value | Version: `deno 2.1.1 (stable, release, x86_64-unknown-linux-gnu)`
### Prerequisites
```sh
โฏ helm version
version.BuildInfo{Version:"v3.16.2", GitCommit:"13654a52f7c70a143b1dd51416d633e1071faffb", GitTreeState:"clean", GoVersion:"go1.22.7"}
โฏ node -v
v22.11.0
โฏ bun -v
1.1.36
```
Deno configuration file
```jsonc
// deno.json
{
"imports": {
"cdk8s": "npm:cdk8s@2.69.18",
"cdk8s-cli": "npm:cdk8s-cli@2.198.267",
"constructs": "npm:constructs@10.4.2",
"tsx": "npm:tsx@4.19.2"
},
"nodeModulesDir": "auto",
"tasks": {
"run1": {
"command": "deno run --allow-all npm:cdk8s-cli synth --language typescript --output debug-deno/d1 --app 'deno run --allow-all a.ts'"
},
"run2": {
"command": "deno run --allow-all npm:cdk8s-cli synth --language typescript --output debug-deno/d2 --app 'npx tsx a.ts'"
},
"run3": {
"command": "deno run --allow-all npm:cdk8s-cli synth --language typescript --output debug-deno/d3 --app 'bun run a.ts'"
}
}
}
```
### Minimal reproducible code
```ts
// a.ts
import { App, Chart, Helm, YamlOutputType } from 'cdk8s';
const app = new App({
yamlOutputType: YamlOutputType.FILE_PER_RESOURCE,
});
const c = new Chart(app, 'debug', {
disableResourceNameHashes: true,
});
new Helm(c, 'chart', {
repo: 'https://prometheus-community.github.io/helm-charts',
chart: 'kube-prometheus-stack',
releaseName: 'monitoring',
version: '65.8.1', // Somehow version 66.2.1 seems fine?!
namespace: 'monitoring',
});
app.synth();
```
Run the tasks
```sh
deno task run1
deno task run2
deno task run3
```
### Result
These commands should create `debug-deno/d1`, `debug-deno/d2` and `debug-deno/d3` directories respectively.
Compare these 3 files
```sh
debug-deno/d1/ConfigMap.monitoring-kube-prometheus-k8s-resources-multicluster.k8s.yaml # deno version is bugged
debug-deno/d2/ConfigMap.monitoring-kube-prometheus-k8s-resources-multicluster.k8s.yaml # npx version is correct
debug-deno/d3/ConfigMap.monitoring-kube-prometheus-k8s-resources-multicluster.k8s.yaml # bun version is correct
```
For your convenience I uploaded these 2 files here as well.
<details>
<summary>debug-deno/d1/ConfigMap.monitoring-kube-prometheus-k8s-resources-multicluster.k8s.yaml</summary>
**Deno version got trimmed somehow**
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
labels:
app: kube-prometheus-stack-grafana
app.kubernetes.io/instance: monitoring
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/part-of: kube-prometheus-stack
app.kubernetes.io/version: 65.8.1
chart: kube-prometheus-stack-65.8.1
grafana_dashboard: "1"
heritage: Helm
release: monitoring
name: monitoring-kube-prometheus-k8s-resources-multicluster
namespace: monitoring
data:
k8s-resources-multicluster.json: '{"editable":true,"links":[{"asDropdown":true,"includeVars":true,"keepTime":true,"tags":["kubernetes-mixin"],"targetBlank":false,"title":"Kubernetes","type":"dashboards"}],"panels":[{"datasource":{"type":"datasource","uid":"-- Mixed --"},"fieldConfig":{"defaults":{"unit":"none"}},"gridPos":{"h":3,"w":4,"x":0,"y":0},"id":1,"interval":"1m","options":{"colorMode":"none"},"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"cluster:node_cpu:ratio_rate5m","instant":true}],"title":"CPU Utilisation","type":"stat"},{"datasource":{"type":"datasource","uid":"-- Mixed --"},"fieldConfig":{"defaults":{"unit":"percentunit"}},"gridPos":{"h":3,"w":4,"x":4,"y":0},"id":2,"interval":"1m","options":{"colorMode":"none"},"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(kube_pod_container_resource_requests{job=\"kube-state-metrics\", resource=\"cpu\"}) / sum(kube_node_status_allocatable{job=\"kube-state-metrics\", resourcetasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(container_memory_rss{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", container!=\"\"}) by (cluster)","format":"table","instant":true},{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(kube_pod_container_resource_requests{job=\"kube-state-metrics\", resource=\"memory\"}) by (cluster)","format":"table","instant":true},{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(container_memory_rss{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", container!=\"\"}) by (cluster) / sum(kube_pod_container_resource_requests{job=\"kube-state-metrics\", resource=\"memory\"}) by (cluster)","format":"table","instant":true},{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(kube_pod_container_resource_limits{job=\"kube-state-metrics\", resource=\"memory\"}) by 
(cluster)","format":"table","instant":true},{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(container_memory_rss{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", container!=\"\"}) by (cluster) / sum(kube_pod_container_resource_limits{job=\"kube-state-metrics\", resource=\"memory\"}) by (cluster)","format":"table","instant":true}],"title":"Memory Requests by Cluster","transformations":[{"id":"joinByField","options":{"byField":"cluster","mode":"outer"}},{"id":"organize","options":{"excludeByName":{"Time":true,"Time 1":true,"Time 2":true,"Time 3":true,"Time 4":true,"Time 5":true},"indexByName":{"Time 1":0,"Time 2":1,"Time 3":2,"Time 4":3,"Time 5":4,"Value #A":6,"Value #B":7,"Value #C":8,"Value #D":9,"Value #E":10,"cluster":5},"renameByName":{"Value #A":"Memory Usage","Value #B":"Memory Requests","Value #C":"Memory Requests %","Value #D":"Memory Limits","Value #E":"Memory Limits %","cluster":"Cluster"}}}],"type":"table"}],"refresh":"10s","schemaVersion":39,"tags":["kubernetes-mixin"],"templating":{"list":[{"current":{"selected":true,"text":"default","value":"default"},"hide":0,"label":"Data source","name":"datasource","query":"prometheus","regex":"","type":"datasource"}]},"time":{"from":"now-1h","to":"now"},"timezone": "utc","title":"Kubernetes / Compute Resources / Multi-Cluster","uid":"b59e6c9f2fcbe2e16d77fc492374cc4f"}'
```
</details>
<details>
<summary>debug-deno/d2/ConfigMap.monitoring-kube-prometheus-k8s-resources-multicluster.k8s.yaml</summary>
**tsx version is fine**
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
labels:
app: kube-prometheus-stack-grafana
app.kubernetes.io/instance: monitoring
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/part-of: kube-prometheus-stack
app.kubernetes.io/version: 65.8.1
chart: kube-prometheus-stack-65.8.1
grafana_dashboard: "1"
heritage: Helm
release: monitoring
name: monitoring-kube-prometheus-k8s-resources-multicluster
namespace: monitoring
data:
k8s-resources-multicluster.json: '{"editable":true,"links":[{"asDropdown":true,"includeVars":true,"keepTime":true,"tags":["kubernetes-mixin"],"targetBlank":false,"title":"Kubernetes","type":"dashboards"}],"panels":[{"datasource":{"type":"datasource","uid":"-- Mixed --"},"fieldConfig":{"defaults":{"unit":"none"}},"gridPos":{"h":3,"w":4,"x":0,"y":0},"id":1,"interval":"1m","options":{"colorMode":"none"},"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"cluster:node_cpu:ratio_rate5m","instant":true}],"title":"CPU Utilisation","type":"stat"},{"datasource":{"type":"datasource","uid":"-- Mixed --"},"fieldConfig":{"defaults":{"unit":"percentunit"}},"gridPos":{"h":3,"w":4,"x":4,"y":0},"id":2,"interval":"1m","options":{"colorMode":"none"},"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(kube_pod_container_resource_requests{job=\"kube-state-metrics\", resource=\"cpu\"}) / sum(kube_node_status_allocatable{job=\"kube-state-metrics\", resource=\"cpu\"})","instant":true}],"title":"CPU Requests Commitment","type":"stat"},{"datasource":{"type":"datasource","uid":"-- Mixed --"},"fieldConfig":{"defaults":{"unit":"percentunit"}},"gridPos":{"h":3,"w":4,"x":8,"y":0},"id":3,"interval":"1m","options":{"colorMode":"none"},"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(kube_pod_container_resource_limits{job=\"kube-state-metrics\", resource=\"cpu\"}) / sum(kube_node_status_allocatable{job=\"kube-state-metrics\", resource=\"cpu\"})","instant":true}],"title":"CPU Limits Commitment","type":"stat"},{"datasource":{"type":"datasource","uid":"-- Mixed --"},"fieldConfig":{"defaults":{"unit":"percentunit"}},"gridPos":{"h":3,"w":4,"x":12,"y":0},"id":4,"interval":"1m","options":{"colorMode":"none"},"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"1 - 
sum(:node_memory_MemAvailable_bytes:sum) / sum(node_memory_MemTotal_bytes{job=\"node-exporter\"})","instant":true}],"title":"Memory Utilisation","type":"stat"},{"datasource":{"type":"datasource","uid":"-- Mixed --"},"fieldConfig":{"defaults":{"unit":"percentunit"}},"gridPos":{"h":3,"w":4,"x":16,"y":0},"id":5,"interval":"1m","options":{"colorMode":"none"},"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(kube_pod_container_resource_requests{job=\"kube-state-metrics\", resource=\"memory\"}) / sum(kube_node_status_allocatable{job=\"kube-state-metrics\", resource=\"memory\"})","instant":true}],"title":"Memory Requests Commitment","type":"stat"},{"datasource":{"type":"datasource","uid":"-- Mixed --"},"fieldConfig":{"defaults":{"unit":"percentunit"}},"gridPos":{"h":3,"w":4,"x":20,"y":0},"id":6,"interval":"1m","options":{"colorMode":"none"},"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(kube_pod_container_resource_limits{job=\"kube-state-metrics\", resource=\"memory\"}) / sum(kube_node_status_allocatable{job=\"kube-state-metrics\", resource=\"memory\"})","instant":true}],"title":"Memory Limits Commitment","type":"stat"},{"datasource":{"type":"datasource","uid":"-- Mixed --"},"fieldConfig":{"defaults":{"custom":{"showPoints":"never"}}},"gridPos":{"h":7,"w":24,"x":0,"y":1},"id":7,"interval":"1m","options":{"legend":{"asTable":true,"displayMode":"table","placement":"right","showLegend":true},"tooltip":{"mode":"single"}},"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate) by (cluster)","legendFormat":"__auto"}],"title":"CPU Usage","type":"timeseries"},{"datasource":{"type":"datasource","uid":"-- Mixed 
--"},"fieldConfig":{"overrides":[{"matcher":{"id":"byRegexp","options":"/%/"},"properties":[{"id":"unit","value":"percentunit"}]},{"matcher":{"id":"byName","options":"Cluster"},"properties":[{"id":"links","value":[{"title":"Drill down","url":"/d/efa86fd1d0c121a26444b636a3f509a8/kubernetes-compute-resources-cluster?${datasource:queryparam}&var-cluster=${__data.fields.Cluster}"}]}]}]},"gridPos":{"h":7,"w":24,"x":0,"y":2},"id":8,"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate) by (cluster)","format":"table","instant":true},{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(kube_pod_container_resource_requests{job=\"kube-state-metrics\", resource=\"cpu\"}) by (cluster)","format":"table","instant":true},{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate) by (cluster) / sum(kube_pod_container_resource_requests{job=\"kube-state-metrics\", resource=\"cpu\"}) by (cluster)","format":"table","instant":true},{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(kube_pod_container_resource_limits{job=\"kube-state-metrics\", resource=\"cpu\"}) by (cluster)","format":"table","instant":true},{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate) by (cluster) / sum(kube_pod_container_resource_limits{job=\"kube-state-metrics\", resource=\"cpu\"}) by (cluster)","format":"table","instant":true}],"title":"CPU Quota","transformations":[{"id":"joinByField","options":{"byField":"cluster","mode":"outer"}},{"id":"organize","options":{"excludeByName":{"Time":true,"Time 1":true,"Time 2":true,"Time 3":true,"Time 4":true,"Time 5":true},"indexByName":{"Time 1":0,"Time 2":1,"Time 3":2,"Time 4":3,"Time 5":4,"Value #A":6,"Value #B":7,"Value #C":8,"Value 
#D":9,"Value #E":10,"cluster":5},"renameByName":{"Value #A":"CPU Usage","Value #B":"CPU Requests","Value #C":"CPU Requests %","Value #D":"CPU Limits","Value #E":"CPU Limits %","cluster":"Cluster"}}}],"type":"table"},{"datasource":{"type":"datasource","uid":"-- Mixed --"},"fieldConfig":{"defaults":{"custom":{"showPoints":"never"},"unit":"bytes"}},"gridPos":{"h":7,"w":24,"x":0,"y":3},"id":9,"interval":"1m","options":{"legend":{"asTable":true,"displayMode":"table","placement":"right","showLegend":true},"tooltip":{"mode":"single"}},"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(container_memory_rss{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", container!=\"\"}) by (cluster)","legendFormat":"__auto"}],"title":"Memory Usage (w/o cache)","type":"timeseries"},{"datasource":{"type":"datasource","uid":"-- Mixed --"},"fieldConfig":{"defaults":{"unit":"bytes"},"overrides":[{"matcher":{"id":"byRegexp","options":"/%/"},"properties":[{"id":"unit","value":"percentunit"}]},{"matcher":{"id":"byName","options":"Cluster"},"properties":[{"id":"links","value":[{"title":"Drill down","url":"/d/efa86fd1d0c121a26444b636a3f509a8/kubernetes-compute-resources-cluster?${datasource:queryparam}&var-cluster=${__data.fields.Cluster}"}]}]}]},"gridPos":{"h":7,"w":24,"x":0,"y":4},"id":10,"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(container_memory_rss{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", container!=\"\"}) by (cluster)","format":"table","instant":true},{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(kube_pod_container_resource_requests{job=\"kube-state-metrics\", resource=\"memory\"}) by (cluster)","format":"table","instant":true},{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(container_memory_rss{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", container!=\"\"}) by (cluster) / 
sum(kube_pod_container_resource_requests{job=\"kube-state-metrics\", resource=\"memory\"}) by (cluster)","format":"table","instant":true},{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(kube_pod_container_resource_limits{job=\"kube-state-metrics\", resource=\"memory\"}) by (cluster)","format":"table","instant":true},{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(container_memory_rss{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", container!=\"\"}) by (cluster) / sum(kube_pod_container_resource_limits{job=\"kube-state-metrics\", resource=\"memory\"}) by (cluster)","format":"table","instant":true}],"title":"Memory Requests by Cluster","transformations":[{"id":"joinByField","options":{"byField":"cluster","mode":"outer"}},{"id":"organize","options":{"excludeByName":{"Time":true,"Time 1":true,"Time 2":true,"Time 3":true,"Time 4":true,"Time 5":true},"indexByName":{"Time 1":0,"Time 2":1,"Time 3":2,"Time 4":3,"Time 5":4,"Value #A":6,"Value #B":7,"Value #C":8,"Value #D":9,"Value #E":10,"cluster":5},"renameByName":{"Value #A":"Memory Usage","Value #B":"Memory Requests","Value #C":"Memory Requests %","Value #D":"Memory Limits","Value #E":"Memory Limits %","cluster":"Cluster"}}}],"type":"table"}],"refresh":"10s","schemaVersion":39,"tags":["kubernetes-mixin"],"templating":{"list":[{"current":{"selected":true,"text":"default","value":"default"},"hide":0,"label":"Data source","name":"datasource","query":"prometheus","regex":"","type":"datasource"}]},"time":{"from":"now-1h","to":"now"},"timezone": "utc","title":"Kubernetes / Compute Resources / Multi-Cluster","uid":"b59e6c9f2fcbe2e16d77fc492374cc4f"}'
```
</details>
<details>
<summary>debug-deno/d3/ConfigMap.monitoring-kube-prometheus-k8s-resources-multicluster.k8s.yaml</summary>
**bun version is fine**
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
labels:
app: kube-prometheus-stack-grafana
app.kubernetes.io/instance: monitoring
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/part-of: kube-prometheus-stack
app.kubernetes.io/version: 65.8.1
chart: kube-prometheus-stack-65.8.1
grafana_dashboard: "1"
heritage: Helm
release: monitoring
name: monitoring-kube-prometheus-k8s-resources-multicluster
namespace: monitoring
data:
k8s-resources-multicluster.json: '{"editable":true,"links":[{"asDropdown":true,"includeVars":true,"keepTime":true,"tags":["kubernetes-mixin"],"targetBlank":false,"title":"Kubernetes","type":"dashboards"}],"panels":[{"datasource":{"type":"datasource","uid":"-- Mixed --"},"fieldConfig":{"defaults":{"unit":"none"}},"gridPos":{"h":3,"w":4,"x":0,"y":0},"id":1,"interval":"1m","options":{"colorMode":"none"},"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"cluster:node_cpu:ratio_rate5m","instant":true}],"title":"CPU Utilisation","type":"stat"},{"datasource":{"type":"datasource","uid":"-- Mixed --"},"fieldConfig":{"defaults":{"unit":"percentunit"}},"gridPos":{"h":3,"w":4,"x":4,"y":0},"id":2,"interval":"1m","options":{"colorMode":"none"},"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(kube_pod_container_resource_requests{job=\"kube-state-metrics\", resource=\"cpu\"}) / sum(kube_node_status_allocatable{job=\"kube-state-metrics\", resource=\"cpu\"})","instant":true}],"title":"CPU Requests Commitment","type":"stat"},{"datasource":{"type":"datasource","uid":"-- Mixed --"},"fieldConfig":{"defaults":{"unit":"percentunit"}},"gridPos":{"h":3,"w":4,"x":8,"y":0},"id":3,"interval":"1m","options":{"colorMode":"none"},"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(kube_pod_container_resource_limits{job=\"kube-state-metrics\", resource=\"cpu\"}) / sum(kube_node_status_allocatable{job=\"kube-state-metrics\", resource=\"cpu\"})","instant":true}],"title":"CPU Limits Commitment","type":"stat"},{"datasource":{"type":"datasource","uid":"-- Mixed --"},"fieldConfig":{"defaults":{"unit":"percentunit"}},"gridPos":{"h":3,"w":4,"x":12,"y":0},"id":4,"interval":"1m","options":{"colorMode":"none"},"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"1 - 
sum(:node_memory_MemAvailable_bytes:sum) / sum(node_memory_MemTotal_bytes{job=\"node-exporter\"})","instant":true}],"title":"Memory Utilisation","type":"stat"},{"datasource":{"type":"datasource","uid":"-- Mixed --"},"fieldConfig":{"defaults":{"unit":"percentunit"}},"gridPos":{"h":3,"w":4,"x":16,"y":0},"id":5,"interval":"1m","options":{"colorMode":"none"},"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(kube_pod_container_resource_requests{job=\"kube-state-metrics\", resource=\"memory\"}) / sum(kube_node_status_allocatable{job=\"kube-state-metrics\", resource=\"memory\"})","instant":true}],"title":"Memory Requests Commitment","type":"stat"},{"datasource":{"type":"datasource","uid":"-- Mixed --"},"fieldConfig":{"defaults":{"unit":"percentunit"}},"gridPos":{"h":3,"w":4,"x":20,"y":0},"id":6,"interval":"1m","options":{"colorMode":"none"},"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(kube_pod_container_resource_limits{job=\"kube-state-metrics\", resource=\"memory\"}) / sum(kube_node_status_allocatable{job=\"kube-state-metrics\", resource=\"memory\"})","instant":true}],"title":"Memory Limits Commitment","type":"stat"},{"datasource":{"type":"datasource","uid":"-- Mixed --"},"fieldConfig":{"defaults":{"custom":{"showPoints":"never"}}},"gridPos":{"h":7,"w":24,"x":0,"y":1},"id":7,"interval":"1m","options":{"legend":{"asTable":true,"displayMode":"table","placement":"right","showLegend":true},"tooltip":{"mode":"single"}},"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate) by (cluster)","legendFormat":"__auto"}],"title":"CPU Usage","type":"timeseries"},{"datasource":{"type":"datasource","uid":"-- Mixed 
--"},"fieldConfig":{"overrides":[{"matcher":{"id":"byRegexp","options":"/%/"},"properties":[{"id":"unit","value":"percentunit"}]},{"matcher":{"id":"byName","options":"Cluster"},"properties":[{"id":"links","value":[{"title":"Drill down","url":"/d/efa86fd1d0c121a26444b636a3f509a8/kubernetes-compute-resources-cluster?${datasource:queryparam}&var-cluster=${__data.fields.Cluster}"}]}]}]},"gridPos":{"h":7,"w":24,"x":0,"y":2},"id":8,"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate) by (cluster)","format":"table","instant":true},{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(kube_pod_container_resource_requests{job=\"kube-state-metrics\", resource=\"cpu\"}) by (cluster)","format":"table","instant":true},{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate) by (cluster) / sum(kube_pod_container_resource_requests{job=\"kube-state-metrics\", resource=\"cpu\"}) by (cluster)","format":"table","instant":true},{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(kube_pod_container_resource_limits{job=\"kube-state-metrics\", resource=\"cpu\"}) by (cluster)","format":"table","instant":true},{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate) by (cluster) / sum(kube_pod_container_resource_limits{job=\"kube-state-metrics\", resource=\"cpu\"}) by (cluster)","format":"table","instant":true}],"title":"CPU Quota","transformations":[{"id":"joinByField","options":{"byField":"cluster","mode":"outer"}},{"id":"organize","options":{"excludeByName":{"Time":true,"Time 1":true,"Time 2":true,"Time 3":true,"Time 4":true,"Time 5":true},"indexByName":{"Time 1":0,"Time 2":1,"Time 3":2,"Time 4":3,"Time 5":4,"Value #A":6,"Value #B":7,"Value #C":8,"Value 
#D":9,"Value #E":10,"cluster":5},"renameByName":{"Value #A":"CPU Usage","Value #B":"CPU Requests","Value #C":"CPU Requests %","Value #D":"CPU Limits","Value #E":"CPU Limits %","cluster":"Cluster"}}}],"type":"table"},{"datasource":{"type":"datasource","uid":"-- Mixed --"},"fieldConfig":{"defaults":{"custom":{"showPoints":"never"},"unit":"bytes"}},"gridPos":{"h":7,"w":24,"x":0,"y":3},"id":9,"interval":"1m","options":{"legend":{"asTable":true,"displayMode":"table","placement":"right","showLegend":true},"tooltip":{"mode":"single"}},"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(container_memory_rss{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", container!=\"\"}) by (cluster)","legendFormat":"__auto"}],"title":"Memory Usage (w/o cache)","type":"timeseries"},{"datasource":{"type":"datasource","uid":"-- Mixed --"},"fieldConfig":{"defaults":{"unit":"bytes"},"overrides":[{"matcher":{"id":"byRegexp","options":"/%/"},"properties":[{"id":"unit","value":"percentunit"}]},{"matcher":{"id":"byName","options":"Cluster"},"properties":[{"id":"links","value":[{"title":"Drill down","url":"/d/efa86fd1d0c121a26444b636a3f509a8/kubernetes-compute-resources-cluster?${datasource:queryparam}&var-cluster=${__data.fields.Cluster}"}]}]}]},"gridPos":{"h":7,"w":24,"x":0,"y":4},"id":10,"pluginVersion":"v11.1.0","targets":[{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(container_memory_rss{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", container!=\"\"}) by (cluster)","format":"table","instant":true},{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(kube_pod_container_resource_requests{job=\"kube-state-metrics\", resource=\"memory\"}) by (cluster)","format":"table","instant":true},{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(container_memory_rss{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", container!=\"\"}) by (cluster) / 
sum(kube_pod_container_resource_requests{job=\"kube-state-metrics\", resource=\"memory\"}) by (cluster)","format":"table","instant":true},{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(kube_pod_container_resource_limits{job=\"kube-state-metrics\", resource=\"memory\"}) by (cluster)","format":"table","instant":true},{"datasource":{"type":"prometheus","uid":"${datasource}"},"expr":"sum(container_memory_rss{job=\"kubelet\", metrics_path=\"/metrics/cadvisor\", container!=\"\"}) by (cluster) / sum(kube_pod_container_resource_limits{job=\"kube-state-metrics\", resource=\"memory\"}) by (cluster)","format":"table","instant":true}],"title":"Memory Requests by Cluster","transformations":[{"id":"joinByField","options":{"byField":"cluster","mode":"outer"}},{"id":"organize","options":{"excludeByName":{"Time":true,"Time 1":true,"Time 2":true,"Time 3":true,"Time 4":true,"Time 5":true},"indexByName":{"Time 1":0,"Time 2":1,"Time 3":2,"Time 4":3,"Time 5":4,"Value #A":6,"Value #B":7,"Value #C":8,"Value #D":9,"Value #E":10,"cluster":5},"renameByName":{"Value #A":"Memory Usage","Value #B":"Memory Requests","Value #C":"Memory Requests %","Value #D":"Memory Limits","Value #E":"Memory Limits %","cluster":"Cluster"}}}],"type":"table"}],"refresh":"10s","schemaVersion":39,"tags":["kubernetes-mixin"],"templating":{"list":[{"current":{"selected":true,"text":"default","value":"default"},"hide":0,"label":"Data source","name":"datasource","query":"prometheus","regex":"","type":"datasource"}]},"time":{"from":"now-1h","to":"now"},"timezone": "utc","title":"Kubernetes / Compute Resources / Multi-Cluster","uid":"b59e6c9f2fcbe2e16d77fc492374cc4f"}'
```
</details> | needs investigation,node compat | low | Critical |
2,643,142,539 | pytorch | inconsistency in `torch.nn.BatchNorm1d` on CPU and GPU | ### ๐ Describe the bug
getting inconsistent results on CPU and GPU when computing torch.nn.BatchNorm1d
```cpp
#include <iostream>
#include <torch/torch.h>
int main() {
torch::Tensor input = torch::tensor({{{1.3047, 0.8789}}}, torch::kBFloat16);
std::cout << "initialized tensor (CPU):\n" << input << std::endl;
auto options =
torch::nn::BatchNormOptions(1)
.eps(1e-05)
.momentum(0.1)
.affine(false)
.track_running_stats(false);
auto module = torch::nn::BatchNorm1d(options);
module->to(torch::kBFloat16);
auto result_cpu = module->forward(input);
torch::Tensor input_cuda = input.cuda();
auto result_gpu = module->forward(input_cuda);
std::cout << "CPU result: \n" << result_cpu << std::endl;
std::cout << "GPU result: \n" << result_gpu << std::endl;
bool inconsistent = !torch::allclose(result_cpu, result_gpu.cpu(), 1e-03, 1e-02);
std::cout << "inconsistency with atol=1e-02 and rtol=1e-03: " << std::boolalpha << inconsistent << std::endl;
return 0;
}
```
outputs:
```
initialized tensor (CPU):
(1,.,.) =
1.3047 0.8789
[ CPUBFloat16Type{1,1,2} ]
CPU result:
(1,.,.) =
0.9883 -1.0078
[ CPUBFloat16Type{1,1,2} ]
GPU result:
(1,.,.) =
1 -1
[ CUDABFloat16Type{1,1,2} ]
inconsistency with atol=1e-02 and rtol=1e-03: true
```
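A pure-Python sketch (standard library only) illustrates one plausible explanation: both results fall out of where bfloat16 rounding is applied. The `to_bfloat16` helper and the choice of rounding points below are assumptions, not what the kernels literally do. Rounding every intermediate of the batch-norm formula reproduces the CPU numbers, while computing in full precision and rounding only the output reproduces the GPU numbers:

```python
import struct

def to_bfloat16(x: float) -> float:
    """Round x to bfloat16 precision: keep the float32 sign/exponent and the
    top 7 mantissa bits, rounding the dropped 16 bits half-away-from-zero."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits = (bits + 0x8000) & 0xFFFF0000  # a mantissa carry bumps the exponent, as it should
    return struct.unpack("<f", struct.pack("<I", bits))[0]

def batchnorm(values, eps=1e-5, rnd=lambda v: v):
    """BatchNorm over one channel (biased variance, no affine), with every
    intermediate passed through `rnd` to model a given working precision."""
    values = [rnd(v) for v in values]
    mean = rnd(sum(values) / len(values))
    var = rnd(sum((v - mean) ** 2 for v in values) / len(values))
    scale = rnd((var + eps) ** -0.5)
    return [rnd((v - mean) * scale) for v in values]

vals = [1.3047, 0.8789]
low = batchnorm(vals, rnd=to_bfloat16)            # bfloat16 intermediates
full = [to_bfloat16(y) for y in batchnorm(vals)]  # float64 maths, bfloat16 output
print(low)   # [0.98828125, -1.0078125] -- matches the CPU result above
print(full)  # [1.0, -1.0]              -- matches the GPU result above
```

Under this assumption, the divergence is a bfloat16 precision artifact rather than a logic difference: both backends compute the same normalization but keep intermediates at different precisions.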
### Versions
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 16.0.4 (https://github.com/llvm/llvm-project ae42196bc493ffe877a7e3dff8be32035dea4d07)
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.8.5 (default, Sep 4 2020, 07:30:14) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.10
Is CUDA available: N/A
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 550.78
cuDNN version: Probably one of the following:
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] flake8==3.8.4
[pip3] numpy==1.19.2
[pip3] numpydoc==1.1.0
[pip3] torch==2.2.0a0+git9fa3350
[conda] blas 1.0 mkl
[conda] mkl 2020.2 256
[conda] mkl-service 2.3.0 py38he904b0f_0
[conda] mkl_fft 1.2.0 py38h23d657b_0
[conda] mkl_random 1.1.1 py38h0573a6f_0
[conda] numpy 1.19.2 py38h54aff64_0
[conda] numpy-base 1.19.2 py38hfa32c7d_0
[conda] numpydoc 1.1.0 pyhd3eb1b0_1
[conda] torch 2.2.0a0+git9fa3350 dev_0 <develop>
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | module: numerical-stability,module: cpu,triaged | low | Critical |
2,643,207,265 | ollama | I wanted to add the Donut LLM model, which seems to be unsupported at the moment | after cloning: https://huggingface.co/docs/transformers/en/model_doc/donut
I have tried to run
`docker run --rm -v .:/model ollama/quantize -q q8_0 /model` but it fails with:
`unknown architecture VisionEncoderDecoderModel`
I think one can never have enough vision models, so please add support for Donut models and their fine-tunings.
| model request | low | Minor |
2,643,208,936 | transformers | The support of `Mllama` in AutoModel | ### Feature request
`AutoModel.from_config` does not work with Mllama (`MllamaConfig`, `MllamaVisionConfig`). I would like to request the ability to use Mllama through `AutoModel`.
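A minimal probe of the failure, sketched under the assumption that `AutoModel.from_config` rejects any config class absent from its model mapping. It consults the private `_model_mapping` attribute, which is a transformers internal rather than a stable API, and the check is guarded so it also runs where transformers or the Mllama classes are unavailable:

```python
outcome = "transformers-unavailable"
try:
    # Both imports fail cleanly on installs that predate Mllama support.
    from transformers import AutoModel, MllamaConfig

    # from_config raises ValueError when type(config) is not a key of this
    # mapping; probing the mapping avoids instantiating a large random model.
    # NOTE: _model_mapping is private -- an assumption about internals.
    if MllamaConfig in AutoModel._model_mapping.keys():
        outcome = "supported"
    else:
        outcome = "unsupported"
except ImportError:
    pass
print(outcome)
```

At the time of this request, the expectation would be `unsupported`; registering the pair via `AutoModel.register(MllamaConfig, ...)` is the documented escape hatch for unmapped configs.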
### Motivation
There is a lot of code written to dynamically load models using `AutoModel`. It would be great if `AutoModel` could support Mllama to accommodate this code.
### Your contribution
not yet. | Feature request | low | Minor |
2,643,226,637 | three.js | LogarithmicDepthBuffer causes problems with the drawing order of transparent objects in some devices | ### Description
Set `logarithmicDepthBuffer` to true in THREE.WebGLRenderer. When rendering a scene with a transparent object that has an opaque object behind it, I get the expected result on my devices, but the transparent object disappears on some Windows devices.
Furthermore, if I do not use `RenderPass` and instead draw directly to the screen, the result is correct.
Here is some device information that may help:
CPU:Intel(R) Core(TM) i5-10210U CPU @ 1.60GHz 2.11 GHz
GPU:Intel(R) UHD Graphics
SYSTEM:Windows 10
If it is difficult to reproduce, I hope you can provide some guesses about this problem. Thanks.
### Reproduction steps
1. Set `logarithmicDepthBuffer` to true in THREE.WebGLRenderer
2. Use RenderPass to render to a framebuffer
3. Draw a transparent object that has an opaque object behind it
### Code
```js
import * as THREE from "three";
import { EffectComposer } from "three/addons/postprocessing/EffectComposer.js";
import { Pass, FullScreenQuad } from 'three/addons/postprocessing/Pass.js';
import { RenderPass } from "three/addons/postprocessing/RenderPass.js";
import { OutputPass } from "three/addons/postprocessing/OutputPass.js";
let camera, renderer, group, container;
let composer;
init();
function init() {
container = document.getElementById("container");
camera = new THREE.PerspectiveCamera(
45,
container.offsetWidth / container.offsetHeight,
10,
2000000
);
camera.position.z = 1000;
const scene = new THREE.Scene();
const hemiLight = new THREE.HemisphereLight(0xffffff, 0x222222, 5);
hemiLight.position.set(1, 1, 1);
scene.add(hemiLight);
//
group = new THREE.Group();
const geometry = new THREE.SphereGeometry(10, 64, 40);
const material = new THREE.MeshBasicMaterial({
color: 0xff0000
});
const material2 = new THREE.MeshBasicMaterial({
color: 0x00ff00,
transparent: true,
depthWrite: true,
depthTest: true,
opacity: 0.5
});
for (let i = 0; i < 1; i++) {
const mesh = new THREE.Mesh(geometry, material);
mesh.position.z = 40;
mesh.scale.setScalar(40);
group.add(mesh);
const mesh2 = new THREE.Mesh(geometry, material2);
mesh2.scale.setScalar(10);
mesh2.position.z = 500;
group.add(mesh2);
}
scene.add(group);
//
renderer = new THREE.WebGLRenderer({
antialias: true,
logarithmicDepthBuffer: true
});
renderer.setPixelRatio(window.devicePixelRatio);
renderer.setSize(container.offsetWidth, container.offsetHeight);
renderer.autoClear = false;
container.appendChild(renderer.domElement);
renderer.setClearColor(0x000000);
renderer.setClearAlpha(0.0);
//
const size = renderer.getDrawingBufferSize(new THREE.Vector2());
const renderTarget = new THREE.WebGLRenderTarget(size.width, size.height);
const renderPass = new RenderPass(scene, camera);
const outputPass = new OutputPass();
//
composer = new EffectComposer(renderer);
composer.addPass(renderPass);
composer.addPass(outputPass);
window.addEventListener("resize", onWindowResize);
animate();
}
function onWindowResize() {
camera.aspect = container.offsetWidth / container.offsetHeight;
camera.updateProjectionMatrix();
renderer.setSize(container.offsetWidth, container.offsetHeight);
composer.setSize(container.offsetWidth, container.offsetHeight);
}
function animate() {
requestAnimationFrame(animate);
composer.render();
}
```
### Live example
* [jsfiddle-latest-release WebGLRenderer](https://jsfiddle.net/2tmf7xj9/)
### Screenshots
There was a problem with displaying results on a very few devices

It appears below on most devices

### Version
R170
### Device
Desktop
### Browser
Chrome, Edge
### OS
Windows | Device Issue | low | Minor |
2,643,253,220 | rust | `isqrt` treated as a `black_box` | I tried this code:
```rust
#[inline(never)]
pub const fn f(n: u8) {
assert!(n >= 4);
assert!(2 <= n.isqrt());
}
```
> [!note]
> I've tried:
> - removing `const`
> - replacing `u8` by `usize`
> - replacing `4` by `9` and `2` by `3`
>
> Outcome was the same
I expected to see this happen: the 2nd assertion absent from the assembly (optimized out, since `n >= 4` implies `n.isqrt() >= 2`).
Instead, this happened:
```asm
# ... (snip)
.LBB0_4:
leaq .L__unnamed_4(%rip), %rdi
leaq .L__unnamed_5(%rip), %rdx
movl $32, %esi
callq *core::panicking::panic@GOTPCREL(%rip)
# ...
.L__unnamed_4:
.ascii "assertion failed: 2 <= n.isqrt()"
# ...
```
### Meta
[Playground](https://play.rust-lang.org/?version=nightly&mode=release&edition=2024&gist=1ace1f411cb80a044a15835fbdb4f2df)
`rustc --version --verbose`:
```
1.84.0-nightly (2024-11-06 8549802939cd01111c46)
```
_No Backtrace_
@rustbot label: +I-slow, -C-bug | I-slow,T-compiler | low | Critical |
2,643,264,721 | three.js | TRAAPassNode: Incomplete MRT support. | ### Description
If we add some other postprocessing node after TRAA, we get a black screen.
It appears that TRAA cannot work with other MRT setups; it only takes two textures as output during its render pass.
https://jsfiddle.net/ligaofeng0901/g0u9qdb6/14/
I created a demo. In this demo, TRAA and bloom do not work together. Or is my usage of TRAA not proper?
### Reproduction steps
1.
2.
3.
### Code
```js
// code goes here
```
### Live example
* [jsfiddle-latest-release WebGLRenderer](https://jsfiddle.net/3mrkqyea/)
* [jsfiddle-dev WebGLRenderer](https://jsfiddle.net/gcqx26jv/)
* [jsfiddle-latest-release WebGPURenderer](https://jsfiddle.net/8L2jkmx7/)
* [jsfiddle-dev WebGPURenderer](https://jsfiddle.net/L3n1w4yh/)
### Screenshots
_No response_
### Version
0.170.0
### Device
_No response_
### Browser
_No response_
### OS
_No response_ | Post-processing | low | Minor |
2,643,285,239 | angular | Mistake in [Hierarchical Injection] "@Host and viewProviders" section | ### Description
The following line describes the behaviour of using the injection resolution modifiers `@Host` along with `@SkipSelf`, but it says that `@SkipSelf` will start from `app-child`, which looks incorrect; instead, I believe it should be `app-root`.
#### Original
When `@Host()` and `@SkipSelf()` were applied to the FlowerService, which is in the providers array, the result was null because `@SkipSelf()` starts its search in the `<app-child>` injector, but `@Host()` stops searching at `<#VIEW>`, where there is no FlowerService. In the logical tree, you can see that the FlowerService is visible in `<app-child>`, not its `<#VIEW>`.
#### Correction
When `@Host()` and `@SkipSelf()` were applied to the FlowerService, which is in the providers array, the result was null because `@SkipSelf()` starts its search in the `<app-root>` injector, but `@Host()` stops searching at `<app-child> <#VIEW>`, where there is no FlowerService. In the logical tree, you can see that the FlowerService is visible in `<app-root>`, not its `<#VIEW>`.
This is also evident in the logical tree representation given in the _Visibility of Provided tokens_ section
```
<app-root @ApplicationConfig
@Inject(FlowerService) flower=>"๐บ">
<#VIEW> <!-- end search here with null-->
<app-child @Provide(FlowerService="๐ป")> <!-- start search here -->
<#VIEW @Inject(FlowerService, @SkipSelf, @Host, @Optional)=>null>
</#VIEW>
    </app-child>
</#VIEW>
</app-root> | area: docs | low | Minor |
2,643,323,771 | react | Firebase Authentication State Resets on Page Reload in Vite-React-TypeScript App | When using Firebase Authentication, the signed-in user's state does not persist on page reload, logging out the user each time the page refreshes. This happens in both development and production builds.
**Steps to Reproduce**
1. Clone the starter repo and configure Firebase.
2. Sign in with a test user account.
3. Refresh the page.
**Expected Behavior**
The user should stay signed in after a page reload.
**Actual Behavior**
The user is logged out automatically on each page reload.
**Reproduction Code**
```typescript
// Firebase Auth setup in a React component or context
useEffect(() => {
const unsubscribe = firebase.auth().onAuthStateChanged((user) => {
setUser(user); // user state is reset on page reload
});
return () => unsubscribe();
}, []);
```
**System Info:**
- Vite version: 3.0.0
- React version: 18.0.0
- TypeScript version: 4.4.4
- Tailwind CSS version: 3.0.0
- Firebase version: 9.1.0
- Node.js version: 14.17.0
- Operating System: macOS 11.4
**Potential Solution**
Consider using Firebaseโs `onAuthStateChanged` with a persistent storage option like `localStorage` or `sessionStorage`. Additionally, check if initializing Firebase Auth inside a `useEffect` hook in a context provider resolves the issue.
**Additional Context**
This issue impacts the user experience, as users need to log in repeatedly. | Status: Unconfirmed | medium | Minor |
2,643,343,440 | vscode | Allow to pin/filter a tree node with all its child elements | In some cases it's helpful to be able to only see a subtree of the entire tree. For example, as mentioned in this issue for the Outline View: https://github.com/microsoft/vscode/issues/233185#issue-2637853923
or, for the file explorer, when a user has a very large workspace but is only working on one or two subdirectories. | feature-request,tree-widget | low | Minor |
2,643,345,884 | kubernetes | PersistentVolumeClaim cannot be deleted. | ### What happened?
https://github.com/kubernetes/kubernetes/blob/c25f5eefe4efda4c0d9561d06942cd3de3dfe2e4/pkg/controller/volume/pvcprotection/pvc_protection_controller.go#L374-L388
If a pod with UnexpectedAdmissionError exists in the environment and a PersistentVolumeClaim is used, the PVC cannot be deleted after the pod is deleted. The controller-manager log shows that a Pod uses PVC xxx. Checking the code, I found that the podUsesPVC method does not take the pod's status into account. I think this is a problem.
### What did you expect to happen?
The PVC should be deleted correctly.
### How can we reproduce it (as minimally and precisely as possible)?
Construct a pod in the UnexpectedAdmissionError state, configure a PVC, and delete the pod that is using the PVC.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
1.31
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/storage,needs-triage | low | Critical |
2,643,357,668 | neovim | checkhealth: show detected terminal features | ### Problem
There is a number of heuristics the Neovim TUI performs on startup to detect terminal features like CSI-u/Kitty protocol/modifyOtherKeys, truecolor/24-bit color support, and a few other modes/features.
Some examples from `terminfo_start` and friends:
<https://github.com/neovim/neovim/blob/master/src/nvim/tui/tui.c#L407>
<https://github.com/neovim/neovim/blob/master/src/nvim/tui/tui.c#L441>
<https://github.com/neovim/neovim/blob/master/src/nvim/tui/tui.c#L453>
<https://github.com/neovim/neovim/blob/master/src/nvim/tui/tui.c#L459>
For a lot of these, there isn't a good way to test that they are actually working, beyond things like 'try mapping `<C-S-Tab>` and see if it works' or 'try using a highlight group with extended underline styles'.
The fact the Neovim [FAQ](https://neovim.io/doc/user/faq.html) lists a large number of potential TUI issues indicates to me that some of this information/these checks could/should have a place in checkhealth.
### Expected behavior
It would be valuable troubleshooting information if the healthcheck could show information about what Neovim knows and expects about the terminal based on the information available to it.
I don't have a comprehensive list, but at minimum, I think it would be good to show the input mode (legacy/modifyOtherKeys/CSI u/kitty) and the detected terminal type (to see if e.g. the detection fell back to something generic like `xterm` when it shouldn't have), and ideally any other features that directly affect the functionality of the TUI but aren't exposed through options (e.g. extended underline support), though that's more for-the-users-interest than for debugging. | enhancement,api,tui | low | Critical |
2,643,372,072 | rust | [ICE]: index out of bounds | ### Code
```rust
trait LendingIterator {
type Item<'q>: 'a;
fn for_each(mut self, mut f: Box<dyn FnMut(Self::Item<'_>) + 'static>) {}
}
struct Query<'q> {}
impl<'static> Query<'q> {
pub fn new() -> Self {}
}
fn data() {
LendingIterator::for_each(Box::new(&data), Box::new);
}
pub fn main() {}
```
### Affected release channels
- [ ] Previous Stable
- [ ] Current Stable
- [ ] Current Beta
- [x] Current Nightly
### Rust Version
rustc 1.84.0-nightly (b91a3a056 2024-11-07)
binary: rustc
commit-hash: b91a3a05609a46f73d23e0995ae7ebb4a4f429a5
commit-date: 2024-11-07
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.3
### Current error output
```
error[E0261]: use of undeclared lifetime name `'a`
--> mutant.rs:2:20
|
2 | type Item<'q>: 'a;
| ^^ undeclared lifetime
|
help: consider introducing lifetime `'a` here
|
2 | type Item<'a, 'q>: 'a;
| +++
help: consider introducing lifetime `'a` here
|
1 | trait LendingIterator<'a> {
| ++++
error[E0262]: invalid lifetime parameter name: `'static`
--> mutant.rs:6:6
|
6 | impl<'static> Query<'q> {
| ^^^^^^^ 'static is a reserved lifetime name
error[E0261]: use of undeclared lifetime name `'q`
--> mutant.rs:6:21
|
6 | impl<'static> Query<'q> {
| - ^^ undeclared lifetime
| |
| help: consider introducing lifetime `'q` here: `'q,`
error[E0392]: lifetime parameter `'q` is never used
--> mutant.rs:5:14
|
5 | struct Query<'q> {}
| ^^ unused lifetime parameter
|
= help: consider removing `'q`, referring to it in a field, or using a marker such as `PhantomData`
error[E0277]: the size for values of type `Self` cannot be known at compilation time
--> mutant.rs:3:21
|
3 | fn for_each(mut self, mut f: Box<dyn FnMut(Self::Item<'_>) + 'static>) {}
| ^^^^ doesn't have a size known at compile-time
|
= help: unsized fn params are gated as an unstable feature
help: consider further restricting `Self`
|
3 | fn for_each(mut self, mut f: Box<dyn FnMut(Self::Item<'_>) + 'static>) where Self: Sized {}
| +++++++++++++++++
help: function arguments must have a statically known size, borrowed types always have a known size
|
3 | fn for_each(mut &self, mut f: Box<dyn FnMut(Self::Item<'_>) + 'static>) {}
| +
error[E0277]: the trait bound `Box<&fn() {data}>: LendingIterator` is not satisfied
--> mutant.rs:10:31
|
10 | LendingIterator::for_each(Box::new(&data), Box::new);
| ------------------------- ^^^^^^^^^^^^^^^ the trait `LendingIterator` is not implemented for `Box<&fn() {data}>`
| |
| required by a bound introduced by this call
|
help: this trait has no implementations, consider adding one
--> mutant.rs:1:1
|
1 | trait LendingIterator {
| ^^^^^^^^^^^^^^^^^^^^^
thread 'rustc' panicked at /rust/deps/ena-0.14.3/src/snapshot_vec.rs:199:10:
index out of bounds: the len is 7 but the index is 7
stack backtrace:
0: 0x7f719d25517a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::hfadad24fb33e3d1a
1: 0x7f719da040a6 - core::fmt::write::h42d25fbda60cd99f
2: 0x7f719edc3351 - std::io::Write::write_fmt::hc2819193e80b365e
3: 0x7f719d254fd2 - std::sys::backtrace::BacktraceLock::print::h9450230402d77664
4: 0x7f719d2574d6 - std::panicking::default_hook::{{closure}}::h739047d4d787c596
5: 0x7f719d257320 - std::panicking::default_hook::h203d1229480f37a5
6: 0x7f719c2d2269 - std[56fe22ad9ea837fd]::panicking::update_hook::<alloc[b5641001d343df5f]::boxed::Box<rustc_driver_impl[945e9afaf49c7d35]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x7f719d257be8 - std::panicking::rust_panic_with_hook::h657fdcc17f7e2546
8: 0x7f719d2579ba - std::panicking::begin_panic_handler::{{closure}}::h6c1a7592f2611ed5
9: 0x7f719d255629 - std::sys::backtrace::__rust_end_short_backtrace::h3e1efd1ff0b15465
10: 0x7f719d25767c - rust_begin_unwind
11: 0x7f7199ccd320 - core::panicking::panic_fmt::h41647251c9f15c53
12: 0x7f719b9f0b1b - core::panicking::panic_bounds_check::h66515744fb563c4b
13: 0x7f719dacc1f6 - <rustc_middle[f0eb6ba890d0a9bb]::ty::Ty as rustc_type_ir[8408d34320f8a6fb]::fold::TypeSuperFoldable<rustc_middle[f0eb6ba890d0a9bb]::ty::context::TyCtxt>>::try_super_fold_with::<rustc_infer[21ddc8b2a5f19898]::infer::resolve::OpportunisticVarResolver>
14: 0x7f719dac6323 - <&rustc_middle[f0eb6ba890d0a9bb]::ty::list::RawList<(), rustc_middle[f0eb6ba890d0a9bb]::ty::generic_args::GenericArg> as rustc_type_ir[8408d34320f8a6fb]::fold::TypeFoldable<rustc_middle[f0eb6ba890d0a9bb]::ty::context::TyCtxt>>::try_fold_with::<rustc_infer[21ddc8b2a5f19898]::infer::resolve::OpportunisticVarResolver>
15: 0x7f719dac9e17 - <rustc_middle[f0eb6ba890d0a9bb]::ty::Ty as rustc_type_ir[8408d34320f8a6fb]::fold::TypeSuperFoldable<rustc_middle[f0eb6ba890d0a9bb]::ty::context::TyCtxt>>::try_super_fold_with::<rustc_infer[21ddc8b2a5f19898]::infer::resolve::OpportunisticVarResolver>
16: 0x7f719dac8554 - <rustc_middle[f0eb6ba890d0a9bb]::ty::Ty as rustc_type_ir[8408d34320f8a6fb]::fold::TypeSuperFoldable<rustc_middle[f0eb6ba890d0a9bb]::ty::context::TyCtxt>>::try_super_fold_with::<rustc_infer[21ddc8b2a5f19898]::infer::resolve::OpportunisticVarResolver>
17: 0x7f719d01a962 - <rustc_infer[21ddc8b2a5f19898]::infer::resolve::OpportunisticVarResolver as rustc_type_ir[8408d34320f8a6fb]::fold::FallibleTypeFolder<rustc_middle[f0eb6ba890d0a9bb]::ty::context::TyCtxt>>::try_fold_ty
18: 0x7f719d087548 - <rustc_trait_selection[c0f45c4e16f8dab6]::error_reporting::TypeErrCtxt>::same_type_modulo_infer::<rustc_middle[f0eb6ba890d0a9bb]::ty::Ty>
19: 0x7f719d0fcaba - <rustc_trait_selection[c0f45c4e16f8dab6]::error_reporting::TypeErrCtxt>::note_type_err
20: 0x7f719d084350 - <rustc_trait_selection[c0f45c4e16f8dab6]::error_reporting::TypeErrCtxt>::report_and_explain_type_error
21: 0x7f719c62446e - <rustc_hir_typeck[9dbf9add14d719fd]::fn_ctxt::FnCtxt>::report_arg_errors
22: 0x7f719a103be3 - <rustc_hir_typeck[9dbf9add14d719fd]::fn_ctxt::FnCtxt>::confirm_builtin_call
23: 0x7f719e6fa2bf - <rustc_hir_typeck[9dbf9add14d719fd]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
24: 0x7f719e6f4736 - <rustc_hir_typeck[9dbf9add14d719fd]::fn_ctxt::FnCtxt>::check_block_with_expected
25: 0x7f719e6fabb4 - <rustc_hir_typeck[9dbf9add14d719fd]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
26: 0x7f719dc4df5b - rustc_hir_typeck[9dbf9add14d719fd]::check::check_fn
27: 0x7f719dc43bac - rustc_hir_typeck[9dbf9add14d719fd]::typeck
28: 0x7f719dc43553 - rustc_query_impl[1357963d8dd30e8b]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[1357963d8dd30e8b]::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[f0eb6ba890d0a9bb]::query::erase::Erased<[u8; 8usize]>>
29: 0x7f719e123681 - rustc_query_system[887bb79932b1d8c1]::query::plumbing::try_execute_query::<rustc_query_impl[1357963d8dd30e8b]::DynamicConfig<rustc_query_system[887bb79932b1d8c1]::query::caches::VecCache<rustc_span[db86d96c2ae2e3a4]::def_id::LocalDefId, rustc_middle[f0eb6ba890d0a9bb]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[1357963d8dd30e8b]::plumbing::QueryCtxt, false>
30: 0x7f719e121b4d - rustc_query_impl[1357963d8dd30e8b]::query_impl::typeck::get_query_non_incr::__rust_end_short_backtrace
31: 0x7f719e1217c7 - <rustc_middle[f0eb6ba890d0a9bb]::hir::map::Map>::par_body_owners::<rustc_hir_analysis[af6e6fecb0b5810e]::check_crate::{closure#4}>::{closure#0}
32: 0x7f719e11f799 - rustc_hir_analysis[af6e6fecb0b5810e]::check_crate
33: 0x7f719e268aca - rustc_interface[5fea8bf9cd0b71b5]::passes::run_required_analyses
34: 0x7f719e80861e - rustc_interface[5fea8bf9cd0b71b5]::passes::analysis
35: 0x7f719e8085ef - rustc_query_impl[1357963d8dd30e8b]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[1357963d8dd30e8b]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[f0eb6ba890d0a9bb]::query::erase::Erased<[u8; 1usize]>>
36: 0x7f719e992cee - rustc_query_system[887bb79932b1d8c1]::query::plumbing::try_execute_query::<rustc_query_impl[1357963d8dd30e8b]::DynamicConfig<rustc_query_system[887bb79932b1d8c1]::query::caches::SingleCache<rustc_middle[f0eb6ba890d0a9bb]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[1357963d8dd30e8b]::plumbing::QueryCtxt, false>
37: 0x7f719e9929ce - rustc_query_impl[1357963d8dd30e8b]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
38: 0x7f719e88707a - rustc_interface[5fea8bf9cd0b71b5]::interface::run_compiler::<core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>, rustc_driver_impl[945e9afaf49c7d35]::run_compiler::{closure#0}>::{closure#1}
39: 0x7f719e8cd5d0 - std[56fe22ad9ea837fd]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[5fea8bf9cd0b71b5]::util::run_in_thread_with_globals<rustc_interface[5fea8bf9cd0b71b5]::util::run_in_thread_pool_with_globals<rustc_interface[5fea8bf9cd0b71b5]::interface::run_compiler<core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>, rustc_driver_impl[945e9afaf49c7d35]::run_compiler::{closure#0}>::{closure#1}, core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>>::{closure#0}, core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>>
40: 0x7f719e8cd9eb - <<std[56fe22ad9ea837fd]::thread::Builder>::spawn_unchecked_<rustc_interface[5fea8bf9cd0b71b5]::util::run_in_thread_with_globals<rustc_interface[5fea8bf9cd0b71b5]::util::run_in_thread_pool_with_globals<rustc_interface[5fea8bf9cd0b71b5]::interface::run_compiler<core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>, rustc_driver_impl[945e9afaf49c7d35]::run_compiler::{closure#0}>::{closure#1}, core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>>::{closure#0}, core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>>::{closure#1} as core[5ba82ee3405aa490]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
41: 0x7f719e8ce4b9 - std::sys::pal::unix::thread::Thread::new::thread_start::hb3d6392adeea417c
42: 0x7f7198a6bac3 - start_thread
at ./nptl/pthread_create.c:442:8
43: 0x7f7198afd850 - __GI___clone3
at ./misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
44: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: compiler flags: --crate-type staticlib -C link-dead-code -C debuginfo=2 -C opt-level=3 -Z mir-opt-level=3
query stack during panic:
#0 [typeck] type-checking `data`
#1 [analysis] running analysis passes on this crate
end of query stack
error: aborting due to 6 previous errors
Some errors have detailed explanations: E0261, E0262, E0277, E0392.
For more information about an error, try `rustc --explain E0261`.
```
### Backtrace
```
thread 'rustc' panicked at /rust/deps/ena-0.14.3/src/snapshot_vec.rs:199:10:
index out of bounds: the len is 7 but the index is 7
stack backtrace:
0: 0x7f719d25517a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::hfadad24fb33e3d1a
1: 0x7f719da040a6 - core::fmt::write::h42d25fbda60cd99f
2: 0x7f719edc3351 - std::io::Write::write_fmt::hc2819193e80b365e
3: 0x7f719d254fd2 - std::sys::backtrace::BacktraceLock::print::h9450230402d77664
4: 0x7f719d2574d6 - std::panicking::default_hook::{{closure}}::h739047d4d787c596
5: 0x7f719d257320 - std::panicking::default_hook::h203d1229480f37a5
   6: 0x7f719c2d2269 - std[56fe22ad9ea837fd]::panicking::update_hook::<alloc[b5641001d343df5f]::boxed::Box<rustc_driver_impl[945e9afaf49c7d35]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x7f719d257be8 - std::panicking::rust_panic_with_hook::h657fdcc17f7e2546
8: 0x7f719d2579ba - std::panicking::begin_panic_handler::{{closure}}::h6c1a7592f2611ed5
9: 0x7f719d255629 - std::sys::backtrace::__rust_end_short_backtrace::h3e1efd1ff0b15465
10: 0x7f719d25767c - rust_begin_unwind
11: 0x7f7199ccd320 - core::panicking::panic_fmt::h41647251c9f15c53
12: 0x7f719b9f0b1b - core::panicking::panic_bounds_check::h66515744fb563c4b
  13: 0x7f719dacc1f6 - <rustc_middle[f0eb6ba890d0a9bb]::ty::Ty as rustc_type_ir[8408d34320f8a6fb]::fold::TypeSuperFoldable<rustc_middle[f0eb6ba890d0a9bb]::ty::context::TyCtxt>>::try_super_fold_with::<rustc_infer[21ddc8b2a5f19898]::infer::resolve::OpportunisticVarResolver>
  14: 0x7f719dac6323 - <&rustc_middle[f0eb6ba890d0a9bb]::ty::list::RawList<(), rustc_middle[f0eb6ba890d0a9bb]::ty::generic_args::GenericArg> as rustc_type_ir[8408d34320f8a6fb]::fold::TypeFoldable<rustc_middle[f0eb6ba890d0a9bb]::ty::context::TyCtxt>>::try_fold_with::<rustc_infer[21ddc8b2a5f19898]::infer::resolve::OpportunisticVarResolver>
  15: 0x7f719dac9e17 - <rustc_middle[f0eb6ba890d0a9bb]::ty::Ty as rustc_type_ir[8408d34320f8a6fb]::fold::TypeSuperFoldable<rustc_middle[f0eb6ba890d0a9bb]::ty::context::TyCtxt>>::try_super_fold_with::<rustc_infer[21ddc8b2a5f19898]::infer::resolve::OpportunisticVarResolver>
  16: 0x7f719dac8554 - <rustc_middle[f0eb6ba890d0a9bb]::ty::Ty as rustc_type_ir[8408d34320f8a6fb]::fold::TypeSuperFoldable<rustc_middle[f0eb6ba890d0a9bb]::ty::context::TyCtxt>>::try_super_fold_with::<rustc_infer[21ddc8b2a5f19898]::infer::resolve::OpportunisticVarResolver>
  17: 0x7f719d01a962 - <rustc_infer[21ddc8b2a5f19898]::infer::resolve::OpportunisticVarResolver as rustc_type_ir[8408d34320f8a6fb]::fold::FallibleTypeFolder<rustc_middle[f0eb6ba890d0a9bb]::ty::context::TyCtxt>>::try_fold_ty
18: 0x7f719d087548 - <rustc_trait_selection[c0f45c4e16f8dab6]::error_reporting::TypeErrCtxt>::same_type_modulo_infer::<rustc_middle[f0eb6ba890d0a9bb]::ty::Ty>
19: 0x7f719d0fcaba - <rustc_trait_selection[c0f45c4e16f8dab6]::error_reporting::TypeErrCtxt>::note_type_err
20: 0x7f719d084350 - <rustc_trait_selection[c0f45c4e16f8dab6]::error_reporting::TypeErrCtxt>::report_and_explain_type_error
21: 0x7f719c62446e - <rustc_hir_typeck[9dbf9add14d719fd]::fn_ctxt::FnCtxt>::report_arg_errors
22: 0x7f719a103be3 - <rustc_hir_typeck[9dbf9add14d719fd]::fn_ctxt::FnCtxt>::confirm_builtin_call
23: 0x7f719e6fa2bf - <rustc_hir_typeck[9dbf9add14d719fd]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
24: 0x7f719e6f4736 - <rustc_hir_typeck[9dbf9add14d719fd]::fn_ctxt::FnCtxt>::check_block_with_expected
25: 0x7f719e6fabb4 - <rustc_hir_typeck[9dbf9add14d719fd]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
26: 0x7f719dc4df5b - rustc_hir_typeck[9dbf9add14d719fd]::check::check_fn
27: 0x7f719dc43bac - rustc_hir_typeck[9dbf9add14d719fd]::typeck
28: 0x7f719dc43553 - rustc_query_impl[1357963d8dd30e8b]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[1357963d8dd30e8b]::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[f0eb6ba890d0a9bb]::query::erase::Erased<[u8; 8usize]>>
29: 0x7f719e123681 - rustc_query_system[887bb79932b1d8c1]::query::plumbing::try_execute_query::<rustc_query_impl[1357963d8dd30e8b]::DynamicConfig<rustc_query_system[887bb79932b1d8c1]::query::caches::VecCache<rustc_span[db86d96c2ae2e3a4]::def_id::LocalDefId, rustc_middle[f0eb6ba890d0a9bb]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[1357963d8dd30e8b]::plumbing::QueryCtxt, false>
30: 0x7f719e121b4d - rustc_query_impl[1357963d8dd30e8b]::query_impl::typeck::get_query_non_incr::__rust_end_short_backtrace
31: 0x7f719e1217c7 - <rustc_middle[f0eb6ba890d0a9bb]::hir::map::Map>::par_body_owners::<rustc_hir_analysis[af6e6fecb0b5810e]::check_crate::{closure#4}>::{closure#0}
32: 0x7f719e11f799 - rustc_hir_analysis[af6e6fecb0b5810e]::check_crate
33: 0x7f719e268aca - rustc_interface[5fea8bf9cd0b71b5]::passes::run_required_analyses
34: 0x7f719e80861e - rustc_interface[5fea8bf9cd0b71b5]::passes::analysis
35: 0x7f719e8085ef - rustc_query_impl[1357963d8dd30e8b]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[1357963d8dd30e8b]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[f0eb6ba890d0a9bb]::query::erase::Erased<[u8; 1usize]>>
36: 0x7f719e992cee - rustc_query_system[887bb79932b1d8c1]::query::plumbing::try_execute_query::<rustc_query_impl[1357963d8dd30e8b]::DynamicConfig<rustc_query_system[887bb79932b1d8c1]::query::caches::SingleCache<rustc_middle[f0eb6ba890d0a9bb]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[1357963d8dd30e8b]::plumbing::QueryCtxt, false>
37: 0x7f719e9929ce - rustc_query_impl[1357963d8dd30e8b]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
38: 0x7f719e88707a - rustc_interface[5fea8bf9cd0b71b5]::interface::run_compiler::<core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>, rustc_driver_impl[945e9afaf49c7d35]::run_compiler::{closure#0}>::{closure#1}
39: 0x7f719e8cd5d0 - std[56fe22ad9ea837fd]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[5fea8bf9cd0b71b5]::util::run_in_thread_with_globals<rustc_interface[5fea8bf9cd0b71b5]::util::run_in_thread_pool_with_globals<rustc_interface[5fea8bf9cd0b71b5]::interface::run_compiler<core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>, rustc_driver_impl[945e9afaf49c7d35]::run_compiler::{closure#0}>::{closure#1}, core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>>::{closure#0}, core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>>
40: 0x7f719e8cd9eb - <<std[56fe22ad9ea837fd]::thread::Builder>::spawn_unchecked_<rustc_interface[5fea8bf9cd0b71b5]::util::run_in_thread_with_globals<rustc_interface[5fea8bf9cd0b71b5]::util::run_in_thread_pool_with_globals<rustc_interface[5fea8bf9cd0b71b5]::interface::run_compiler<core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>, rustc_driver_impl[945e9afaf49c7d35]::run_compiler::{closure#0}>::{closure#1}, core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>>::{closure#0}, core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>>::{closure#1} as core[5ba82ee3405aa490]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
41: 0x7f719e8ce4b9 - std::sys::pal::unix::thread::Thread::new::thread_start::hb3d6392adeea417c
42: 0x7f7198a6bac3 - start_thread
at ./nptl/pthread_create.c:442:8
43: 0x7f7198afd850 - __GI___clone3
at ./misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
44: 0x0 - <unknown>
```
### Anything else?
This issue is similar to https://github.com/rust-lang/rust/issues/122098; however, https://github.com/rust-lang/rust/issues/122098 does not report an ICE on the current nightly version. The difference is the addition of a `main` function. | I-ICE,T-compiler,C-bug,S-bug-has-test | low | Critical |
2,643,375,357 | kubernetes | DRA: detect stale DRA plugin sockets | ### What would you like to be added?
@klueska observed that after uninstalling his DRA driver *without* removing the Unix Domain socket used for gRPC towards the kubelet, kubelet didn't unregister the driver. We may have to add some liveness probing to `pkg/kubelet/cm/dra/plugin/registration.go`.
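To illustrate the idea, here is a minimal, hypothetical sketch of such a probe (not actual kubelet code; the `socketAlive` helper and the paths are made up for illustration). Dialing the socket fails once the driver process is gone, even while the stale socket file is still on disk:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// socketAlive reports whether something is still listening on a plugin's
// Unix domain socket. After a driver is uninstalled without cleanup, the
// socket file may remain, but dialing it fails, which lets the kubelet
// distinguish a live plugin from a stale socket.
func socketAlive(path string) bool {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	// Illustrative path; real DRA sockets live under the kubelet's
	// plugin registration directory.
	fmt.Println(socketAlive("/tmp/stale-dra-driver.sock"))
}
```

A real implementation would presumably run such a check periodically from the registration code and, after repeated failures, unregister the driver and clean up its ResourceSlices.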
/sig node
/wg device-management
/priority backlog
### Why is this needed?
The kubelet needs to remove ResourceSlices on behalf of a defunct DRA driver. | priority/backlog,sig/node,kind/feature,needs-triage,wg/device-management | low | Major |
2,643,383,849 | ui | [Bug]: Triggering a form within a sheet that's embedded in a form on a different sheet | ### Describe the bug
I have a sheet with a form, and another sheet with its own form. Submitting the form on the second sheet triggers the form on the first sheet.
Create product
```ts
"use client";
import { UnitsInput } from "@/client/components/custom/units-input";
import { Button } from "@/client/components/ui/button";
import {
Form,
FormControl,
FormDescription,
FormField,
FormItem,
FormLabel,
FormMessage,
} from "@/client/components/ui/form";
import { Input } from "@/client/components/ui/input";
import {
Sheet,
SheetContent,
SheetDescription,
SheetHeader,
SheetTitle,
SheetTrigger,
} from "@/client/components/ui/sheet";
import { Textarea } from "@/client/components/ui/textarea";
import { SOMETHING_WENT_WRONG } from "@/common/constants";
import { createProductSchema } from "@/common/validations";
import { createProductAction } from "@/server/actions/product";
import { zodResolver } from "@hookform/resolvers/zod";
import { useAction } from "next-safe-action/hooks";
import { useRouter } from "next/navigation";
import { useForm } from "react-hook-form";
import { toast } from "sonner";
import { z } from "zod";
const schema = createProductSchema;
type Payload = z.infer<typeof schema>;
export const Create = () => {
const router = useRouter();
const form = useForm<Payload>({
resolver: zodResolver(schema),
values: {
name: "",
unitId: "",
price: 1,
shortDescription: "",
description: "",
},
});
const createProduct = useAction(createProductAction, {
onSuccess: () => {
toast.success("Product created successfully");
form.reset();
router.refresh();
},
onError: ({ error }) => {
toast.error(error.serverError || SOMETHING_WENT_WRONG);
},
});
const onSubmit = (payload: Payload) => {
createProduct.execute(payload);
};
return (
<Sheet>
<SheetTrigger asChild>
<Button size="sm" variant="outline">
New product
</Button>
</SheetTrigger>
<SheetContent className="flex flex-col overflow-y-auto">
<SheetHeader>
<SheetTitle>New product</SheetTitle>
<SheetDescription>
You can add new products from here.
</SheetDescription>
</SheetHeader>
<Form {...form}>
<form
className="flex flex-col gap-4"
onSubmit={form.handleSubmit(onSubmit)}
>
<FormField
name="name"
control={form.control}
render={({ field }) => (
<FormItem>
<FormLabel>Name</FormLabel>
<FormControl>
<Input {...field} placeholder="Fearux" />
</FormControl>
<FormDescription>The name of the product.</FormDescription>
<FormMessage />
</FormItem>
)}
/>
<FormField
name="unitId"
control={form.control}
render={() => (
<FormItem>
<FormLabel>UnitId</FormLabel>
<FormControl>
<UnitsInput />
</FormControl>
<FormDescription>The unit of the product.</FormDescription>
<FormMessage />
</FormItem>
)}
/>
<FormField
name="price"
control={form.control}
render={({ field }) => (
<FormItem>
<FormLabel>Price</FormLabel>
<FormControl>
<Input
{...field}
onChange={(e) =>
field.onChange(parseFloat(e.target.value))
}
placeholder="Fearux"
type="number"
min="0.001"
step="0.001"
/>
</FormControl>
<FormDescription>The price of the product.</FormDescription>
<FormMessage />
</FormItem>
)}
/>
<FormField
name="shortDescription"
control={form.control}
render={({ field }) => (
<FormItem>
<FormLabel>Short description</FormLabel>
<FormControl>
<Input {...field} placeholder="This product is..." />
</FormControl>
<FormDescription>
The short description of the product.
</FormDescription>
<FormMessage />
</FormItem>
)}
/>
<FormField
name="description"
control={form.control}
render={({ field }) => (
<FormItem>
<FormLabel>Description</FormLabel>
<FormControl>
<Textarea
{...field}
placeholder="This product is..."
rows={8}
/>
</FormControl>
<FormDescription>
The description of the product.
</FormDescription>
<FormMessage />
</FormItem>
)}
/>
<div>
<Button type="submit" pending={createProduct.isPending}>
Add
</Button>
</div>
</form>
</Form>
</SheetContent>
</Sheet>
);
};
```
Units input
```ts
import { Plus } from "lucide-react";
import { Button } from "../ui/button";
import { Combobox } from "../ui/combobox";
import { UnitsCreate } from "./units-create";
import { useQuery } from "@tanstack/react-query";
import { useState } from "react";
import { getUnitsAction } from "@/server/actions/unit";
import { CustomError } from "@/server/lib/action";
import { SOMETHING_WENT_WRONG } from "@/common/constants";
import { Skeleton } from "../ui/skeleton";
import debounce from "debounce";
export const UnitsInput = () => {
const [query, setQuery] = useState("");
const unitsQuery = useQuery({
queryKey: ["units", query],
placeholderData: (phd) => phd,
queryFn: async () => {
const response = await getUnitsAction({
query,
page: 1,
});
if (!response?.data) {
throw new CustomError(response?.serverError || SOMETHING_WENT_WRONG);
}
return response.data.data;
},
});
const onQuery = debounce(setQuery, 300);
if (unitsQuery.isSuccess) {
return (
<div className="flex items-center gap-2">
<Combobox
onQuery={onQuery}
className="flex-1"
items={unitsQuery.data.map((unit) => ({
label: unit.name,
value: unit.id,
}))}
/>
<UnitsCreate refetch={() => unitsQuery.refetch()}>
<Button size="icon" variant="outline" className="shrink-0">
<Plus className="w-4 h-4" />
</Button>
</UnitsCreate>
</div>
);
}
return <Skeleton className="h-9" />;
};
```
Create unit
```ts
"use client";
import { Button } from "@/client/components/ui/button";
import {
Form,
FormControl,
FormDescription,
FormField,
FormItem,
FormLabel,
FormMessage,
} from "@/client/components/ui/form";
import { Input } from "@/client/components/ui/input";
import {
Sheet,
SheetContent,
SheetDescription,
SheetHeader,
SheetTitle,
SheetTrigger,
} from "@/client/components/ui/sheet";
import { SOMETHING_WENT_WRONG } from "@/common/constants";
import { createUnitSchema } from "@/common/validations";
import { createUnitAction } from "@/server/actions/unit";
import { zodResolver } from "@hookform/resolvers/zod";
import { useAction } from "next-safe-action/hooks";
import { useRouter } from "next/navigation";
import { useForm } from "react-hook-form";
import { toast } from "sonner";
import { z } from "zod";
const schema = createUnitSchema;
type Payload = z.infer<typeof schema>;
interface UnitsCreateProps {
children: React.ReactNode;
refetch?: () => void;
}
export const UnitsCreate: React.FC<UnitsCreateProps> = ({
children,
refetch,
}) => {
const router = useRouter();
const form = useForm<Payload>({
resolver: zodResolver(schema),
values: {
name: "",
},
});
const createUnit = useAction(createUnitAction, {
onSuccess: () => {
toast.success("Unit created successfully");
(refetch || router.refresh)();
form.reset();
},
onError: ({ error }) => {
toast.error(error.serverError || SOMETHING_WENT_WRONG);
},
});
const onSubmit = (payload: Payload) => {
createUnit.execute(payload);
};
return (
<Sheet>
<SheetTrigger asChild>{children}</SheetTrigger>
<SheetContent className="flex flex-col" side="left">
<SheetHeader>
<SheetTitle>New unit</SheetTitle>
<SheetDescription>You can add new units from here.</SheetDescription>
</SheetHeader>
<Form {...form}>
<form
className="flex flex-col gap-4"
onSubmit={form.handleSubmit(onSubmit)}
>
<FormField
name="name"
control={form.control}
render={({ field }) => (
<FormItem>
<FormLabel>Name</FormLabel>
<FormControl>
<Input {...field} placeholder="kg" />
</FormControl>
<FormDescription>The name of the unit.</FormDescription>
<FormMessage />
</FormItem>
)}
/>
<div>
<Button type="submit" pending={createUnit.isPending}>
Add
</Button>
</div>
</form>
</Form>
</SheetContent>
</Sheet>
);
};
```
https://github.com/user-attachments/assets/f18ed23e-f9fb-48bd-8eed-b5af3ed9c360
### Affected component/components
sheet, form, button
### How to reproduce
Render a sheet with a form inside. Inside that form, render another sheet containing a second form. Submit the second form and observe that the first form's submit handler fires as well.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
MBP m3 pro
sequoia 15.1
Google chrome Version 130.0.6723.92 (Official Build) (arm64)
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,643,389,903 | pytorch | torch.set_autocast_enabled is not working in torch.compile(fullgraph=True) | ### ๐ Describe the bug
Dynamo creates a graph break around `set_autocast_enabled` causing fullgraph=True mode to fail. Since `torch.autocast` context manager is supported in Dynamo its lower-level component of disabling or enabling autocast could also be supported.
### Error logs
```py
Unsupported: Graph break due to unsupported builtin torch.set_autocast_enabled. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
from user code:
File "<ipython-input-6-85cbde30c3a9>", line 7, in f
set_autocast_enabled("cuda", False)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
### Minified repro
```py
from torch import set_autocast_enabled
import torch
@torch.compile(fullgraph=True)
def f(x):
try:
set_autocast_enabled("cuda", False)
return x @ x
finally:
set_autocast_enabled("cuda", True)
x = torch.randn(4, 4, device="cuda")
with torch.autocast("cuda", dtype=torch.bfloat16):
assert f(x).dtype == torch.float32
```
### Versions
`2.6.0a0+git8b08559`
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Critical |
2,643,396,363 | ant-design | Clicking an element in the DatePicker popup panel incorrectly triggers onBlur | ### Reproduction link
[5.12.8 (does not trigger onBlur)](https://codesandbox.io/p/sandbox/suspicious-williams-jnzm76)
[5.21.6 (triggers onBlur)](https://codesandbox.io/p/sandbox/heuristic-buck-4jy484)
### Steps to reproduce
1. Click the DatePicker
2. Click any date in the popup panel
### What is expected?
The onBlur callback should not be triggered.
### What is actually happening?
The onBlur callback is triggered.
| Environment | Info |
| --- | --- |
| antd | 5.21.6 |
| React | 18.2.0 |
| System | macos Sonoma 14.2.1 |
| Browser | Chrome 114.0.5735.133 |
---
This problem appeared after upgrading from 5.12.8 to 5.21.6; I'm not sure which version introduced the issue.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | ๐ฃ Discussion,Inactive | low | Major |
2,643,410,558 | vscode | Vscode does not open links in default browser |
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version:
```
Version: 1.95.1
Commit: 65edc4939843c90c34d61f4ce11704f09d3e5cb6
Date: 2024-10-31T05:14:54.222Z
Electron: 32.2.1
ElectronBuildId: 10427718
Chromium: 128.0.6613.186
Node.js: 20.18.0
V8: 12.8.374.38-electron.0
OS: Linux x64 5.15.0-124-generic snap
```
- OS Version:
```
Distributor ID: Ubuntu
Description: Ubuntu 22.04.3 LTS
Release: 22.04
Codename: jammy
```
Steps to Reproduce:
1. Enter a link in the editor such as `http://duckduckgo.com`
2. Ctrl+Click the link
3. Wrong browser is used to open the link
My default browser is Firefox, and if I enter the command `open http://duckduckgo.com` in the terminal, it opens in the correct Firefox window, but VS Code uses the wrong browser (opens in Chrome). This issue is recent and I've noticed it in v1.95.1; previously it worked fine.
If you can provide details on which MIME type the links use, I can further inspect the default application for that MIME type; `text/html` is correctly configured to open in Firefox, hence I'm not sure where the issue lies. Other applications open the correct browser, and I've only noticed the issue with VS Code. | bug,linux,snap | low | Critical |
2,643,422,236 | angular | Issue with withViewTransitions when used in iframe causing navigation to stall in Angular applications | ### Which @angular/* package(s) are the source of the bug?
router
### Is this a regression?
Yes
### Description
**Browser/Platform:**
Safari on iOS 18.x
**Current Behavior:**
When using the withViewTransitions feature in an Angular application, which is embedded in a different website via an iframe, the navigation changes the URL but does not render the new component. The application remains stuck on the initial page despite the URL change being reflected correctly in the address bar.
**Expected Behavior:**
When navigating to a new route within an Angular application embedded in an iframe, the new component should render correctly, and the URL should update without any issues.
**Reproduction:**
1. Create an Angular application with routes configured using withViewTransitions.
2. Embed this application in an iframe on another webpage.
3. Perform a navigation within the iframe-based application.
### Please provide a link to a minimal reproduction of the bug
_No response_
### Please provide the exception or error you saw
_No response_
### Please provide the environment you discovered this bug in (run `ng version`)
Angular CLI: 18.2.11
Node: 20.12.2
Package Manager: npm 10.5.0
OS: linux x64
Angular: 18.2.11
... animations, build, cli, common, compiler, compiler-cli, core
... forms, language-service, localize, platform-browser
... platform-browser-dynamic, platform-server, router
... service-worker, ssr
Package Version
------------------------------------------------------
@angular-devkit/architect 0.1802.11
@angular-devkit/core 18.2.11
@angular-devkit/schematics 18.2.11
@angular/cdk 18.2.12
@angular/fire 18.0.1
@angular/google-maps 18.2.12
@schematics/angular 18.2.11
rxjs 7.8.1
typescript 5.4.5
webpack 5.96.1
zone.js 0.14.10
### Anything else?
The problem is resolved if withViewTransitions is removed. | area: router | low | Critical |
2,643,427,444 | vscode | Is "Successfully signed out" notification really needed | 1. Sign out from an account in VS Code
2. Notice notification saying "Successfully signed out"
3. Though the Account view will most likely show a badge, since we now need you to sign in for something to light up
Do we really need the notification? Since we have the Account view changing and confirming to the user that everything is ok.

| feature-request,authentication | low | Minor |
2,643,451,611 | PowerToys | Image Resizer | ### Description of the new feature / enhancement
Have the ability to choose a group of image sizes, in order to resize the selected images into the different sizes that make up the group.
### Scenario when this would be used?
If I want to resize 10 images into 3 different sizes, it would be useful.
### Supporting information
Many CMSes resize uploaded pictures into several sizes, and this new feature would be useful for many developers and webmasters. | Needs-Triage | low | Minor |
2,643,485,849 | rust | [ICE]: maybe try to call `try_normalize_erasing_regions` instead |
### Code
```Rust
use std::hint::black_box;
trait Func {
type Ret: Id;
}
trait Id {
type Assoc;
}
fn main() {}
impl Id for i32 {
type Assoc = i32;
}
impl<F: FnOnce() -> R, R: Id> Func for F {
type Ret = R;
}
fn bar() -> impl Copy + Id {
0u32
}
struct Foo<T: Func> {
_func: T,
value: Option<<<T as Func>::Ret as Id>::Assoc>,
}
fn main() {
let mut fn_def = black_box(Foo {
_func: bar,
value: None,
});
let fn_ptr = black_box(Foo {
_func: bar as fn() -> _,
value: None,
});
fn_def.value = fn_ptr.value;
black_box(fn_def);
}
```
### Meta
`rustc --version --verbose`:
```
rustc 1.84.0-nightly (b91a3a056 2024-11-07)
binary: rustc
commit-hash: b91a3a05609a46f73d23e0995ae7ebb4a4f429a5
commit-date: 2024-11-07
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.3
```
### Error output
```
error[E0428]: the name `main` is defined multiple times
--> mutant.rs:22:1
|
8 | fn main() {}
| --------- previous definition of the value `main` here
...
22 | fn main() {
| ^^^^^^^^^ `main` redefined here
|
= note: `main` must be defined only once in the value namespace of this module
error[E0277]: the trait bound `u32: Id` is not satisfied
--> mutant.rs:15:13
|
15 | fn bar() -> impl Copy + Id {
| ^^^^^^^^^^^^^^ the trait `Id` is not implemented for `u32`
16 | 0u32
| ---- return type was inferred to be `u32` here
|
= help: the trait `Id` is implemented for `i32`
error: internal compiler error: compiler/rustc_middle/src/ty/normalize_erasing_regions.rs:169:90: Failed to normalize std::option::Option<Alias(Projection, AliasTy { args: [Alias(Projection, AliasTy { args: [FnDef(DefId(0:15 ~ mutant[20f1]::bar), [])], def_id: DefId(0:5 ~ mutant[20f1]::Func::Ret), .. })], def_id: DefId(0:7 ~ mutant[20f1]::Id::Assoc), .. })>, maybe try to call `try_normalize_erasing_regions` instead
thread 'rustc' panicked at compiler/rustc_middle/src/ty/normalize_erasing_regions.rs:169:90:
Box<dyn Any>
stack backtrace:
...
note: compiler flags: --crate-type staticlib -C link-dead-code -C debuginfo=2 -C opt-level=3 -Z mir-opt-level=3
query stack during panic:
#0 [mir_drops_elaborated_and_const_checked] elaborating drops for `main`
#1 [analysis] running analysis passes on this crate
end of query stack
error: aborting due to 3 previous errors
Some errors have detailed explanations: E0277, E0428.
For more information about an error, try `rustc --explain E0277`.
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Backtrace</strong></summary>
<p>
```
stack backtrace:
0: 0x7fde3585517a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::hfadad24fb33e3d1a
1: 0x7fde360040a6 - core::fmt::write::h42d25fbda60cd99f
2: 0x7fde373c3351 - std::io::Write::write_fmt::hc2819193e80b365e
3: 0x7fde35854fd2 - std::sys::backtrace::BacktraceLock::print::h9450230402d77664
4: 0x7fde358574d6 - std::panicking::default_hook::{{closure}}::h739047d4d787c596
5: 0x7fde35857320 - std::panicking::default_hook::h203d1229480f37a5
6: 0x7fde348d2269 - std[56fe22ad9ea837fd]::panicking::update_hook::<alloc[b5641001d343df5f]::boxed::Box<rustc_driver_impl[945e9afaf49c7d35]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x7fde35857be8 - std::panicking::rust_panic_with_hook::h657fdcc17f7e2546
8: 0x7fde3490c191 - std[56fe22ad9ea837fd]::panicking::begin_panic::<rustc_errors[43c84716ac990581]::ExplicitBug>::{closure#0}
9: 0x7fde348ff166 - std[56fe22ad9ea837fd]::sys::backtrace::__rust_end_short_backtrace::<std[56fe22ad9ea837fd]::panicking::begin_panic<rustc_errors[43c84716ac990581]::ExplicitBug>::{closure#0}, !>
10: 0x7fde348feefe - std[56fe22ad9ea837fd]::panicking::begin_panic::<rustc_errors[43c84716ac990581]::ExplicitBug>
11: 0x7fde34915e71 - <rustc_errors[43c84716ac990581]::diagnostic::BugAbort as rustc_errors[43c84716ac990581]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
12: 0x7fde34f8f2c3 - rustc_middle[f0eb6ba890d0a9bb]::util::bug::opt_span_bug_fmt::<rustc_span[db86d96c2ae2e3a4]::span_encoding::Span>::{closure#0}
13: 0x7fde34f758ba - rustc_middle[f0eb6ba890d0a9bb]::ty::context::tls::with_opt::<rustc_middle[f0eb6ba890d0a9bb]::util::bug::opt_span_bug_fmt<rustc_span[db86d96c2ae2e3a4]::span_encoding::Span>::{closure#0}, !>::{closure#0}
14: 0x7fde34f7574b - rustc_middle[f0eb6ba890d0a9bb]::ty::context::tls::with_context_opt::<rustc_middle[f0eb6ba890d0a9bb]::ty::context::tls::with_opt<rustc_middle[f0eb6ba890d0a9bb]::util::bug::opt_span_bug_fmt<rustc_span[db86d96c2ae2e3a4]::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
15: 0x7fde331e7300 - rustc_middle[f0eb6ba890d0a9bb]::util::bug::bug_fmt
16: 0x7fde368f39bc - <rustc_middle[f0eb6ba890d0a9bb]::ty::normalize_erasing_regions::NormalizeAfterErasingRegionsFolder as rustc_type_ir[8408d34320f8a6fb]::fold::TypeFolder<rustc_middle[f0eb6ba890d0a9bb]::ty::context::TyCtxt>>::fold_ty
17: 0x7fde3643e8af - <rustc_mir_dataflow[d330eec8ae5b9552]::elaborate_drops::DropCtxt<rustc_mir_transform[c62e463fc59f8bd1]::elaborate_drops::ElaborateDropsCtxt>>::elaborate_drop
18: 0x7fde32a290dd - <rustc_mir_transform[c62e463fc59f8bd1]::elaborate_drops::ElaborateDrops as rustc_mir_transform[c62e463fc59f8bd1]::pass_manager::MirPass>::run_pass
19: 0x7fde36009348 - rustc_mir_transform[c62e463fc59f8bd1]::run_analysis_to_runtime_passes
20: 0x7fde362ddb58 - rustc_mir_transform[c62e463fc59f8bd1]::mir_drops_elaborated_and_const_checked
21: 0x7fde362dd45b - rustc_query_impl[1357963d8dd30e8b]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[1357963d8dd30e8b]::query_impl::mir_drops_elaborated_and_const_checked::dynamic_query::{closure#2}::{closure#0}, rustc_middle[f0eb6ba890d0a9bb]::query::erase::Erased<[u8; 8usize]>>
22: 0x7fde36723681 - rustc_query_system[887bb79932b1d8c1]::query::plumbing::try_execute_query::<rustc_query_impl[1357963d8dd30e8b]::DynamicConfig<rustc_query_system[887bb79932b1d8c1]::query::caches::VecCache<rustc_span[db86d96c2ae2e3a4]::def_id::LocalDefId, rustc_middle[f0eb6ba890d0a9bb]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[1357963d8dd30e8b]::plumbing::QueryCtxt, false>
23: 0x7fde367230cd - rustc_query_impl[1357963d8dd30e8b]::query_impl::mir_drops_elaborated_and_const_checked::get_query_non_incr::__rust_end_short_backtrace
24: 0x7fde36869488 - rustc_interface[5fea8bf9cd0b71b5]::passes::run_required_analyses
25: 0x7fde36e0861e - rustc_interface[5fea8bf9cd0b71b5]::passes::analysis
26: 0x7fde36e085ef - rustc_query_impl[1357963d8dd30e8b]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[1357963d8dd30e8b]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[f0eb6ba890d0a9bb]::query::erase::Erased<[u8; 1usize]>>
27: 0x7fde36f92cee - rustc_query_system[887bb79932b1d8c1]::query::plumbing::try_execute_query::<rustc_query_impl[1357963d8dd30e8b]::DynamicConfig<rustc_query_system[887bb79932b1d8c1]::query::caches::SingleCache<rustc_middle[f0eb6ba890d0a9bb]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[1357963d8dd30e8b]::plumbing::QueryCtxt, false>
28: 0x7fde36f929ce - rustc_query_impl[1357963d8dd30e8b]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
29: 0x7fde36e8707a - rustc_interface[5fea8bf9cd0b71b5]::interface::run_compiler::<core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>, rustc_driver_impl[945e9afaf49c7d35]::run_compiler::{closure#0}>::{closure#1}
30: 0x7fde36ecd5d0 - std[56fe22ad9ea837fd]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[5fea8bf9cd0b71b5]::util::run_in_thread_with_globals<rustc_interface[5fea8bf9cd0b71b5]::util::run_in_thread_pool_with_globals<rustc_interface[5fea8bf9cd0b71b5]::interface::run_compiler<core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>, rustc_driver_impl[945e9afaf49c7d35]::run_compiler::{closure#0}>::{closure#1}, core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>>::{closure#0}, core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>>
31: 0x7fde36ecd9eb - <<std[56fe22ad9ea837fd]::thread::Builder>::spawn_unchecked_<rustc_interface[5fea8bf9cd0b71b5]::util::run_in_thread_with_globals<rustc_interface[5fea8bf9cd0b71b5]::util::run_in_thread_pool_with_globals<rustc_interface[5fea8bf9cd0b71b5]::interface::run_compiler<core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>, rustc_driver_impl[945e9afaf49c7d35]::run_compiler::{closure#0}>::{closure#1}, core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>>::{closure#0}, core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[5ba82ee3405aa490]::result::Result<(), rustc_span[db86d96c2ae2e3a4]::ErrorGuaranteed>>::{closure#1} as core[5ba82ee3405aa490]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
32: 0x7fde36ece4b9 - std::sys::pal::unix::thread::Thread::new::thread_start::hb3d6392adeea417c
33: 0x7fde3106bac3 - start_thread
at ./nptl/pthread_create.c:442:8
34: 0x7fde310fd850 - __GI___clone3
at ./misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
35: 0x0 - <unknown>
```
</p>
</details>
### Anything else
Could this be related to https://github.com/rust-lang/rust/issues/120267?
| I-ICE,T-compiler,C-bug | low | Critical |
2,643,554,466 | TypeScript | Error: TypeError: Cannot read properties of undefined (reading 'kind') | ### ๐ Search Terms
Hi, I have an Angular application that was running perfectly fine but recently started giving an error during Heroku deployment. No matter what I try, I keep getting the same error. Although the application runs and builds fine in my local environment, it fails when deployed to Heroku.
The error I get is:
```
- Generating browser application bundles...
✔ Browser application bundle generation complete.
Error: TypeError: Cannot read properties of undefined (reading 'kind')
    at /tmp/build_27a77d7c/node_modules/typescript/lib/typescript.js:2580:166
    at assert (/tmp/build_27a77d7c/node_modules/typescript/lib/typescript.js:2501:127)
    at Object.assertNode (/tmp/build_27a77d7c/node_modules/typescript/lib/typescript.js:2580:17)
    at visitNode (/tmp/build_27a77d7c/node_modules/typescript/lib/typescript.js:80360:18)
    at Object.visitEachChild (/tmp/build_27a77d7c/node_modules/typescript/lib/typescript.js:80766:124)
    at visitor (/tmp/build_27a77d7c/node_modules/@ngtools/webpack/src/transformers/make_transform.js:60:31)
    at visitNode (/tmp/build_27a77d7c/node_modules/typescript/lib/typescript.js:80346:23)
    at Object.visitEachChild (/tmp/build_27a77d7c/node_modules/typescript/lib/typescript.js:80764:172)
    at visitor (/tmp/build_27a77d7c/node_modules/@ngtools/webpack/src/transformers/make_transform.js:60:31)
    at visitNodes (/tmp/build_27a77d7c/node_modules/typescript/lib/typescript.js:80399:48)
    at visitLexicalEnvironment (/tmp/build_27a77d7c/node_modules/typescript/lib/typescript.js:80439:22)
    at Object.visitEachChild (/tmp/build_27a77d7c/node_modules/typescript/lib/typescript.js:80826:55)
    at visitor (/tmp/build_27a77d7c/node_modules/@ngtools/webpack/src/transformers/make_transform.js:60:31)
    at Object.visitNode (/tmp/build_27a77d7c/node_modules/typescript/lib/typescript.js:80346:23)
    at transformer (/tmp/build_27a77d7c/node_modules/@ngtools/webpack/src/transformers/make_transform.js:67:31)
    at transformSourceFileOrBundle (/tmp/build_27a77d7c/node_modules/typescript/lib/typescript.js:81505:57)
-----> Build failed
```
I have tried changing the TypeScript version and adding compiler options in tsconfig.json to ignore the TypeScript version check, but I still run into the same issue.
My package.json is:
```json
{
  "name": "app",
  "version": "2.0.0",
  "browser": {
    "fs": false,
    "path": false,
    "os": false
  },
  "scripts": {
    "ng": "ng",
    "main": "server.js",
    "heroku-postbuild": "ng build app --aot --configuration=${ENV}",
    "preinstall": "npm install --location=global @angular/cli @angular/compiler-cli --legacy-peer-deps",
    "start": "ng serve",
    "build": "ng build --prod",
    "cypress:open": "cypress open",
    "swagger": "node ./swagger.js",
    "cypress:run": "cypress run",
    "lint": "ng lint",
    "e2e": "ng e2e"
  },
  "private": true,
  "dependencies": {
    "@angular-devkit/build-angular": "^12.2.18",
    "@angular/animations": "^12.2.17",
    "@angular/cdk": "^12.2.13",
    "@angular/cli": "^12.2.18",
    "@angular/common": "^12.2.17",
    "@angular/compiler": "^12.2.17",
    "@angular/compiler-cli": "^12.2.17",
    "@angular/core": "^12.2.17",
    "@angular/flex-layout": "^12.0.0-beta.34",
    "@angular/forms": "^12.2.17",
    "@angular/language-service": "^12.2.17",
    "@angular/localize": "^12.2.17",
    "@angular/material": "^12.2.13",
    "@angular/platform-browser": "^12.2.17",
    "@angular/platform-browser-dynamic": "^12.2.17",
    "@angular/router": "^12.2.17",
    "@capacitor/android": "^5.7.0",
    "@capacitor/app": "5.0.7",
    "@capacitor/core": "^5.7.0",
    "@capacitor/dialog": "^5.0.7",
    "@capacitor/haptics": "5.0.7",
    "@capacitor/ios": "5.7.0",
    "@capacitor/keyboard": "5.0.8",
    "@capacitor/preferences": "^5.0.7",
    "@capacitor/status-bar": "5.0.7",
    "@capgo/capacitor-updater": "^5.9.0",
    "@ng-bootstrap/ng-bootstrap": "9.0.2",
    "@ngtools/webpack": "^12.2.18",
    "@ngx-translate/core": "13.0.0",
    "@ngx-translate/http-loader": "^4.0.0",
    "@types/chart.js": "^2.7.42",
    "@types/chartist": "^0.9.38",
    "@types/crypto-js": "^3.1.47",
    "@types/express": "^4.17.0",
    "@types/googlemaps": "^3.43.3",
    "@types/jasmine": "~2.8.22",
    "@types/jasminewd2": "~2.0.3",
    "@types/lodash": "4.14.135",
    "@types/node": "^11.15.54",
    "@types/socket.io": "^3.0.2",
    "@types/socket.io-client": "^3.0.0",
    "@types/uuid": "^8.3.0",
    "@types/w3c-web-usb": "^1.0.10",
    "@types/web-bluetooth": "0.0.4",
    "angular-bootstrap-md": "^11.1.0",
    "angular-cc-library": "^2.1.2",
    "angular-cli-ghpages": "^0.6.2",
    "angular-notifier": "^9.1.0",
    "angular-responsive-carousel": "^2.0.2",
    "angularx-qrcode": "^12",
    "apexcharts": "^3.44.0",
    "axios": "^1.6.1",
    "bcryptjs": "^2.4.3",
    "body-parser": "^1.18.3",
    "bootstrap": "^4.5.3",
    "chart.js": "^2.9.4",
    "chartist": "^0.11.4",
    "clover-ecomm-sdk": "^1.0.0",
    "config": "^3.3.6",
    "core-js": "^2.5.4",
    "cors": "^2.8.5",
    "cron": "^3.1.6",
    "crypto": "^1.0.1",
    "crypto-js": "^4.2.0",
    "dotenv": "^6.1.0",
    "exec": "^0.2.1",
    "express": "^4.18.1",
    "express-jwt": "^8.4.1",
    "express-subdomain": "^1.0.6",
    "font-awesome": "^4.7.0",
    "fontawesome": "^5.6.3",
    "fs": "^0.0.1-security",
    "googlemaps": "^1.12.0",
    "got": "^11.8.1",
    "hammerjs": "^2.0.8",
    "jsonwebtoken": "^9.0.2",
    "jwt-decode": "^3.1.2",
    "lodash": "^4.17.21",
    "luxon": "^3.4.4",
    "lz-string": "^1.5.0",
    "material-dashboard": "^2.1.0",
    "material-design-icons": "^3.0.1",
    "material-design-lite": "^1.3.0",
    "mdbootstrap": "^4.19.2",
    "moment": "^2.30.1",
    "mongodb": "^3.0.10",
    "mongoose": "^5.11.15",
    "mongoose-to-swagger": "^1.5.1",
    "ng-apexcharts": "1.5.12",
    "ng-chartist": "^4.1.0",
    "ng-image-slider": "^3.0.1",
    "ng-multiselect-dropdown": "^0.2.14",
    "ng-otp-input": "1.8.1",
    "ng-socket-io": "^0.2.4",
    "ngx-autosize": "^1.8.4",
    "ngx-bootstrap": "^6.2.0",
    "ngx-device-detector": "^2.0.0",
    "ngx-google-places-autocomplete": "^2.0.5",
    "ngx-guided-tour": "^1.1.11",
    "ngx-infinite-scroll": "^10.0.0",
    "ngx-material-timepicker": "5.6.0",
    "ngx-swiper-wrapper": "^10.0.0",
    "ngx-toastr": "13.2.1",
    "ngx-virtual-scroller": "^4.0.3",
    "openai": "^4.17.4",
    "path": "^0.12.7",
    "popper.js": "^1.15.0",
    "postcss": "^8.4.14",
    "request-promise": "^4.2.4",
    "resize-base64": "^1.0.12",
    "rootpath": "^0.1.2",
    "rxjs": "^6.5.2",
    "rxjs-compat": "^6.3.3",
    "simple-keyboard": "^3.7.65",
    "socket.io": "^4.6.2",
    "socket.io-client": "^4.6.2",
    "swagger-ui-express": "^5.0.0",
    "telnyx": "^1.26.0",
    "time-ago-pipe": "^1.3.2",
    "ts-node": "6.0.0",
    "tslib": "^1.9.0",
    "uuid": "^3.3.2",
    "web-animations-js": "^2.3.2",
    "zone.js": "~0.11.8"
  },
  "devDependencies": {
    "@angular-devkit/core": "^12.2.18",
    "@angular-devkit/schematics": "^12.2.18",
    "@capacitor/cli": "^5.7.0",
    "codelyzer": "^6.0.2",
    "cypress": "^13.5.0",
    "cypress-cucumber-preprocessor": "^4.3.1",
    "cypress-multi-reporters": "^1.6.0",
    "eslint-plugin-cypress": "^2.10.3",
    "jasmine-core": "~2.99.1",
    "jasmine-spec-reporter": "~4.2.1",
    "karma": "^6.4.0",
    "karma-chrome-launcher": "~2.2.0",
    "karma-coverage-istanbul-reporter": "~2.0.0",
    "karma-jasmine": "~1.1.1",
    "karma-jasmine-html-reporter": "^0.2.2",
    "ng2-charts-schematics": "^0.1.7",
    "protractor": "^7.0.0",
    "swagger-autogen": "^2.23.7",
    "swiper": "^6.8.4",
    "tslint": "6.1.3",
    "typescript": "^4.2.3"
  },
  "engines": {
    "node": "18.18.2",
    "npm": "9.8.0"
  }
}
```
I did not update the version or anything else.
### ๐ Version & Regression Information
- This changed between versions ______ and _______
- This changed in commit or PR _______
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about _________
- I was unable to test this on prior versions because _______
### โฏ Playground Link
_No response_
### ๐ป Code
```ts
// Your code here
```
### ๐ Actual behavior
The build is failing with the above-mentioned error. It only happens on the Heroku dev pipeline; it builds fine in the local environment and in the production pipeline.
### ๐ Expected behavior
The build should pass.
### Additional information about the issue
_No response_ | Needs More Info | low | Critical |
2,643,564,837 | flutter | [Feature request] Support version catalog | ### Use case
settings.gradle.kts
```kts
pluginManagement {
val flutterSdkPath = run {
val properties = java.util.Properties()
file("local.properties").inputStream().use { properties.load(it) }
val path = properties.getProperty("flutter.sdk")
checkNotNull(path) { "flutter.sdk not set in local.properties" }
path
}
includeBuild("$flutterSdkPath/packages/flutter_tools/gradle")
repositories {
google()
mavenCentral()
gradlePluginPortal()
}
}
plugins {
alias(libs.plugins.flutterPluginLoader)
alias(libs.plugins.android.application) apply false
alias(libs.plugins.kotlin.android) apply false
}
include(":app")
include(":core")
```
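For context, the `libs.*` aliases referenced above would be declared in a Gradle version catalog. A minimal sketch of what `android/gradle/libs.versions.toml` might look like for this setup — the version numbers here are illustrative assumptions, not taken from a real project:

```toml
[versions]
agp = "8.1.0"
kotlin = "1.9.0"

[plugins]
# Alias names must match the references used in settings.gradle.kts.
flutterPluginLoader = { id = "dev.flutter.flutter-plugin-loader", version = "1.0.0" }
android-application = { id = "com.android.application", version.ref = "agp" }
kotlin-android = { id = "org.jetbrains.kotlin.android", version.ref = "kotlin" }
```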
### Proposal
Referencing the catalog from `settings.gradle.kts` currently fails with:

```
e: file:///D:/Android/xx/xx/android/settings.gradle.kts:33:11: Unresolved reference: libs
```

Sample test code:
https://github.com/android-dev2015/catalogs
2,643,565,947 | pytorch | [onnx] [njt] [feature request] Export NJT-enabled SDPA / MHA ops to ORT's PackingMode Attention | ### ๐ The feature, motivation and pitch
I found that some support for NJT-enabled SDPA / MHA exists in onnxruntime: https://github.com/microsoft/onnxruntime/issues/22764 as "PackedAttention"
https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/transformers/convert_to_packing_mode.py#L317
NJT-enabled SDPA also used to exist in FasterTransformer, known as "Effective Transformer kernels": https://github.com/NVIDIA/FasterTransformer/blob/main/docs/bert_guide.md#standard-bert-and-effective-fastertransformer
I wonder if in the long term it would be good to have some example of exporting NJT ops to ORT and mapping NJT-enabled SDPA to this PackingMode directly at export time.
### Alternatives
_No response_
### Additional context
_No response_
cc @jbschlosser | module: onnx,triaged | low | Minor |
2,643,613,093 | godot | Warnings are printed as errors on the web platform | ### Tested versions
- Reproducible in: 4.3.stable
### System information
Godot v4.3.stable - Firefox 132, Chromium 130 - X11 - OpenGL (Compatibility) - dedicated NVIDIA GeForce RTX 4090 (nvidia; 565.57.01) - 13th Gen Intel(R) Core(TM) i9-13900K (32 Threads)
### Issue description
In the browser's devtools, all warnings appear as errors:

They should be printed as warnings instead, i.e. using `console.warn()`.
### Steps to reproduce
Export a project that calls `push_warning()` or uses unsupported features in the Compatibility rendering method, such as https://github.com/godotengine/godot-demo-projects/tree/master/2d/particles.
### Minimal reproduction project (MRP)
https://godotengine.github.io/godot-demo-projects/2d/particles/ | bug,platform:web,topic:porting | low | Critical |
2,643,667,999 | PowerToys | Keys like Shift and Alt don't get released after being used as triggers | ### Microsoft PowerToys version
0.86.0
### Installation method
GitHub, PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
I wanted to have media controls on a normal US English laptop keyboard that lacks the numpad.
I remapped:
Insert to Volume Mute
Page Up to Volume Up
Page Down to Volume Down
PrintScreen to Play/Pause Media
Then made the shortcuts:
Shift + Play/Pause Media = Next Track
Alt + Play/Pause Media = Previous Track
### โ๏ธ Expected Behavior
When I press Shift or Alt + PrintScreen (remapped to Play/Pause Media), it should trigger Next/Previous Track, and after I release the keys, Shift or Alt should behave normally again.
### โ Actual Behavior
After the shortcut works the first time, Shift or Alt stays stuck in the pressed state and won't release, even if I press Shift or Alt again.
This behavior has existed ever since Keyboard Manager was first released, and I have tried this on many computers. This is the first time I have had time to file a report.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,643,681,005 | node | detect-module: confusing error when parsing a CommonJS module with top-level `await` | Porting from https://github.com/nodejs/TSC/issues/1445#issuecomment-2388678002:
`index.js`:
```js
const {
getPort,
checkPort,
getRandomPort,
waitForPort,
} = require("get-port-please")
const port = await getPort()
```
Getting this:
```
Restarting 'index.js'
(node:15356) [MODULE_TYPELESS_PACKAGE_JSON] Warning: Module type of file:///C:/Users/Babak/Documents/Code/c12/index.js is not specified and it doesn't parse as CommonJS.
Reparsing as ES module because module syntax was detected. This incurs a performance overhead.
To eliminate this warning, add "type": "module" to C:\Users\Babak\Documents\Code\c12\package.json.
(Use `node --trace-warnings ...` to show where the warning was created)
file:///C:/Users/Babak/Documents/Code/c12/index.js:7
} = require("get-port-please")
^
ReferenceError: require is not defined in ES module scope, you can use import instead
at file:///C:/Users/Babak/Documents/Code/c12/index.js:7:5
at ModuleJob.run (node:internal/modules/esm/module_job:262:25)
at async onImport.tracePromise.__proto__ (node:internal/modules/esm/loader:483:26)
Node.js v22.9.0
Failed running 'index.js'
```
With this `package.json`:
```json
{
"dependencies": {
"express": "4.21.0",
"get-port-please": "3.1.2"
},
"devDependencies": {
"@types/express": "5.0.0"
}
}
```
LOL (`await`).
_Originally posted by @babakfp in https://github.com/nodejs/TSC/issues/1445#issuecomment-2388678002_
<hr>
So basically, this module fails to parse as _either_ CommonJS or as ESM, and we show the ESM parsing error message. Perhaps we should show both, or show a special message for the common use case of using top-level `await` in a CommonJS module. | module,esm | low | Critical |
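For completeness, the conventional rewrites that do parse: keep the file CommonJS and wrap the `await` in an async IIFE, or switch the whole file to ESM `import`. A runnable sketch of the CommonJS variant, with a local stub standing in for `get-port-please`:

```javascript
// Stand-in for: const { getPort } = require("get-port-please")
const getPort = () => Promise.resolve(3000);

// CommonJS has no top-level await, so wrap the awaiting code in an async IIFE.
(async () => {
  const port = await getPort();
  console.log(port);
})();
```

With this shape the file parses cleanly as CommonJS, so no ESM reparse (and no misleading `require is not defined` error) occurs.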
2,643,775,371 | tensorflow | tf.cast to int8 produce wrong number | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf2.15 - 2.17
### Custom code
No
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
The input is a long list of random numbers.
`tf.cast` to int8 produces output that is different from the NumPy cast.
The strange thing is that if you truncate the long input list to a short list, everything works fine.
The code is straightforward; you can reproduce it in the Colab linked below.
### Standalone code to reproduce the issue
```shell
https://colab.research.google.com/drive/18dYjvY6JQk79hVq8JsjcWEwIG5KBuF1v?usp=sharing
```
### Relevant log output
```shell
NumPy Casted values (int8):
[ 0 0 0 0 1 -1 0 3 -3 2 -2 0 50 21
-125 62 25 31 127 -9 117 61 123 47 -20 28 52 36
-43 -45 -84 118 37 -17 -6 -79 1 75 -45 -60 -103 -63
85 -112 76 96 -56 86 -32 -108 -105 -121 -2 121 86 -54
91 -55 36 -119 -3 -36 95 127 -105 -60 37 -9 -106 7
-31 105 13 -103 -123 79 17 -48 -108 -56 -87 -128 35 -94
-45 118 -91 86 -63 -43 77 1 -127 -16 -71 -73 -76 -15
-11]
TensorFlow Casted values (int8):
[ 0 -128 0 0 1 -1 0 3 -3 2 -2 0 127 127
127 127 127 127 127 127 127 127 127 127 127 127 127 127
127 127 127 127 127 127 127 127 127 127 127 127 127 127
127 127 127 127 127 127 127 127 127 127 127 127 127 127
127 127 127 127 127 127 127 127 127 127 127 127 127 127
127 127 127 127 127 127 127 127 127 127 127 127 127 127
127 127 127 127 127 127 127 127 127 127 127 127 -76 -15
-11]
```
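For what it's worth, the two logs look like the difference between a wrap-around (modular) cast — which the NumPy output above is consistent with — and a saturating cast that clamps out-of-range values into [-128, 127], which matches the wall of 127s in the TensorFlow output. A pure-Python illustration of the two conventions (this is only an illustration of the casting behaviors, not TensorFlow's actual implementation):

```python
def wraparound_int8(x: float) -> int:
    """C-style conversion: truncate toward zero, then wrap modulo 256."""
    return (int(x) + 128) % 256 - 128

def saturating_int8(x: float) -> int:
    """Clamp the truncated value into the representable int8 range."""
    return max(-128, min(127, int(x)))

# 7090 is out of range for int8, so the two conventions disagree.
print(wraparound_int8(7090.0))  # -> -78 (7090 % 256 = 178, which wraps to -78)
print(saturating_int8(7090.0))  # -> 127
```

In-range values agree under both conventions, which is why the small entries at the start of both logs match.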
| type:bug,comp:apis,2.17 | medium | Critical |
2,643,811,982 | neovim | interact with virtual text | ### Problem
There is no easy way to yank virtual text, e.g. inlay hints, cursorline git blame from gitsigns, or inline diff hunks (the most useful use case).
### Expected behavior
Make the cursor capable of being placed on virtual text, so one can yank text from it, while disallowing modification.
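As a partial workaround today, virtual text can be scraped through the extmark API and yanked by script — a sketch (using `ns_id = -1` to query all namespaces; the register and separator choices are arbitrary):

```lua
-- Collect the virtual text chunks attached to the cursor line
-- and copy them into the unnamed register.
local function yank_virt_text()
  local row = vim.api.nvim_win_get_cursor(0)[1] - 1  -- extmark rows are 0-indexed
  local marks = vim.api.nvim_buf_get_extmarks(
    0, -1, { row, 0 }, { row, -1 }, { details = true }
  )
  local pieces = {}
  for _, mark in ipairs(marks) do
    local details = mark[4]
    if details.virt_text then
      for _, chunk in ipairs(details.virt_text) do
        pieces[#pieces + 1] = chunk[1]  -- chunk = { text, highlight }
      end
    end
  end
  vim.fn.setreg('"', table.concat(pieces, " "))
end
```

But this is plugin-level scripting; the feature request is for first-class cursor interaction.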
2,643,839,263 | storybook | [Bug]: Play function results carry over from story to story when navigating | ### Describe the bug
Report from Storybook test EAP:
When switching from a story that includes a component test to another story without one, the test results are incorrectly carried over and displayed in the wrong story.
### Reproduction link
https://github.com/AlmarAubel/sb-playground
### Reproduction steps
1. Start Storybook.
2. Go to story Case 1.
3. Then click the Default story (don't wait for Case 1 to finish the component test).
It will always show failures, even when Case 1 normally succeeds.
Side note: Adding a sleep in the test didn't trigger this bug. I suspect the `waitFor` call (`const text = await waitFor(() => canvas.getByText(/slurp/i), { timeout: 10000 })`) is what's causing the issue.
### System
-
### Additional context
_No response_ | bug,sev:S3,addon: test | low | Critical |
2,643,844,003 | opencv | Document 0d/1d Mat, MatShape and other 5.x specific changes | ### Describe the doc issue
1. API in header
2. doc/tutorials/core/mat_the_basic_image_container/mat_the_basic_image_container.markdown
3. Python/Java tutorials?
### Fix suggestion
_No response_ | category: core,category: documentation | low | Minor |
2,643,883,012 | ui | [bug]: Progress Bar accessibility missing attributes | ### Describe the bug
The `value` of the progress bar needs to be passed down here, like this:

```tsx
<ProgressPrimitive.Root
  value={value}
```

This will ensure the accessibility attributes `aria-valuenow` and `aria-valuetext` are applied.
### Affected component/components
Progress
### How to reproduce
View the HTML element when the progress bar has progress value.
### Codesandbox/StackBlitz link
https://ui.shadcn.com/docs/components/progress
Where it works:
https://www.radix-ui.com/primitives/docs/components/progress
_No response_
### Logs
_No response_
### System Info
```bash
Browser
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,643,887,225 | deno | Dependabot support/integration | Many companies rely on GitHub Advanced Security offerings to detect vulnerabilities in codebases. Dependabot is one such tool. Its ability to keep dependencies up to date is nice but from a security perspective its ability to create alerts on vulnerabilities in dependency versions is crucial. It currently supports various [packages ecosystems](https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file#package-ecosystem) including npm, pnpm, and yarn. It does not yet [support deno](https://github.com/dependabot/dependabot-core/issues/2417).
Lack of Dependabot support/integration is a blocker for teams wanting to use deno in organizations that require Dependabot security alerting.
Ideas:
1. Work with GitHub Advanced Security to help them support deno for security updates (ideally for general version updates and private repositories/registries too but at a minimum for security updates).
2. Support some npm/pnpm/yarn lock file format. For example, if Deno can generate a package-lock.json file in the same format that npm does, then users will be able to use Dependabot today without issues. This should work short-term until more tools support deno.lock, but it could also work long-term as part of Deno's Node compatibility.
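To illustrate idea 2: if Deno emitted an npm-compatible package-lock.json, an ordinary Dependabot configuration like the sketch below would start working unchanged (the directory and schedule values are just placeholders):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"  # would pick up a Deno-generated package-lock.json
    directory: "/"
    schedule:
      interval: "weekly"
```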
2,643,979,997 | PowerToys | PowerRename improper windows size | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
PowerRename
### Steps to reproduce
- Right-click and select PowerRename
### โ๏ธ Expected Behavior
- The PowerRename window appears in the center of the screen
### โ Actual Behavior

Only part of the PowerRename window shows up in the upper left; it can't be moved or resized.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,643,984,258 | PowerToys | OCR PDF that is images (scanned document) | ### Description of the new feature / enhancement
Please could we have a way to use the Snipping Tool's OCR models for entire documents?
### Scenario when this would be used?
It's extremely useful for my workflow, but it only works on screenshots. It's much better than Text Extractor, and I'd like a PowerToy, or a full-blown Windows application, that can perform OCR on entire (scanned) documents, or even groups of documents — something like Wondershare PDFelement, perhaps. Also, why doesn't MS Office/Windows have software similar to the paid Adobe PDF creator? I'd pay for that: scan to PDF, organize, and OCR in one application.
### Supporting information
_No response_ | Idea-New PowerToy | low | Minor |
2,644,045,733 | langchain | Youtube requires login to view videoDetails | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I was trying to run this jupyter notebook
https://github.com/langchain-ai/rag-from-scratch/blob/main/rag_from_scratch_10_and_11.ipynb
Then it crashes at the beginning
```
from langchain_community.document_loaders import YoutubeLoader
docs = YoutubeLoader.from_youtube_url(
"https://www.youtube.com/watch?v=pbAd8O1Lvm4", add_video_info=True,
).load()
print(docs[0].metadata)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
  File "/home/user/anaconda3/envs/ai/lib/python3.10/site-packages/pytube/__main__.py", line 341, in title
    self._title = self.vid_info['videoDetails']['title']
KeyError: 'videoDetails'
```
### Description
After digging into why it crashed, I found that it is because this video requires login.

By modifying line 322 of `langchain_community/document_loaders/youtube.py` from

```python
yt = YouTube(f"https://www.youtube.com/watch?v={self.video_id}")
```

to

```python
yt = YouTube(f"https://www.youtube.com/watch?v={self.video_id}", use_oauth=True, allow_oauth_cache=True)
```

it asks me to log in, and then it works.

I think this "bug" can be fixed by adding these two arguments to `YoutubeLoader.from_youtube_url` and passing them down to pytube.
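Sketched concretely, the loader-side change would just store the two new flags and forward them to pytube. The class and method below are a simplified stand-in for the real `YoutubeLoader`, not its actual implementation:

```python
# Hypothetical sketch of threading OAuth flags from the loader down to pytube.
class YoutubeLoaderSketch:
    def __init__(self, video_id: str, use_oauth: bool = False,
                 allow_oauth_cache: bool = False):
        self.video_id = video_id
        self.use_oauth = use_oauth
        self.allow_oauth_cache = allow_oauth_cache

    def _build_pytube_kwargs(self) -> dict:
        # These would be passed through to pytube.YouTube(...) when
        # fetching video info, enabling the login flow when needed.
        return {
            "use_oauth": self.use_oauth,
            "allow_oauth_cache": self.allow_oauth_cache,
        }
```

Defaulting both flags to `False` would preserve the current behavior for videos that don't require login.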
### System Info
langchain 0.3.4
langchain-community 0.3.3 | ๐ค:bug | low | Critical |
2,644,118,846 | godot | Editor ignores "use native file dialog" setting when run through Steam on Linux | ### Tested versions
Reproducible in:
- v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Steam Runtime 2 (soldier) 2 - Wayland - Vulkan (Mobile) - dedicated AMD Radeon RX 7600 (RADV NAVI33) - AMD Ryzen 5 7600 6-Core Processor (12 Threads)
### Issue description
When Godot is run through Steam, the editor seems to always use the native file dialogs, even when that option is disabled in the editor settings.
This appears to have started with the most recent Steam client update, which improved Wayland integration (judging by the Steam client now scaling properly).
Note that the same Godot binary run externally from Steam doesn't exhibit this behaviour. So something in the Steam runtime environment appears to be affecting Godot in this way. I tried changing from the Soldier (2) to Scout (1) runtime, but that didn't make a difference.
I also noticed that the native dialog that pops up doesn't seem to be configured correctly, since when *loading* a script file to a Node in the scene tree, the dialog warned me about overwriting the file! :confused:
### Steps to reproduce
1. Be a Linux (+Wayland?) (+KDE?) user;
2. Run Godot editor through Steam;
3. Make sure `Use Native File Dialogs` is disabled in the editor settings;
4. Perform any file access operation, eg. `Scene -> Open Scene`
5. :eyes:
### Minimal reproduction project (MRP)
N/A | bug,platform:linuxbsd | low | Minor |
2,644,126,267 | node | Tracking Issue: Syncify the ESM Loader | The code under [`lib/internal/modules/esm`](https://github.com/nodejs/node/tree/main/lib/internal/modules/esm), a.k.a. the ESM loader, contains many functions that are async. We should refactor as many of these as possible, ideally all of them, to be synchronous. This should improve the performance of evaluating ESM code, bringing it roughly on par with the speed of running CommonJS code.
Longer term, once the ESM loader is synchronous and we land the [synchronous module customization hooks](https://github.com/nodejs/loaders/blob/main/doc/design/proposal-synchronous-hooks.md), we could deprecate monkey-patching the CommonJS loader and merge together the CommonJS and ESM loaders, eliminating duplication: https://github.com/nodejs/node/issues/50356.
This issue will track our progress syncifying the various files and functions of the ESM loader until we can get as much of it to be as synchronous as possible.
### The files to be updated, all under `lib/internal/modules`:
- [ ] `run_main.js`: `asyncRunEntryPointWithESMLoader`
- [ ] `esm/fetch_module.js`: `fetchWithRedirects`
- [ ] `esm/fetch_module.js`: `isLocalAddress`
- [ ] `esm/hooks.js`: `Hooks` class (the async methods here probably don't need updating, as they will be removed once we migrate to the synchronous customization hooks)
- [ ] `esm/hooks.js`: `nextHookFactory`
- [ ] `esm/load.js`: `getSource`
- [ ] `esm/load.js`: `defaultLoad`
- [ ] `esm/loader.js`: `ModuleLoader.eval`
- [ ] `esm/loader.js`: `ModuleLoader.getModuleJobForImport`
- [ ] `esm/loader.js`: `ModuleLoader.loadAndTranslate`
- [ ] `esm/loader.js`: `ModuleLoader.import`
- [ ] `esm/loader.js`: `ModuleLoader.load`
- [ ] `esm/module_job.js`: `ModuleJob._link`
- [ ] `esm/module_job.js`: `ModuleJob._instantiate`
- [ ] `esm/module_job.js`: `ModuleJob.run`
- [ ] `esm/module_job.js`: `ModuleJobSync.run`
- [ ] `esm/translators.js`: `wasm` handler, via `translators.set('wasm', ...`
- [ ] `esm/utils.js`: `importModuleDynamicallyCallback`
- [ ] `esm/utils.js`: `initializeHooks` (might not need updating, as we will remove this once the synchronous customization hooks land)
- [ ] `esm/worker.js`: `customizedModuleWorker` (might not need updating, as we will remove this once the synchronous customization hooks land)
- [ ] `esm/worker.js`: `handleMessage` (might not need updating, as we will remove this once the synchronous customization hooks land)
@nodejs/loaders @mcollina @JakobJingleheimer @joyeecheung | performance,esm,loaders | low | Major |
2,644,159,710 | vscode | File Search/Replace replacing multiple times (incorrectly) when double-clicking |
Type: <b>Bug</b>
If you click too fast when replacing multiple occurrences by clicking the replace button next to each found match, a replacement may be applied multiple times to the same occurrence.
The replace button should be disabled after it is clicked once.
The problem happens more often in big files and with extensions enabled, but it also happens without any extensions.
https://github.com/user-attachments/assets/2b4b4ea6-3c6a-4cb8-8690-9598cee697c3
VS Code version: Code 1.95.2 (e8653663e8840adaf45af01eab5c627a5af81807, 2024-11-07T11:07:22.054Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Xeon(R) CPU E5-1620 v2 @ 3.70GHz (8 x 3691)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: unavailable_off<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.95GB (5.27GB free)|
|Process Argv|--disable-extensions --crash-reporter-id 52968fa8-c462-4bbb-8bf1-2603e8c6576c|
|Screen Reader|no|
|VM|0%|
</details>

Extensions disabled
<!-- generated by issue reporter --> | bug,search | low | Critical |
2,644,253,009 | vscode | Terminal Command Truncation | For a long time, my Visual Studio Code setup, along with its extensions, functioned flawlessly and significantly improved my workflow. However, at some point, I began experiencing issues with certain extensions. Notably, when debugging C++ code with CodeLLDB or CMake Tools, the commands sent to the terminal would occasionally be truncated, preventing the debugger from launching properly.
Recently, I discovered that setting `"terminal.integrated.defaultProfile.osx": "bash"` resolved the issue, while switching back to zsh caused it to resurface. Later, after transferring some conda and jenv initialization settings from `.zshrc` to `.bash_profile`, I found that the problem began to intermittently occur in bash as well. Interestingly, the more configuration scripts I added, the more consistently the issue appeared.
Recognizing a pattern, I decided to investigate further by reviewing discussions and documentation in the project's GitHub repository.
## Historical Discussions
An issue similar to what I am currently experiencing was first raised back in 2017 as [#38137](https://github.com/microsoft/vscode/issues/38137). Although it did not provide a direct solution to my problem and was eventually closed because it could not be reproduced, I would like to acknowledge [@Tyriar](https://github.com/Tyriar) for his extensive contributions in that discussion, where he referenced several related issues that offered valuable insights.
One notable comment came from [@fabiospampinato](https://github.com/microsoft/vscode/issues/38137#issuecomment-352450960), who mentioned that the command truncation only happens the first time text is sent to the terminal, but works properly afterwards.
The character limit at which commands are truncated seems to vary across different operating systems. According to [#63613](https://github.com/microsoft/vscode/issues/63613), commands exceeding 1568 characters are truncated on Windows. Meanwhile, issues such as [#59135](https://github.com/microsoft/vscode/issues/59135), [#87183](https://github.com/microsoft/vscode/issues/87183), [#130736](https://github.com/microsoft/vscode/issues/130736), and [#134324](https://github.com/microsoft/vscode/issues/134324) indicate that on macOS, this limit is 1024 characters, which aligns with my observations.
Issues [#96973](https://github.com/microsoft/vscode/issues/96973) and [#61999](https://github.com/microsoft/vscode/issues/61999) provided effective testing methods for the "Run selected text in active terminal" functionality, and [#136587](https://github.com/microsoft/vscode/issues/136587#issuecomment-966510277) offered an approach for testing using `launch.json`, which significantly simplified my process of reproducing this issue.
I also found the [enable trace logging](https://github.com/microsoft/vscode/wiki/Terminal-Issues#enabling-trace-logging) guide in the project's wiki, which helped me expedite the process of identifying the cause of the problem.
## Steps to Reproduce
1. Modify `~/.zshrc`. The configurations for `oh-my-zsh`, `jenv`, and `conda` can introduce delays during zsh initialization. To simulate this, add a `sleep` command with timestamps for tracking:
```bash
gdate "+%Y-%m-%d %H:%M:%S.%3N"
sleep 3
gdate "+%Y-%m-%d %H:%M:%S.%3N"
```
2. Configure `settings.json`. Set the terminal settings to use zsh and disable environment inheritance:
```json
{
"terminal.external.osxExec": "Terminal.app",
"terminal.integrated.defaultProfile.osx": "zsh",
"terminal.integrated.inheritEnv": false
}
```
3. Set Log Level to Trace.
4. Verify that all Terminal instances are closed to ensure the next session undergoes fullย `.zshrc`ย initialization.
5. Create a text file containing a single line exceeding 1024 characters. Below is an example, where the space after `256` marks the 1024-character boundary:
```txt
001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050 051 052 053 054 055 056 057 058 059 060 061 062 063 064 065 066 067 068 069 070 071 072 073 074 075 076 077 078 079 080 081 082 083 084 085 086 087 088 089 090 091 092 093 094 095 096 097 098 099 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257
```
6. Select the entire text and use **Terminal: Run Selected Text in Active Terminal**. Observe that commands beyond 1024 characters, starting from `257`, are truncated.
7. Examine the logs for details on the truncation and potential causes.

## Cross-referencing the code and logs
In the [ptyhost.log](https://github.com/user-attachments/files/17680208/ptyhost.log), I observed the following two log entries. The line:
```log
2024-11-06 22:28:02.211 [trace] node-pty.IPty#write 251 252 253 254 255 256 257
```
is the last time the text I entered appeared in full. This log entry was generated by [`this._logService.trace('node-pty.IPty#write', object.data);`](https://github.com/microsoft/vscode/blob/024999d114e2d9dccd8472a03a17fb0f97c1349e/src/vs/platform/terminal/node/terminalProcess.ts#L522). At this point, the input remains intact. The data is then passed through [`this._ptyProcess!.write(object.data);`](https://github.com/microsoft/vscode/blob/024999d114e2d9dccd8472a03a17fb0f97c1349e/src/vs/platform/terminal/node/terminalProcess.ts#L526C4-L526C41) into the `node-pty` module, where it is further processed by [`this._socket.write(data);`](https://github.com/microsoft/node-pty/blob/8bdbd712f40acb939dbab1def8a1c3a815254f74/src/unixTerminal.ts#L178). This method facilitates communication with the C++ layer, which writes the data to the master side of the pseudo-terminal created via [`int ret = openpty(&master, &slave, nullptr, NULL, static_cast<winsize*>(&winp));`](https://github.com/microsoft/node-pty/blob/8bdbd712f40acb939dbab1def8a1c3a815254f74/src/unix/pty.cc#L478).
The line:
```log
2024-11-06 22:28:02.211 [trace] node-pty.IPty#onData 251 252 253 254 255 256
```
marks the first instance of text truncation. In the `node-pty` project, when the master side of the pseudo-terminal receives data, it triggers the [`public get onData(): IEvent<string> { return this._onData.event; }`](https://github.com/microsoft/node-pty/blob/8bdbd712f40acb939dbab1def8a1c3a815254f74/src/terminal.ts#L44) event, allowing subscribed listeners to capture the incoming data. In `vscode`, the `onData` event is subscribed to, and the captured data is logged via [`this._logService.trace('node-pty.IPty#onData', data);`](https://github.com/microsoft/vscode/blob/5cae08d2afa91f2703d1f5f3e4dd8ad358501424/src/vs/platform/terminal/node/terminalProcess.ts#L320).
Notably, I observed two occurrences of the `gdate` timestamp output from `.zshrc` in the `onData` logs:
```log
2024-11-06 22:28:02.153 [trace] node-pty.IPty#onData 2024-11-06 22:28:02.152
2024-11-06 22:28:05.175 [trace] node-pty.IPty#onData 2024-11-06 22:28:05.174
```
Additionally, after the shell finished loadingย `.zshrc`, there was another instance of truncated output:
```log
2024-11-06 22:28:05.229 [trace] node-pty.IPty#onData 2
2024-11-06 22:28:05.229 [trace] node-pty.IPty#onData 5
2024-11-06 22:28:05.230 [trace] node-pty.IPty#onData 5
2024-11-06 22:28:05.230 [trace] node-pty.IPty#onData
2024-11-06 22:28:05.230 [trace] node-pty.IPty#onData 2
2024-11-06 22:28:05.230 [trace] node-pty.IPty#onData 5
2024-11-06 22:28:05.230 [trace] node-pty.IPty#onData 6
2024-11-06 22:28:05.230 [trace] node-pty.IPty#onData
```
Since the pseudo-terminal's slave side in the `node-pty` project is configured with `ECHO` mode enabled ([`term->c_lflag = ICANON | ISIG | IEXTEN | ECHO | ECHOE | ECHOK | ECHOKE | ECHOCTL;`](https://github.com/microsoft/node-pty/blob/8bdbd712f40acb939dbab1def8a1c3a815254f74/src/unix/pty.cc#L321)), the echo back at `2024-11-06 22:28:02.211` corresponds to the data being written to the slave's buffer by the master. At this point, the shell on the slave side is still initializing `.zshrc` and has not consumed any data from the buffer.
Once the shell finishes initialization and begins consuming data from the slave's buffer, this data is echoed back to the slave and subsequently captured by the master, triggering the correspondingย `onData`ย events.
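The underlying mechanism — a pty whose bounded kernel buffer stops accepting input while the reader is blocked — can be demonstrated directly with Python's standard library. This is an illustrative sketch only (the exact capacity is OS-dependent, and this is not VS Code's actual code path):

```python
import fcntl
import os
import pty
import tty

def pty_write_capacity(limit=1 << 20):
    """Write to a pty master while nothing drains the slave side, and
    return how many bytes the kernel accepts before the buffer is full.
    This mirrors a shell that is still blocked in .zshrc startup."""
    master, slave = pty.openpty()
    tty.setraw(slave)  # raw mode: no echo, no canonical line editing
    # Non-blocking, so a full buffer raises BlockingIOError instead of hanging.
    flags = fcntl.fcntl(master, fcntl.F_GETFL)
    fcntl.fcntl(master, fcntl.F_SETFL, flags | os.O_NONBLOCK)
    total = 0
    chunk = b"x" * 256
    try:
        while total < limit:
            try:
                total += os.write(master, chunk)
            except BlockingIOError:
                break  # buffer full: further writes are deferred or lost
    finally:
        os.close(master)
        os.close(slave)
    return total

print("pty accepted", pty_write_capacity(), "bytes before blocking")
```

On a typical POSIX system this reports a capacity in the kilobyte range, far below the megabyte cap, showing that input queued while the shell is busy is strictly bounded.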
## Conclusion
In summary, on my macOS system, the issue of commands exceeding 1024 characters being truncated in the terminal stems from how Visual Studio Code interacts with the pseudo-terminal via `node-pty`. While shell initialization is blocked, VS Code continues to write data to the master side of the pseudo-terminal, filling the slave's input buffer. Once the buffer reaches its maximum capacity, any additional data is discarded. When the shell resumes, it can only read as much data as fits within the buffer's limit, resulting in truncated commands.
2,644,274,830 | PowerToys | Image resizer reducing to tiny 52 x 64 resolution randomly, when others resize correctly at 3840 x 2160 setting. | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Image Resizer
### Steps to reproduce
Typically when resizing bulk images, randomly an image will be reduced to a tiny thumbnail/icon size, not the size selected.
Selected resize operation on a folder full of images (and nothing else), with anywhere between 5 and 100 images; however, the number of images selected doesn't appear relevant.
I've only noticed this when resizing to 3840 x 2160 (I don't resize to less than this resolution, so I don't know if the issue exists at other sizes), which typically brings the images down to a resolution of 2160 x 2880 or 2880 x 2160 (portrait/landscape as applicable).
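The expected dimensions above follow from proportional fit-within-bounds scaling. A quick hypothetical sketch (the orientation-aware bound swap is my assumption about how Image Resizer matches the report's numbers, not its documented behavior):

```python
def fit_within(w, h, max_w, max_h):
    """Scale (w, h) down proportionally to fit inside (max_w, max_h),
    never enlarging. Bounds are swapped to match the image orientation,
    an assumption made so the sketch reproduces the reported sizes."""
    if (w < h) != (max_w < max_h):
        max_w, max_h = max_h, max_w
    scale = min(max_w / w, max_h / h, 1.0)
    return round(w * scale), round(h * scale)

print(fit_within(4000, 3000, 3840, 2160))  # landscape source -> (2880, 2160)
print(fit_within(3000, 4000, 3840, 2160))  # portrait source  -> (2160, 2880)
```

This matches the reported output resolutions, which makes the occasional 52 x 69 result all the more anomalous.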
Selected/Active settings are only:
Make pictures smaller but not larger
and
Overwrite files
Source images are always from photos taken on the same phone, with source image resolution of 4000 x 3000 / 3000 x 4000.
### โ๏ธ Expected Behavior
Images should all reduce to the selected size, as all the other images from the same source do.
### โ Actual Behavior
Typically when resizing bulk images, randomly an image will be reduced to a tiny thumbnail/icon size, not the size selected.
Most images will be resized as expected; however, sometimes a random image will resize right down to 69 x 52 or 52 x 69 pixels (if the source image was landscape/portrait).
Restoring that incorrectly resized file and carrying out the same resize on that one original image will result in it working correctly.
Likewise, restoring the entire folder back to original and repeating the same resize on the group of images originally selected might result in a completely different file being incorrectly resized, or even in no files being resized incorrectly and the operation performing as expected.
I have gone back through older folders of images from a month earlier to see if this had affected other previously resized photos, and noticed there were several tiny file sizes, indicating the same small thumbnail images. So the issue has been present for at least a month or more (and probably exists in previous versions of PowerToys/Image Resizer too).
Whilst looking at these file sizes as a quick reference to decide where the Image Resizer has worked correctly, I noticed that even when images are being resized in resolution (ending up at the desired image size), some file sizes remain similar to the original image source (around 4-6MB, while the bulk of the images resize down to around 1-2MB).
Repeating the resize function on these images won't reduce them at this point, since they're already below the original resolution, but for some reason those images are still larger. I can copy those source images back over, repeat on the same large images, and they will resize to similar 1-2MB files as expected. I'm not sure if this is similar to the above behaviour where the images end up tiny instead, but it seems to involve the same random selection of images when it occurs.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,644,326,127 | rust | Rustdoc: Very long doc compile times (95%+ taken by render_html) | I've recently tried to build rustdocs in one of our projects, and I've noticed some very odd 1h+ compile times for the docs (see the attached profile) with a majority of the time taken by the render_html item in the flamegraph. Also, during that time only a single core is pinned to 100%, even though multiple cores are available. (I don't know if rustdoc is multithreaded in that stage so if it is not that should be as expected)
My question now is: Is this expected behavior? If not, or if you need more information, I can try to create a minimum example that is able to reproduce this issue if that is something someone wants :)

[prof file](https://drive.google.com/file/d/1gE3GFRhPKDnTq4n3-6h0NeJBxtC9xVEE/view?usp=sharing)
(since the .mm_profdata file is too big I uploaded it to google drive instead :) ) | T-rustdoc,I-compiletime,S-needs-repro | low | Minor |
2,644,339,529 | rust | module-level `rustfmt::skip` fails with confusing error | https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=73768e89fe43b087349454c8cf2bf2c9
I would have expected / liked to be able to disable rustfmt for the entire module instead of needing to place the attribute on every item.
also, the error seems pretty confusing. | A-attributes,A-diagnostics,T-lang,T-compiler,C-bug,D-confusing,A-tool-attributes | low | Critical |
2,644,487,086 | godot | AnimatableBody2D animates only one property when sync_to_physics is enabled (using AnimationPlayer) | ### Tested versions
Godot v4.3.stable
### System information
macOS 14.6.1 - Vulkan (Forward+) - integrated Apple M2 Pro - Apple M2 Pro (10 Threads)
### Issue description
1. I am animating a `AnimatableBody2D` using an `AnimationPlayer`
2. The `AnimatableBody2D` is not moving unless I turn **off** `sync_to_physics`
3. I tried with bezier curve animation and without, no change
4. At some point it animated at least partially; now (with no conscious change) it doesn't animate at all
5. Since it does animate with `sync_to_physics` turned off I assume it's not an error on my side
If anybody has any ideas, please. I have no clue anymore ...
**Potentially related issues:**
- #76685
- #58269
### Steps to reproduce
I am not able to reproduce it from scratch, but I extracted the problem from my larger project:
1. Open and run the project (with visible collision shapes)
2. The shape should move, but doesn't
3. Turn off `sync_to_physics` on to animatable_body_2d and run scene
4. It now moves
### Minimal reproduction project (MRP)
[bug-animating-animatable-body-zip.zip](https://github.com/user-attachments/files/17681220/bug-animating-animatable-body-zip.zip)
| bug,topic:physics,needs testing,topic:animation,topic:2d | low | Critical |
2,644,489,801 | vscode | Dialog "Copilot wants to sign-in" should not show | 1. Have GH Copilot installed.
2. Sign out of all GH accounts in VS Code
3. Notice the Chat Welcome View asking you to Sign In to GH. Click it
4. A dialog appears asking whether to allow GitHub Copilot Chat to sign in using GitHub

This dialog should not show, because the user explicitly clicked an action to sign in.
fyi @bpasero | under-discussion,authentication | low | Minor |
2,644,502,370 | tauri | [bug] Xcode Build Failed (bun: command not found) | ### Describe the bug
I have a working Tauri + SvelteKit example project that was set up using the official Tauri docs, with bun as the package manager. When running `tauri ios dev`, the project runs properly in the iOS simulator. Running `tauri ios dev --open` opens Xcode successfully, but when starting the project from within Xcode I get `Command PhaseScriptExecution failed with a nonzero exit code`. Image of the error below:
<img width="1716" alt="Screen Shot 2024-11-08 at 9 18 35 AM" src="https://github.com/user-attachments/assets/73b6427b-dc79-4756-92c6-7eb535b1b35a">
I can confirm that bun is properly installed on my computer and within my PATH; the issue seems to be that Xcode cannot find bun when the build script runs. I tried configuring Xcode to use my root working directory, still with no luck. I can't seem to edit this generated sh file to include the path either.
### Reproduction
Create a new Tauri + SvelteKit project using bun, following this documentation: https://v2.tauri.app/start/create-project/
On macOS, try opening the Xcode project and then building it.
### Expected behavior
Xcode should successfully build the project and open the simulator, just as if it had been run from the `tauri ios dev` command.
### Full `tauri info` output
```text
[✔] Environment
    - OS: Mac OS 12.6.4 arm64 (X64)
    ✔ Xcode Command Line Tools: installed
    ✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
    ✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
    ✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
    ✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 22.11.0
- yarn: 1.22.17
- npm: 10.9.0
- bun: 1.1.0
- deno: deno 2.0.4
[-] Packages
  - tauri 🦀: 2.0.6
  - tauri-build 🦀: 2.0.2
  - wry 🦀: 0.46.3
  - tao 🦀: 0.30.5
  - @tauri-apps/api: 2.0.3
  - @tauri-apps/cli: 2.0.5
[-] Plugins
  - tauri-plugin-shell 🦀: 2.0.2
  - @tauri-apps/plugin-shell: 2.0.1
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../build
- devUrl: http://localhost:1420/
- framework: Svelte
- bundler: Vite
[-] iOS
- Developer Teams: None
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,644,515,326 | react-native | [Android]ย Modal - statusBarTranslucent behaves differently, modal content is pushed up when keyboard open even if set to true | ### Description
We are using the `Modal` component in our app. We usually have its content (dialog) centered, but in one case where a text input is included, we have the content pushed up by some pixels to prevent the keyboard overlapping it, so it always displays in the top half of the screen.
We want it to behave the same on Android and iOS. By setting `statusBarTranslucent` to `true`, we were able to achieve consistent behavior on both platforms in previous React Native versions (last one 0.74), because with this setting the modal content didn't react to keyboard visibility and was able to preserve its position. After upgrading to 0.76.1, the same code started to behave differently. The modal content is now pushed up automatically when the keyboard is open, even if `statusBarTranslucent` is set to `true` on Android. We want to keep it at the same position regardless of keyboard visibility.
### Steps to reproduce
```tsx
function App(): React.JSX.Element {
const [showModal, setShowModal] = useState(false)
return (
<SafeAreaView style={styles.container}>
<Button title={"This is React Native 0.76.1"} onPress={() =>setShowModal(true)} />
<Modal visible={showModal} transparent statusBarTranslucent={true}>
<View style={{flexGrow: 1, justifyContent: "center", alignItems: "center", flexDirection: 'column'}}>
<View style={{ padding: 20, backgroundColor: "white"}}>
<TextInput placeholder={"Click here to write"}/>
<Button title={"Close"} onPress={() => setShowModal(false)}/>
</View>
</View>
</Modal>
</SafeAreaView>
)
}
const styles = StyleSheet.create({
container: {
backgroundColor: '#ecf0f1',
flex: 1,
justifyContent: 'center',
padding: 8,
},
});
export default App;
```
### React Native Version
0.76.1
### Affected Platforms
Runtime - Android
### Output of `npx react-native info`
```text
System:
OS: macOS 15.1
CPU: (10) arm64 Apple M2 Pro
Memory: 119.13 MB / 32.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.16.0
path: ~/.nvm/versions/node/v20.16.0/bin/node
Yarn:
version: 1.22.19
path: /opt/homebrew/bin/yarn
npm:
version: 10.8.1
path: ~/.nvm/versions/node/v20.16.0/bin/npm
Watchman:
version: 2024.09.16.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods: Not Found
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.1
- iOS 18.1
- macOS 15.1
- tvOS 18.1
- visionOS 2.1
- watchOS 11.1
Android SDK: Not Found
IDEs:
Android Studio: 2022.2 AI-222.4459.24.2221.10121639
Xcode:
version: 16.1/16B40
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.10
path: /usr/bin/javac
Ruby:
version: 3.3.1
path: /Users/olivertylsar/.rvm/rubies/ruby-3.3.1/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.0
wanted: 15.0.0
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.1
wanted: 0.76.1
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: Not found
newArchEnabled: false
```
### Stacktrace or Logs
```text
There is no crash or failure.
```
### Reproducer
https://github.com/olivertylsar/rn-modal-android
### Screenshots and Videos
RN 0.74.5 with desired and expected behavior:
https://github.com/user-attachments/assets/3020b0af-6220-4a86-a77f-98dada50828c
RN 0.76.1 with actual behavior:
https://github.com/user-attachments/assets/79bc4935-c04c-4202-9d23-a0b0fe21008a
| Component: Modal,Component: StatusBar,API: Keyboard | low | Critical |
2,644,539,147 | vscode | Issue reporter fails without telling the user why if content is too large | Repro:
1. Fill issue reporter with 100k of context
2. Try create an issue on GitHub.
I'm also seeing this FYI

Trying to do an internal copilot issue report which adds a bunch of context. | bug,issue-reporter | low | Minor |
2,644,578,115 | flutter | Not correctly rendering SVG with pattern | ### What package does this bug report belong to?
flutter_svg
### What target platforms are you seeing this bug on?
Windows
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
# Generated by pub
# See https://dart.dev/tools/pub/glossary#lockfile
packages:
args:
dependency: transitive
description:
name: args
sha256: bf9f5caeea8d8fe6721a9c358dd8a5c1947b27f1cfaa18b39c301273594919e6
url: "https://pub.dev"
source: hosted
version: "2.6.0"
async:
dependency: transitive
description:
name: async
sha256: "947bfcf187f74dbc5e146c9eb9c0f10c9f8b30743e341481c1e2ed3ecc18c20c"
url: "https://pub.dev"
source: hosted
version: "2.11.0"
boolean_selector:
dependency: transitive
description:
name: boolean_selector
sha256: "6cfb5af12253eaf2b368f07bacc5a80d1301a071c73360d746b7f2e32d762c66"
url: "https://pub.dev"
source: hosted
version: "2.1.1"
characters:
dependency: transitive
description:
name: characters
sha256: "04a925763edad70e8443c99234dc3328f442e811f1d8fd1a72f1c8ad0f69a605"
url: "https://pub.dev"
source: hosted
version: "1.3.0"
clock:
dependency: transitive
description:
name: clock
sha256: cb6d7f03e1de671e34607e909a7213e31d7752be4fb66a86d29fe1eb14bfb5cf
url: "https://pub.dev"
source: hosted
version: "1.1.1"
collection:
dependency: transitive
description:
name: collection
sha256: ee67cb0715911d28db6bf4af1026078bd6f0128b07a5f66fb2ed94ec6783c09a
url: "https://pub.dev"
source: hosted
version: "1.18.0"
cupertino_icons:
dependency: "direct main"
description:
name: cupertino_icons
sha256: ba631d1c7f7bef6b729a622b7b752645a2d076dba9976925b8f25725a30e1ee6
url: "https://pub.dev"
source: hosted
version: "1.0.8"
fake_async:
dependency: transitive
description:
name: fake_async
sha256: "511392330127add0b769b75a987850d136345d9227c6b94c96a04cf4a391bf78"
url: "https://pub.dev"
source: hosted
version: "1.3.1"
flutter:
dependency: "direct main"
description: flutter
source: sdk
version: "0.0.0"
flutter_lints:
dependency: "direct dev"
description:
name: flutter_lints
sha256: "3f41d009ba7172d5ff9be5f6e6e6abb4300e263aab8866d2a0842ed2a70f8f0c"
url: "https://pub.dev"
source: hosted
version: "4.0.0"
flutter_svg:
dependency: "direct main"
description:
name: flutter_svg
sha256: "578bd8c508144fdaffd4f77b8ef2d8c523602275cd697cc3db284dbd762ef4ce"
url: "https://pub.dev"
source: hosted
version: "2.0.14"
flutter_test:
dependency: "direct dev"
description: flutter
source: sdk
version: "0.0.0"
http:
dependency: transitive
description:
name: http
sha256: b9c29a161230ee03d3ccf545097fccd9b87a5264228c5d348202e0f0c28f9010
url: "https://pub.dev"
source: hosted
version: "1.2.2"
http_parser:
dependency: transitive
description:
name: http_parser
sha256: "2aa08ce0341cc9b354a498388e30986515406668dbcc4f7c950c3e715496693b"
url: "https://pub.dev"
source: hosted
version: "4.0.2"
leak_tracker:
dependency: transitive
description:
name: leak_tracker
sha256: "3f87a60e8c63aecc975dda1ceedbc8f24de75f09e4856ea27daf8958f2f0ce05"
url: "https://pub.dev"
source: hosted
version: "10.0.5"
leak_tracker_flutter_testing:
dependency: transitive
description:
name: leak_tracker_flutter_testing
sha256: "932549fb305594d82d7183ecd9fa93463e9914e1b67cacc34bc40906594a1806"
url: "https://pub.dev"
source: hosted
version: "3.0.5"
leak_tracker_testing:
dependency: transitive
description:
name: leak_tracker_testing
sha256: "6ba465d5d76e67ddf503e1161d1f4a6bc42306f9d66ca1e8f079a47290fb06d3"
url: "https://pub.dev"
source: hosted
version: "3.0.1"
lints:
dependency: transitive
description:
name: lints
sha256: "976c774dd944a42e83e2467f4cc670daef7eed6295b10b36ae8c85bcbf828235"
url: "https://pub.dev"
source: hosted
version: "4.0.0"
matcher:
dependency: transitive
description:
name: matcher
sha256: d2323aa2060500f906aa31a895b4030b6da3ebdcc5619d14ce1aada65cd161cb
url: "https://pub.dev"
source: hosted
version: "0.12.16+1"
material_color_utilities:
dependency: transitive
description:
name: material_color_utilities
sha256: f7142bb1154231d7ea5f96bc7bde4bda2a0945d2806bb11670e30b850d56bdec
url: "https://pub.dev"
source: hosted
version: "0.11.1"
meta:
dependency: transitive
description:
name: meta
sha256: bdb68674043280c3428e9ec998512fb681678676b3c54e773629ffe74419f8c7
url: "https://pub.dev"
source: hosted
version: "1.15.0"
path:
dependency: transitive
description:
name: path
sha256: "087ce49c3f0dc39180befefc60fdb4acd8f8620e5682fe2476afd0b3688bb4af"
url: "https://pub.dev"
source: hosted
version: "1.9.0"
path_parsing:
dependency: transitive
description:
name: path_parsing
sha256: "883402936929eac138ee0a45da5b0f2c80f89913e6dc3bf77eb65b84b409c6ca"
url: "https://pub.dev"
source: hosted
version: "1.1.0"
petitparser:
dependency: transitive
description:
name: petitparser
sha256: c15605cd28af66339f8eb6fbe0e541bfe2d1b72d5825efc6598f3e0a31b9ad27
url: "https://pub.dev"
source: hosted
version: "6.0.2"
sky_engine:
dependency: transitive
description: flutter
source: sdk
version: "0.0.99"
source_span:
dependency: transitive
description:
name: source_span
sha256: "53e943d4206a5e30df338fd4c6e7a077e02254531b138a15aec3bd143c1a8b3c"
url: "https://pub.dev"
source: hosted
version: "1.10.0"
stack_trace:
dependency: transitive
description:
name: stack_trace
sha256: "73713990125a6d93122541237550ee3352a2d84baad52d375a4cad2eb9b7ce0b"
url: "https://pub.dev"
source: hosted
version: "1.11.1"
stream_channel:
dependency: transitive
description:
name: stream_channel
sha256: ba2aa5d8cc609d96bbb2899c28934f9e1af5cddbd60a827822ea467161eb54e7
url: "https://pub.dev"
source: hosted
version: "2.1.2"
string_scanner:
dependency: transitive
description:
name: string_scanner
sha256: "556692adab6cfa87322a115640c11f13cb77b3f076ddcc5d6ae3c20242bedcde"
url: "https://pub.dev"
source: hosted
version: "1.2.0"
term_glyph:
dependency: transitive
description:
name: term_glyph
sha256: a29248a84fbb7c79282b40b8c72a1209db169a2e0542bce341da992fe1bc7e84
url: "https://pub.dev"
source: hosted
version: "1.2.1"
test_api:
dependency: transitive
description:
name: test_api
sha256: "5b8a98dafc4d5c4c9c72d8b31ab2b23fc13422348d2997120294d3bac86b4ddb"
url: "https://pub.dev"
source: hosted
version: "0.7.2"
typed_data:
dependency: transitive
description:
name: typed_data
sha256: f9049c039ebfeb4cf7a7104a675823cd72dba8297f264b6637062516699fa006
url: "https://pub.dev"
source: hosted
version: "1.4.0"
vector_graphics:
dependency: transitive
description:
name: vector_graphics
sha256: "773c9522d66d523e1c7b25dfb95cc91c26a1e17b107039cfe147285e92de7878"
url: "https://pub.dev"
source: hosted
version: "1.1.14"
vector_graphics_codec:
dependency: transitive
description:
name: vector_graphics_codec
sha256: "2430b973a4ca3c4dbc9999b62b8c719a160100dcbae5c819bae0cacce32c9cdb"
url: "https://pub.dev"
source: hosted
version: "1.1.12"
vector_graphics_compiler:
dependency: transitive
description:
name: vector_graphics_compiler
sha256: "26d520739b7c6b5d2a2b3274427874a8390831fd4cd5bb8cfbd7d913477d3a2e"
url: "https://pub.dev"
source: hosted
version: "1.1.14"
vector_math:
dependency: transitive
description:
name: vector_math
sha256: "80b3257d1492ce4d091729e3a67a60407d227c27241d6927be0130c98e741803"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
vm_service:
dependency: transitive
description:
name: vm_service
sha256: "5c5f338a667b4c644744b661f309fb8080bb94b18a7e91ef1dbd343bed00ed6d"
url: "https://pub.dev"
source: hosted
version: "14.2.5"
web:
dependency: transitive
description:
name: web
sha256: cd3543bd5798f6ad290ea73d210f423502e71900302dde696f8bff84bf89a1cb
url: "https://pub.dev"
source: hosted
version: "1.1.0"
xml:
dependency: transitive
description:
name: xml
sha256: b015a8ad1c488f66851d762d3090a21c600e479dc75e68328c52774040cf9226
url: "https://pub.dev"
source: hosted
version: "6.5.0"
sdks:
dart: ">=3.5.4 <4.0.0"
flutter: ">=3.22.0"
```
</details>
### Steps to reproduce
1. Create an App using flutter svg
2. Try to render an svg with a pattern with a rect inside
### Expected results
Display it correctly
### Actual results
# In Flutter

# In Browser

### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter_svg/flutter_svg.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: Scaffold(
appBar: AppBar(
backgroundColor: Theme.of(context).colorScheme.inversePrimary,
title: const Text("Flutter Demo Page"),
),
body: Center(
child: SvgPicture.string(
"""
<svg width="200" height="100" xmlns="http://www.w3.org/2000/svg">
<defs>
<pattern id="vertical-stripes" width="27.5" height="20" patternUnits="userSpaceOnUse">
<rect width="5" height="20" fill="#0099cc" opacity="1"/>
</pattern>
</defs>
<ellipse cx="100" cy="50" rx="90" ry="45" stroke="#00A3D8" stroke-width="4" fill="url(#vertical-stripes)" />
</svg>
""",
),
),
),
);
}
}
```
</details>
### Screenshots or Videos
<details open>
<summary>Screenshots / Video demonstration</summary>
# In Flutter

# In Browser

</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.4, on Microsoft Windows [Version 10.0.22631.4317], locale de-DE)
• Flutter version 3.24.4 on channel stable at C:\Users\tortoise\AppData\Roaming\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 603104015d (2 weeks ago), 2024-10-24 08:01:25 -0700
• Engine revision db49896cf2
• Dart version 3.5.4
• DevTools version 2.37.3
[✓] Windows Version (Installed version of Windows is version 10 or higher)
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at C:\Users\tortoise\AppData\Local\Android\sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
• All Android licenses accepted.
[✓] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[✓] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.11.5)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.11.35327.3
• Windows 10 SDK version 10.0.22621.0
[✓] Android Studio (version 2024.1)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
[✓] VS Code (version 1.95.1)
• VS Code at C:\Users\tortoise\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.100.0
[✓] Connected device (3 available)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4317]
• Chrome (web) • chrome • web-javascript • Google Chrome 130.0.6723.92
• Edge (web) • edge • web-javascript • Microsoft Edge 130.0.2849.68
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| package,has reproducible steps,team-engine,found in release: 3.24,found in release: 3.27,p: flutter_svg | low | Critical |
2,644,595,424 | PowerToys | PowerRename - Remove Highlighting for Deselected Files and Folders | ### Description of the new feature / enhancement
When a file or folder is unchecked in PowerRename, remove its highlighting and grey out its proposed name change in the "Renamed" column.
### Scenario when this would be used?
When a file name matches a search query in PowerRename, the corresponding row is highlighted and the proposed name change is shown in bright blue. This highlighting is often very helpful, but it can fail when files are deselected in the checkbox column. Deselected files are still highlighted as if their names will be changed, making it difficult to review the proposed changes.
Here's an example:
> 
It appears at first glance that both files will be renamed, making it harder to confirm that the proposed changes are correct. This often causes incorrect name changes to appear in the "Renamed" column, so every highlighted line needs to be double-checked. This is a minor issue in my example photo, but it's a significant problem when working with a larger number of files. If the deselected files were greyed out, the "Renamed" column would only show the changes that will _actually_ be applied.
I'd appreciate it greatly if this were implemented. Thank you!
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,644,716,917 | react | [Compiler Bug]: useMemo does not provide stable results across renders for specific functions when compiler enabled | ### What kind of issue is this?
- [X] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [ ] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhAejQAgMIBsCWcA1vgHYDmmALgBYKYK4IC2CpVmA7vrrpmXBgs2HWvTgFimONHYAdUhkwBDUgBNMAMwjxxEZgAcIpEZiEBaIeoQwFSgMoD6AN2W4oCAIwr1mV+4QAJhUhTCgwMkoAA3CEAFkWCCiufFpMOyxVBkMqAE9MNQQDNkLSOHzlGBhlXIAaajp8sBpoXA1SCA44GlVyBAA6DMwAdVSaHH0DHhsGUmUAIyY1ev8PYO7ehDBB0gUEAA8jGA5CzWUoXA5NKDKqfGNMONyAQQMDAAoASkxgBUxMJRgKgLJgqQQQMBgMwlGxgP7SYxAvxuDzeAC8YTA8US7y+mDRAD5HspaP1qup9F96gBtAC6nwA3Ap4UoOhwgSD6MpwZDodYYFDOFgZIZpjB0qR-kp5lAOPhBWNoF1JmKmZKAcKspwdEQQrINOFIhKpcLERxqasgrUAPq0-GYhD2YFUBC476E4mk8lqSnfADUmE8n0+8JkpCRluCGNiCWYEDd+KJcRJNDJqh9zDxAc8NPpapZwskJAoKV40MsMNs6rDSOpAiErHY9SxVAAkmUGyI7dGsU6Sa6AAyM5nqoRUWCSgA8anwzkwxjwhCIaOAeI9LfbgmE7He+ET-Ew2c+AF8CRIl8hgPXt1RjysUV5L5bPHfkQFAk+H4Fj5O0DPnASarHiAx5AA
### Repro steps
1. Write a `useMemo` function that uses Math.random() AND some other value or function, for example:
- `const val = useMemo(() => Math.random() + 0, []);`
- `const val = useMemo(() => Math.floor(Math.random()), []);`
2. Cause the component to re-render
3. The resulting value will be re-computed on re-render
In the associated repro link, clicking the component increments a counter, which triggers a re-render. Without the compiler both memoized values are stable; with the compiler, only the first is.
### How often does this bug happen?
Every time
### What version of React are you using?
18.3.1
### What version of React Compiler are you using?
19.0.0-beta-8a03594-20241020 | Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler | low | Critical |
2,644,732,885 | tauri | [feat] build without signing cli flag | ### Describe the problem
Myself, and especially contributors to a Tauri-based app, do not want/have the signing keys when testing. However, when the various environment variables are not set, the build just fails.
There is no quick way to build a non-signed bundle of an app.
### Describe the solution you'd like
```bash
tauri build --no-sign
```
### Alternatives considered
Currently, the workaround is to have something like a `tauri.conf.dev.json` file with values like:
```json
"bundle": { "windows": null }
```
Then, use `tauri build --config src-tauri/tauri.conf.dev.json`.
I just find it odd there is no flag in the cli to disable signing.
### Additional context
It is helpful to go through the whole workflow of installing, manually opening, checking the closing and re-opening behaviour, checking uninstalling (if all expected files are removed). Also, with things like the updater plugin, checking the automated update/restart workflow. | type: feature request | low | Minor |
2,644,755,126 | transformers | Neftune computation is probably wrong with packed training | ### System Info
v4.46.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Neftune computation is probably wrong with packed training because the scaling factor is `alpha/sqrt(length*d)`. The length is the packed length there:
https://github.com/huggingface/transformers/blob/a06a0d12636756352494b99b5b264ac9955bc735/src/transformers/trainer_utils.py#L126-L149
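A minimal sketch of what per-sequence scaling could look like (the function name, the NumPy implementation, and the flat `(total_tokens, d)` packing layout are illustrative assumptions, not the actual trainer hook):

```python
import numpy as np

def packed_neftune_noise(embeddings, lengths, alpha=5.0, rng=None):
    """Add NEFTune-style noise to a packed batch, scaling per sub-sequence.

    embeddings: (total_tokens, d) array of packed token embeddings
    lengths:    lengths of the individual sequences (sum == total_tokens)
    alpha:      NEFTune noise scale

    Instead of one global mag_norm = alpha / sqrt(packed_len * d) over the
    whole packed row, each sub-sequence i gets alpha / sqrt(len_i * d).
    """
    rng = rng or np.random.default_rng(0)
    d = embeddings.shape[-1]
    noisy = embeddings.copy()
    offset = 0
    for length in lengths:
        mag_norm = alpha / np.sqrt(length * d)
        noisy[offset:offset + length] += rng.uniform(
            -mag_norm, mag_norm, size=(length, d)
        )
        offset += length
    return noisy
```

Each sub-sequence is then perturbed within a bound derived from its own length rather than from the packed length.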
### Expected behavior
Should take into account the size of each sentence during computation. | trainer,Feature request,bug | low | Major |
2,644,798,530 | rust | Tracking Issue for sparc_target_feature | <!--
NOTE: For library features, please use the "Library Tracking Issue" template instead.
Thank you for creating a tracking issue! ๐ Tracking issues are for tracking a
feature from implementation to stabilisation. Make sure to include the relevant
RFC for the feature if it has one. Otherwise provide a short summary of the
feature and link any relevant PRs or issues, and remove any sections that are
not relevant to the feature.
Remember to add team labels to the tracking issue.
For a language team feature, this would e.g., be `T-lang`.
Such a feature should also be labeled with e.g., `F-my_feature`.
This label is used to associate issues (e.g., bugs and design questions) to the feature.
-->
This is a tracking issue for SPARC architecture specific part of https://github.com/rust-lang/rust/issues/44839 (RFC 2045 (rust-lang/rfcs#2045)).
The feature gate for the issue is `#![feature(sparc_target_feature)]`.
### About tracking issues
Tracking issues are used to record the overall progress of implementation.
They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions.
A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature.
Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
Discussion comments will get marked as off-topic or deleted.
Repeated discussions on the tracking issue may lead to the tracking issue getting locked.
### Steps
<!--
Include each step required to complete the feature. Typically this is a PR
implementing a feature, followed by a PR that stabilises the feature. However
for larger features an implementation could be broken up into multiple PRs.
-->
- [x] Implementation https://github.com/rust-lang/rust/pull/132552
- [ ] Stabilization PR ([see instructions on rustc-dev-guide][stabilization-guide])
[stabilization-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
[nightly-style-procedure]: https://github.com/rust-lang/style-team/blob/main/nightly-style-procedure.md
[Style Guide]: https://github.com/rust-lang/rust/tree/master/src/doc/style-guide
### Unresolved Questions
<!--
Include any open questions that need to be answered before the feature can be
stabilised.
-->
- [ ] Behavior of `v8plus` target feature is LLVM version dependent until LLVM 20 becomes minimal LLVM version (https://github.com/rust-lang/rust/issues/132585#issuecomment-2453926257)
- [ ] Enabling `v8plus` target feature without `v9` target feature should be rejected. (AFAIK, there is no existing mechanism to represent this.)
https://github.com/rust-lang/rust/blob/c059eb77504b638bc53b486ef7151cedb7a7ef03/tests/ui/abi/sparcv8plus.rs#L37-L39
Or, enable `v9` target feature automatically when `v8plus` target feature enabled.
### Implementation history
<!--
Include a list of all the PRs that were involved in implementing the feature.
-->
- `v9`, `v8plus`, `leoncasa`: https://github.com/rust-lang/rust/pull/132552
---
@rustbot label +O-SPARC +A-target-feature | O-SPARC,C-tracking-issue,A-target-feature | low | Critical |
2,644,818,649 | flutter | Move Python formatting to a supported formatter | We currently format all Python code using `yapf`. This is done via the `tools/yapf.sh` and `tools/yapf.bat` wrapper scripts.
Unfortunately, `yapf` is abandoned and only works with Python versions up to and including Python 3.10. As OS and local Python installations are upgraded, this will become more and more problematic. On some Linux distributions, side-by-side Python installations are unsupported, for example.
## Workarounds
Those on macOS can work around this by installing [homebrew](https://brew.sh) then:
```sh
brew install python@3.10
```
## Related
* https://github.com/flutter/engine/pull/55905 | team,engine,P2,c: tech-debt,team-engine,triaged-engine | low | Major |
2,644,821,936 | PowerToys | Key Remap NOT WORKING | ### Microsoft PowerToys version
0.86.0
### Installation method
WinGet
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
Remap a shortcut
### ✔️ Expected Behavior
Remapped shortcut WORKS like before.
### ❌ Actual Behavior
None of the remapped shortcuts are working.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,644,890,573 | flutter | [Impeller] libImpeller: Make it easier to trap on OpenGL errors. | @lyceel was running into OpenGL errors in their embedder and was hoping to find where the error happened.
Impeller has this functionality built in already in [`IMPELLER_DEBUG-unopt`](https://github.com/flutter/engine/blob/6ef97e92f0bd147d91ebf7f23ae536572a999294/impeller/renderer/backend/gles/proc_table_gles.h#L75) modes. But this is undocumented and a poor experience for users of prebuilts.
Theoretically, the embedder can do something similar but the trampoline setup is tedious (each proc needs one) and error prone (the embedder doesn't know which proc to expect).
As a debugging option in the interop toolkit, Impeller could give the caller an option to react to OpenGL errors. This is expected to come in increasingly handy as Impeller & embedders try not to trample each other's contexts.
Implementation Notes: We should not modify `proc_table_gles.h` or add additional flags to the HAL. Checks of the flag values will just cause more overhead and make the proc table more complicated. Instead, the interop toolkit can just chain trampolines together to achieve this functionality. That way, when checks are not necessary, proc table dispatch has no overhead. | P3,e: impeller,team-engine,triaged-engine,e: libimpeller | low | Critical |
2,644,905,226 | react | [DevTools Bug] getCommitTree(): Invalid commit "7" for root "1". There are only "7" commits. | ### Website or app
private
### Repro steps
Profile re-renders, then click on individual re-renders. I clicked on the last re-render and it crashed instead of displaying info.
### How often does this bug happen?
Only once
### DevTools package (automated)
react-devtools-extensions
### DevTools version (automated)
6.0.0-d66fa02a30
### Error message (automated)
getCommitTree(): Invalid commit "7" for root "1". There are only "7" commits.
### Error call stack (automated)
```text
ve/<@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:1159236
CommitFlamegraphAutoSizer@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:1404914
renderWithHooks@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:52244
updateFunctionComponent@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:81718
beginWork@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:95937
performUnitOfWork@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:154622
workLoopSync@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:154498
renderRootSync@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:154249
performWorkOnRoot@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:149824
performSyncWorkOnRoot@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:164884
flushSyncWorkAcrossRoots_impl@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:163205
processRootScheduleInMicrotask@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:163662
1519/ensureRootIsScheduled/<@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:162805
```
### Error component stack (automated)
```text
CommitFlamegraphAutoSizer@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:1404715
div@unknown:0:0
div@unknown:0:0
div@unknown:0:0
SettingsModalContextController@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:1297733
wl<@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:1536567
fa@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:1315530
div@unknown:0:0
div@unknown:0:0
ThemeProvider@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:1318230
portaledContent/<@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:1318420
div@unknown:0:0
div@unknown:0:0
div@unknown:0:0
ThemeProvider@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:1318230
TimelineContextController@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:1395885
ProfilerContextController@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:1387636
TreeContextController@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:1209953
SettingsContextController@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:1238133
ModalDialogContextController@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:1375334
DevTools_DevTools@moz-extension://beb769c0-41a5-4f80-ba39-12fab791919f/build/main.js:1:1544294
```
### GitHub query string (automated)
```text
https://api.github.com/search/issues?q=getCommitTree(): Invalid commit for root . There are only commits. in:title is:issue is:open is:public label:"Component: Developer Tools" repo:facebook/react
```
| Type: Bug,Status: Unconfirmed,Component: Developer Tools | medium | Critical |
2,644,916,510 | go | proposal: net/http/cookiejar: add Jar.Clear | ### Proposal Details
Currently, since a jar's entries are not exposed, clearing all cookies in a client's `Jar` in a thread-safe manner would require (from my understanding):
- Wrapping all calls to the client in a read lock, then write locking as the client's `Jar` is replaced with a new empty jar.
- Keeping track of all URLs accessed in a thread-safe manner, getting the cookies for each of them, and overwriting each of those cookies with expired ones.
- Creating a Jar wrapping type storing a pointer to the jar that has an additional lock on each of the interface functions, and also a third Clear function that sets the pointer to the new jar. This is probably the most reasonable option.
Neither of those options are especially convenient, so I propose cookiejar adds a `Clear()` method that I will open a PR for. I am *not* proposing this is added to `net/http`'s `CookieJar` as that would be a breaking change. | Proposal | low | Major |
2,644,930,406 | deno | deno publish using version field from package.json | Hi,
I have a package (https://jsr.io/@seriousme/opifex) that I publish on JSR, denoland, and on NPM.
Since NPM requires the package.json and NodeJS requires JS instead of TS, the exports in my package.json differ from the exports in my deno.json. I also do not want deno tools to comment/fail on my generated code in the /dist folder. So as I understand it, I need both the package.json and the deno.json.
When publishing a new version I now need to update the version in both package.json as well as deno.json.
I could write my own script to update deno.json on an npm version update but it would be nice if `deno publish` could do without `version` in deno.json if it finds a version in package.json, or if there would be some other easy way to keep them both in sync.
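For illustration, such a sync script could be as small as the following sketch (Python here just as an example; the file names, the `version` key, and the two-space JSON formatting are assumptions):

```python
import json

def sync_version(package_json="package.json", deno_json="deno.json"):
    """Copy the version field from package.json into deno.json."""
    with open(package_json) as f:
        version = json.load(f)["version"]
    with open(deno_json) as f:
        config = json.load(f)
    config["version"] = version
    with open(deno_json, "w") as f:
        json.dump(config, f, indent=2)
        f.write("\n")
    return version
```

Hooking something like this into the npm `version` lifecycle script would keep the two files from drifting, but native support in `deno publish` would still be nicer.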
Kind regards,
Hans
| suggestion,publish | low | Major |
2,645,046,310 | rust | improved help message for rustdoc::broken_intra_doc_links | Currently, the only suggestion rustdoc gives is to simply escape `[` and `]` with backslashes, or if there is a similarly-named item, to link to that instead.
There are a few other common errors that it could catch, though:
* [ ] code snippets (like `arr[idx+3]`) should be surrounded by backticks instead of escaping each bracket individually
* [ ] if there is a similarly-named [link reference definition](https://spec.commonmark.org/0.31.2/#link-reference-definitions), it should suggest referencing that (probable typo)
* [ ] if the type that is being linked to is outside the current documentation bundle (e.g. linking to a type in an alternative library for the purpose of comparison, or linking to a non-rust type), then it should recommend adding a link definition item.
inspired by discussion on #132748 | T-rustdoc,C-enhancement,A-intra-doc-links | low | Critical |
2,645,047,210 | kubernetes | NUMA-aware memory manager and Topology Manager policy of "restricted" results in UnexpectedAdmissionError | ### What happened?
While trying to reproduce https://github.com/kubernetes/kubernetes/issues/128669 I spun up a VM to test 1.31.2 via minikube and I think I might have uncovered a new and different bug.
I had 8GB of allocatable memory on each of two NUMA nodes. Key kubelet args were:
--cpu-manager-policy=static --kube-reserved=memory=1Gi --memory-manager-policy=Static --reserved-cpus=0,4 --reserved-memory=0:memory=1Gi;1:memory=1Gi --system-reserved=memory=1Gi --topology-manager-policy=restricted
I was able to create the first pod with 1cpu and 256Mi of memory, but when I tried to create the second pod with 1cpu and 9Gi of memory (to force it to allocate memory from both NUMA nodes) it errored out unexpectedly:
cfriesen@debian:~$ minikube kubectl -- get pod kube-mgrr-2 -o yaml
apiVersion: v1
kind: Pod
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"kube-mgrr-2","namespace":"default"},"spec":{"containers":[{"image":"gcr.io/kubernetes-e2e-test-images/resource-consumer:1.4","imagePullPolicy":"IfNotPresent","name":"kube-mgrr-2","resources":{"limits":{"cpu":1,"memory":"9Gi"}}}]}}
creationTimestamp: "2024-11-08T19:34:55Z"
name: kube-mgrr-2
namespace: default
resourceVersion: "1028"
uid: 2e769357-b700-49fa-96dc-2416a0379cb9
spec:
containers:
- image: gcr.io/kubernetes-e2e-test-images/resource-consumer:1.4
imagePullPolicy: IfNotPresent
name: kube-mgrr-2
resources:
limits:
cpu: "1"
memory: 9Gi
requests:
cpu: "1"
memory: 9Gi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-27rcb
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: minikube
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: kube-api-access-27rcb
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
message: 'Pod was rejected: Allocate failed due to [memorymanager] failed to find
NUMA nodes to extend the current topology hint, which is unexpected'
phase: Failed
reason: UnexpectedAdmissionError
startTime: "2024-11-08T19:34:55Z"
Kubelet logs were:
Nov 08 19:34:55 minikube kubelet[2326]: I1108 19:34:55.632981 2326 scope_container.go:75] "TopologyHints" hints={} pod="default/kube-mgrr-2" containerName="kube-mgrr-2"
Nov 08 19:34:55 minikube kubelet[2326]: I1108 19:34:55.633153 2326 policy_static.go:541] "TopologyHints generated" pod="default/kube-mgrr-2" containerName="kube-mgrr-2" cpuHints=[{"NUMANodeAffinity":1,"Preferred":true},{"NUMANodeAffinity":2,"Preferred":true},{"NUMANodeAffinity":3,"Preferred":false}]
Nov 08 19:34:55 minikube kubelet[2326]: I1108 19:34:55.633187 2326 scope_container.go:75] "TopologyHints" hints={"cpu":[{"NUMANodeAffinity":1,"Preferred":true},{"NUMANodeAffinity":2,"Preferred":true},{"NUMANodeAffinity":3,"Preferred":false}]} pod="default/kube-mgrr-2" containerName="kube-mgrr-2"
Nov 08 19:34:55 minikube kubelet[2326]: I1108 19:34:55.633254 2326 scope_container.go:75] "TopologyHints" hints={} pod="default/kube-mgrr-2" containerName="kube-mgrr-2"
Nov 08 19:34:55 minikube kubelet[2326]: I1108 19:34:55.633276 2326 policy.go:71] "Hint Provider has no preference for NUMA affinity with any resource"
Nov 08 19:34:55 minikube kubelet[2326]: I1108 19:34:55.633289 2326 policy.go:71] "Hint Provider has no preference for NUMA affinity with any resource"
Nov 08 19:34:55 minikube kubelet[2326]: I1108 19:34:55.633304 2326 scope_container.go:83] "ContainerTopologyHint" bestHint={"NUMANodeAffinity":1,"Preferred":true}
Nov 08 19:34:55 minikube kubelet[2326]: I1108 19:34:55.633330 2326 scope_container.go:50] "Best TopologyHint" bestHint={"NUMANodeAffinity":1,"Preferred":true} pod="default/kube-mgrr-2" containerName="kube-mgrr-2"
Nov 08 19:34:55 minikube kubelet[2326]: I1108 19:34:55.633342 2326 scope_container.go:56] "Topology Affinity" bestHint={"NUMANodeAffinity":1,"Preferred":true} pod="default/kube-mgrr-2" containerName="kube-mgrr-2"
Nov 08 19:34:55 minikube kubelet[2326]: I1108 19:34:55.633391 2326 policy_static.go:303] "Static policy: Allocate" pod="default/kube-mgrr-2" containerName="kube-mgrr-2"
Nov 08 19:34:55 minikube kubelet[2326]: I1108 19:34:55.633407 2326 policy_static.go:352] "Topology Affinity" pod="default/kube-mgrr-2" containerName="kube-mgrr-2" affinity={"NUMANodeAffinity":1,"Preferred":true}
Nov 08 19:34:55 minikube kubelet[2326]: I1108 19:34:55.633421 2326 policy_static.go:392] "AllocateCPUs" numCPUs=1 socket="01"
Nov 08 19:34:55 minikube kubelet[2326]: I1108 19:34:55.633496 2326 state_mem.go:88] "Updated default CPUSet" cpuSet="0,3-7"
Nov 08 19:34:55 minikube kubelet[2326]: I1108 19:34:55.635778 2326 policy_static.go:424] "AllocateCPUs" result="2"
Nov 08 19:34:55 minikube kubelet[2326]: I1108 19:34:55.635830 2326 state_mem.go:80] "Updated desired CPUSet" podUID="2e769357-b700-49fa-96dc-2416a0379cb9" containerName="kube-mgrr-2" cpuSet="2"
Nov 08 19:34:55 minikube kubelet[2326]: I1108 19:34:55.638325 2326 policy_static.go:106] "Allocate" pod="default/kube-mgrr-2" containerName="kube-mgrr-2"
Nov 08 19:34:55 minikube kubelet[2326]: I1108 19:34:55.638367 2326 policy_static.go:123] "Got topology affinity" pod="default/kube-mgrr-2" podUID="2e769357-b700-49fa-96dc-2416a0379cb9" containerName="kube-mgrr-2" hint={"NUMANodeAffinity":1,"Preferred":true}
Nov 08 19:34:55 minikube kubelet[2326]: E1108 19:34:55.638418 2326 memory_manager.go:257] "Allocate error" err="[memorymanager] failed to find NUMA nodes to extend the current topology hint"
Nov 08 19:34:55 minikube kubelet[2326]: I1108 19:34:55.638450 2326 kubelet.go:2306] "Pod admission denied" podUID="2e769357-b700-49fa-96dc-2416a0379cb9" pod="default/kube-mgrr-2" reason="UnexpectedAdmissionError" message="Allocate failed due to [memorymanager] failed to find NUMA nodes to extend the current topology hint, which is unexpected"
I modified the second pod to request '200m' worth of CPU rather than a whole CPU, and the pod started up as expected.
### What did you expect to happen?
The pod should have started up with one exclusive CPU and memory from both NUMA nodes.
### How can we reproduce it (as minimally and precisely as possible)?
Set the memory manager policy to "Static" and topology manager policy to "restricted". On a two-NUMA-node worker node create a smallish Pod with a single exclusive CPU (request/limit both 1 cpu) that easily fits on one NUMA node worth of memory. Create a second Pod in the Guaranteed QoS class with a big enough memory request that it cannot fit on one NUMA node, with cpu request/limit both set to 1 cpu.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
cfriesen@debian:~$ minikube kubectl -- version
Client Version: v1.31.2
Kustomize Version: v5.4.2
Server Version: v1.31.2
```
</details>
### Cloud provider
<details>
n/a
</details>
### OS version
<details>
```console
# On Linux:
cfriesen@debian:~$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
$ uname -a
cfriesen@debian:~$ uname -a
Linux debian 6.1.0-26-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.112-1 (2024-09-30) x86_64 GNU/Linux
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/node,priority/important-longterm,triage/accepted | low | Critical |
2,645,134,251 | PowerToys | Words are inverted in text extractor when I extract | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
TextExtractor
### Steps to reproduce
Text before I extract:

Text after I extract:
Function Axis Horizontal
### ✔️ Expected Behavior
It should extract it: Horizontal Axis Function
### ❌ Actual Behavior
Function Axis Horizontal
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response,Product-Text Extractor | low | Minor |
2,645,152,392 | pytorch | Compiling `as_strided` call on a view input errors. | ### ๐ Describe the bug
Not sure whether this kind of thing is intentionally unsupported, but trying to get to the base tensor of a view using the `as_strided` operation fails when trying to `torch.compile` it.
```python
import torch

def foo(x):
    v = x.as_strided((10,), (1,), storage_offset=5)
    v.add_(1)
    return v

def args():
    base = torch.arange(20)
    x = base[:10]
    return (x,)
>>> print(foo(*args()))
tensor([ 6, 7, 8, 9, 10, 11, 12, 13, 14, 15])
>>> print(torch.compile()(foo)(*args()))
Traceback (most recent call last):
File "examples/test.py", line 14, in <module>
print(torch.compile(foo)(*args()))
File "torch/_dynamo/eval_frame.py", line 556, in _fn
return fn(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 1423, in __call__
return self._torchdynamo_orig_callable(
File "torch/_dynamo/convert_frame.py", line 1208, in __call__
result = self._inner_convert(
File "torch/_dynamo/convert_frame.py", line 549, in __call__
return _compile(
File "torch/_dynamo/convert_frame.py", line 977, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "torch/_dynamo/convert_frame.py", line 708, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "torch/_utils_internal.py", line 95, in wrapper_function
return function(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 743, in _compile_inner
out_code = transform_code_object(code, transform)
File "torch/_dynamo/bytecode_transformation.py", line 1348, in transform_code_object
transformations(instructions, code_options)
File "torch/_dynamo/convert_frame.py", line 233, in _fn
return fn(*args, **kwargs)
File "torch/_dynamo/convert_frame.py", line 662, in transform
tracer.run()
File "torch/_dynamo/symbolic_convert.py", line 2909, in run
super().run()
File "torch/_dynamo/symbolic_convert.py", line 1115, in run
while self.step():
File "torch/_dynamo/symbolic_convert.py", line 1027, in step
self.dispatch_table[inst.opcode](self, inst)
File "torch/_dynamo/symbolic_convert.py", line 3100, in RETURN_VALUE
self._return(inst)
File "torch/_dynamo/symbolic_convert.py", line 3085, in _return
self.output.compile_subgraph(
File "torch/_dynamo/output_graph.py", line 1143, in compile_subgraph
self.compile_and_call_fx_graph(tx, list(reversed(stack_values)), root)
File "torch/_dynamo/output_graph.py", line 1414, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "torch/_dynamo/output_graph.py", line 1463, in call_user_compiler
return self._call_user_compiler(gm)
File "torch/_dynamo/output_graph.py", line 1512, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
File "torch/_dynamo/output_graph.py", line 1493, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "torch/_dynamo/repro/after_dynamo.py", line 130, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "torch/__init__.py", line 2294, in __call__
return compile_fx(model_, inputs_, config_patches=self.config)
File "torch/_inductor/compile_fx.py", line 1707, in compile_fx
return aot_autograd(
File "torch/_dynamo/backends/common.py", line 72, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "torch/_functorch/aot_autograd.py", line 1102, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "torch/_functorch/aot_autograd.py", line 1078, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "torch/_functorch/aot_autograd.py", line 526, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "torch/_functorch/aot_autograd.py", line 634, in _create_aot_dispatcher_function
fw_metadata = run_functionalized_fw_and_collect_metadata(
File "torch/_functorch/_aot_autograd/collect_metadata_analysis.py", line 197, in inner
flat_f_outs = f(*flat_f_args)
File "torch/_functorch/_aot_autograd/traced_function_transforms.py", line 875, in functional_call
out = PropagateUnbackedSymInts(mod).run(
File "torch/fx/interpreter.py", line 167, in run
self.env[node] = self.run_node(node)
File "torch/fx/experimental/symbolic_shapes.py", line 6560, in run_node
result = super().run_node(n)
File "torch/fx/interpreter.py", line 228, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "torch/fx/interpreter.py", line 332, in call_method
return getattr(self_obj, target)(*args_tail, **kwargs)
File "torch/_subclasses/functional_tensor.py", line 545, in __torch_dispatch__
outs_unwrapped = func._op_dk(
File "torch/utils/_stats.py", line 21, in wrapper
return fn(*args, **kwargs)
File "torch/_subclasses/fake_tensor.py", line 1271, in __torch_dispatch__
return self.dispatch(func, types, args, kwargs)
File "torch/_subclasses/fake_tensor.py", line 1813, in dispatch
return self._cached_dispatch_impl(func, types, args, kwargs)
File "torch/_subclasses/fake_tensor.py", line 1372, in _cached_dispatch_impl
output = self._dispatch_impl(func, types, args, kwargs)
File "torch/_subclasses/fake_tensor.py", line 2297, in _dispatch_impl
r = func(*args, **kwargs)
File "torch/_ops.py", line 723, in __call__
return self._op(*args, **kwargs)
File "torch/_prims_common/wrappers.py", line 291, in _fn
result = fn(*args, **kwargs)
File "torch/_refs/__init__.py", line 2704, in as_strided_scatter
return prims.as_strided_scatter(input, src, size, stride, storage_offset_int)
File "torch/_ops.py", line 723, in __call__
return self._op(*args, **kwargs)
File "torch/_library/fake_impl.py", line 95, in meta_kernel
return fake_impl_holder.kernel(*args, **kwargs)
File "torch/_library/utils.py", line 31, in __call__
return self.func(*args, **kwargs)
File "torch/library.py", line 1186, in inner
return func(*args, **kwargs)
File "torch/_library/custom_ops.py", line 588, in fake_impl
return self._abstract_fn(*args, **kwargs)
File "torch/_prims/__init__.py", line 1704, in _as_strided_scatter_meta
torch._check(
File "torch/__init__.py", line 1615, in _check
_check_with(RuntimeError, cond, message)
File "torch/__init__.py", line 1597, in _check_with
raise error_type(message_evaluated)
torch._dynamo.exc.BackendCompilerFailed: backend='inductor' raised:
RuntimeError: as_strided_scatter: sizes [10], strides [1], storage offset 5 and itemsize 8 requiring a storage size of 120 are out of bounds for storage of size 80
While executing %add_ : [num_users=0] = call_method[target=add_](args = (%v, 1), kwargs = {})
Original traceback:
File "examples/test.py", line 5, in foo
v.add_(1)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
```
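For reference, the 120-vs-80 byte figures in the error message are plain strided-storage arithmetic; here is a small sketch of the bounds check (an illustration, not PyTorch's actual code):

```python
def required_storage_bytes(sizes, strides, storage_offset, itemsize):
    # Largest linear element index the view touches, plus one, times itemsize.
    last_index = storage_offset + sum((s - 1) * st for s, st in zip(sizes, strides))
    return (last_index + 1) * itemsize

# The view: sizes [10], strides [1], offset 5, int64 itemsize 8 -> 120 bytes.
print(required_storage_bytes((10,), (1,), 5, 8))  # 120
# The slice x = base[:10], treated as owning its own storage -> 80 bytes.
print(required_storage_bytes((10,), (1,), 0, 8))  # 80
```

In eager mode the view aliases `base`'s full 20-element storage, so the access is in bounds; under functionalization the sliced input is treated as owning only 80 bytes, which is why `as_strided_scatter`'s meta check rejects it.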
### Versions
PyTorch version: 2.6.0a0+git362ca54
Is debug build: True
CUDA used to build PyTorch: 12.6
ROCM used to build PyTorch: N/A
cc @ezyang @chauhang @penguinwu @eellison @zou3519 @bdhirsh @yf225 | low priority,triaged,oncall: pt2,module: fakeTensor,module: pt2-dispatcher | low | Critical |
2,645,159,057 | flutter | [flutter_svg] Wrong display of SVG | ### Steps to reproduce
1. Grab the first SVG from the Search-to-Close demo of https://shapeshifter.design (attached)
2. Create a SvgPicture that loads this SVG
3. Observe the display (attached)

### Expected results
Correct rendering
### Actual results
Wrong rendering
### Code sample
<details open><summary>Code sample</summary>
```dart
SvgPicture.asset('assets/frame0.svg')
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
P:\Flutter\bin\flutter.bat doctor --verbose
[✓] Flutter (Channel stable, 3.24.4, on Microsoft Windows [Version 10.0.22631.4317], locale en-US)
    • Flutter version 3.24.4 on channel stable at P:\Flutter
    • Upstream repository https://github.com/flutter/flutter.git
    • Framework revision 603104015d (2 weeks ago), 2024-10-24 08:01:25 -0700
    • Engine revision db49896cf2
    • Dart version 3.5.4
    • DevTools version 2.37.3
[✓] Windows Version (Installed version of Windows is version 10 or higher)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
    • Android SDK at p:\Android_SDK
    • Platform android-34, build-tools 34.0.0
    • ANDROID_SDK_ROOT = p:\Android_SDK
    • Java binary at: c:\Program Files\Android\Android Studio\jbr\bin\java
    • Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11)
    • All Android licenses accepted.
[X] Chrome - develop for the web (Cannot find Chrome executable at .\Google\Chrome\Application\chrome.exe)
    ! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[✓] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.11.5)
    • Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
    • Visual Studio Community 2022 version 17.11.35327.3
    • Windows 10 SDK version 10.0.22621.0
[✓] Android Studio (version 2024.2)
    • Android Studio at C:\Program Files\Android\Android Studio
    • Flutter plugin can be installed from:
      https://plugins.jetbrains.com/plugin/9212-flutter
    • Dart plugin can be installed from:
      https://plugins.jetbrains.com/plugin/6351-dart
    • Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11)
[✓] Connected device (2 available)
    • Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4317]
    • Edge (web) • edge • web-javascript • Microsoft Edge 130.0.2849.80
[✓] Network resources
    • All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| package,team-ecosystem,has reproducible steps,P2,triaged-ecosystem,found in release: 3.24,found in release: 3.27,p: flutter_svg | low | Minor |
2,645,169,108 | node | Incorrect version for `PerformanceMark` and `PerformanceMeasure` of `perf_hooks` module | ### Affected URL(s)
https://nodejs.org/docs/latest/api/perf_hooks.html
### Description of the problem
The doc says [`PerformanceMark`](https://nodejs.org/docs/latest/api/perf_hooks.html#class-performancemark) and [`PerformanceMeasure`](https://nodejs.org/docs/latest/api/perf_hooks.html#class-performancemeasure) were added in v18.2.0 and v16.17.0; this is not true (those versions were introduced by the docs change in https://github.com/nodejs/node/pull/44483).
`PerformanceMark` was implemented and made accessible via the `perf_hooks` module in https://github.com/nodejs/node/pull/37136, in [v16.0.0](https://nodejs.org/zh-cn/blog/release/v16.0.0).
`PerformanceMeasure` was also implemented in https://github.com/nodejs/node/pull/37136, in [v16.0.0](https://nodejs.org/zh-cn/blog/release/v16.0.0), but only became accessible via the `perf_hooks` module in https://github.com/nodejs/node/pull/39297, in [v16.7.0](https://nodejs.org/zh-cn/blog/release/v16.7.0).
---
manually test in local node runtime:


---
see also https://github.com/mdn/browser-compat-data/pull/25008 | doc | low | Major |
2,645,177,620 | kubernetes | Integration tests do not have gitVersion information | Originally raised by @pohly. https://kubernetes.slack.com/archives/C0EG7JC6T/p1730965005279489
https://github.com/kubernetes/kubernetes/blob/530278b1ded93c5416ce1badfb6b7b1ac475694a/staging/src/k8s.io/apiserver/pkg/endpoints/deprecation/deprecation.go#L74-L77
Current major and minor are both zero, so (all?) non-GA APIs are considered deprecated. I wonder whether that `return true` should be a `return false`.
This occurs because [k8s.io/component-base/version](http://k8s.io/component-base/version) depends on build flags to inject the git version. This injection does not happen when using go test manually (of course) but also not when using make test (might be an oversight that can be fixed).
version.DefaultKubeBinaryVersion might be a better source of the major/minor version but we do have an issue to remove it in the future https://github.com/kubernetes/kubernetes/issues/126686
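A rough plain-Python model of why an uninjected 0.0 version trips the check (hypothetical names and logic; see the linked Go source for the real code):

```python
def is_past_removal(current, removed):
    """Hypothetical sketch (assumed names, not the real Go code): decide
    whether `current` is at or past an API's removal release."""
    major, minor = current
    if major == 0 and minor == 0:
        # Version was never injected (e.g. plain `go test`), so the code
        # linked above returns true and every non-GA API looks removed.
        return True
    return current >= removed

# Integration tests without ldflags injection report version (0, 0):
print(is_past_removal((0, 0), (1, 29)))   # True  (the surprising case)
print(is_past_removal((1, 31), (1, 29)))  # True
print(is_past_removal((1, 28), (1, 29)))  # False
```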
/cc @BenTheElder
/sig testing
| sig/api-machinery,sig/testing,sig/release,sig/architecture,triage/accepted | medium | Minor |
2,645,180,460 | PowerToys | Context menu to fix 3D model orientation | ### Description of the new feature / enhancement
Windows natively has an explorer context menu button to rotate images 90 degrees clockwise or counterclockwise without having to enter a separate image editor program.
Now that PowerToys supports previews of 3D models, it would be nice to have a similar context menu button to rotate an STL file 90 degrees in x, y, or z, for quickly fixing badly-oriented models so the preview image points in the right direction.
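For reference, an axis-aligned 90-degree rotation of vertices is an exact transform (no floating-point drift), which is what makes a quick "fix orientation" action safe; a minimal plain-Python sketch with hypothetical helper names:

```python
def rotate_90(vertex, axis):
    # Exact 90-degree counterclockwise rotation about one coordinate axis:
    # x: (x, y, z) -> (x, -z, y);  y: -> (z, y, -x);  z: -> (-y, x, z)
    x, y, z = vertex
    if axis == "x":
        return (x, -z, y)
    if axis == "y":
        return (z, y, -x)
    if axis == "z":
        return (-y, x, z)
    raise ValueError(f"unknown axis: {axis}")

# Re-orient every vertex of a model, e.g. to stand an upside-down part up:
model = [(1.0, 2.0, 3.0), (0.0, 0.0, 1.0)]
fixed = [rotate_90(v, "x") for v in model]
```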
### Scenario when this would be used?
Powertoys supports previews of 3D model files, but often these files are oriented oddly or upside-down / backward, making the preview image useless. Would be handy to have a tool to quickly correct the orientation of the file so model previews are pointed in the right direction.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,645,203,685 | flutter | SemanticsBinding.instance.ensureSemantics causes inconsistent state for mobile | ### Steps to reproduce
```dart
import 'package:flutter/material.dart';
import 'package:flutter/semantics.dart';
void main() {
runApp(const TabBarDemo());
SemanticsBinding.instance.ensureSemantics();
}
class TabBarDemo extends StatelessWidget {
const TabBarDemo({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: DefaultTabController(
length: 3,
child: Scaffold(
appBar: AppBar(
bottom: const TabBar(
tabs: [
Tab(icon: Icon(Icons.directions_car)),
Tab(icon: Icon(Icons.directions_transit)),
Tab(icon: Icon(Icons.directions_bike)),
],
),
title: const Text('Tabs Demo'),
),
body: const TabBarView(
children: [
Icon(Icons.directions_car),
Icon(Icons.directions_transit),
Icon(Icons.directions_bike),
],
),
),
),
);
}
}
```
1. Launch the app on Android or iOS without any assistive technologies like VoiceOver or TalkBack enabled.
2. After the app is launched, turn on VoiceOver or TalkBack.
### Actual results
TalkBack and VoiceOver cannot recognize the app.
Upon closer investigation, this is because the enabled flag in the engine is never flipped by calls to `ensureSemantics`, which causes the engine shell to drop all semantics updates.
Since semantics updates are sequential, the accessibility tree in the mobile embedding cannot be constructed even after VoiceOver or TalkBack is turned on later.
| team-accessibility | low | Minor |
2,645,234,983 | node | Missing doc for `PerformanceResourceTiming.{initiatorType, nextHopProtocol, responseStart, deliveryType and responseStatus}` | ### Affected URL(s)
https://nodejs.org/docs/latest/api/perf_hooks.html
### Description of the problem
the `initiatorType`, `nextHopProtocol`, `responseStart`, `deliveryType` and `responseStatus` fields of `PerformanceResourceTiming` are missing from the documentation but are supported by Node.js; they need to be documented
see https://github.com/mdn/browser-compat-data/pull/25010 for more details | doc | low | Major |
2,645,252,454 | pytorch | FSDP Hybrid Shard worse loss than Full Shard | ### 🐛 Describe the bug
When using HYBRID_SHARD instead of FULL_SHARD on PyTorch 2.4.1, the loss of our model behaves similarly to when it is being trained on one node (despite training on two). When using FULL_SHARD, the loss behaves as expected when training on two nodes (so twice the effective batch size). Additionally, the problem also happens with FULL_SHARD on PyTorch 2.5. See the screenshot below.

The device meshes we use are as follows:
```
if self.strategy.sharding_strategy in [ShardingStrategy._HYBRID_SHARD_ZERO2, ShardingStrategy.HYBRID_SHARD]:
    self.device_mesh = init_device_mesh("cuda", (world_size // 8, 8), mesh_dim_names=("replicate", "shard"))
else:
    self.device_mesh = init_device_mesh("cuda", (world_size,))
```
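For reference, the (world_size // 8, 8) mesh assigns each rank a (replicate, shard) coordinate like this (a plain-Python illustration of the layout, not torch code):

```python
def mesh_coords(rank, gpus_per_node=8):
    # HYBRID_SHARD shards parameters across the GPUs within a node and
    # replicates across nodes, so rank -> (replicate_index, shard_index).
    return (rank // gpus_per_node, rank % gpus_per_node)

# Two 8-GPU nodes -> world_size 16 -> mesh shape (2, 8):
coords = [mesh_coords(r) for r in range(16)]
```

One plausible symptom matching the observed loss curve would be gradients being reduced only along the shard dimension and not averaged across the replicate dimension, so each node effectively trains as if it were alone; this is speculation, not a confirmed diagnosis.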
### Versions
https://gist.github.com/zaptrem/799d0c99c9e69067eb937ab1a55f1c69
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang | oncall: distributed,module: fsdp | low | Critical |
2,645,256,533 | PowerToys | [PowerRename] Regex help in UI: Add ^, $, .* | ### Description of the new feature / enhancement
Adding the following syntax to the regex help flyout in the PowerRename window will be helpful for users that have not so much knowledge about regex:
- `^` = Start of the string.
- `$` = End of the string.
- `.*` = Any number of any characters. (In addition to the `.` syntax explanation.)
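A quick illustration of how these anchors let users add text without replacing anything (shown here with Python's `re` module; PowerRename's regex flavor treats `^`, `$`, and `.*` the same way):

```python
import re

name = "IMG_1234"
print(re.sub(r"^", "holiday_", name))  # holiday_IMG_1234  (prepend text)
print(re.sub(r"$", "_edited", name))   # IMG_1234_edited   (append text)
print(re.sub(r"^.*$", "photo", name))  # photo             (replace whole name)
```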
### Scenario when this would be used?
We sometimes get issues asking how to add text to the start or end of a file name without replacing explicit characters. (Example issue: #35797)
### Supporting information
_No response_ | Idea-Enhancement,Help Wanted,Good first issue,Status-In progress,Area-User Interface,Priority-3,Cost-Small | low | Major |
2,645,257,584 | vscode | webviewPanel.viewColumn set incorrectly |
Type: <b>Bug</b>
in an extension, call
```
const webviewPanel=vscode.window.createWebviewPanel(id, title, vscode.ViewColumn.Beside);
```
note that webviewPanel.viewColumn returns undefined
internally the viewColumn is kept as vscode.ViewColumn.Beside = -2, and the accessor returns undefined when it is less than 0.
The documentation for ViewColumn.Beside has this:
A *symbolic* editor column representing the column to the side of the active one. This value can be used when opening editors, but the *resolved* {@link TextEditor.viewColumn viewColumn}-value of editors will always be `One`, `Two`, `Three`,... or `undefined` but never `Beside`.
I accept that it doesn't explicitly mention WebviewPanel, but I think it should be consistent with TextEditor.
VS Code version: Code 1.95.1 (65edc4939843c90c34d61f4ce11704f09d3e5cb6, 2024-10-31T05:14:54.222Z)
OS version: Windows_NT x64 10.0.22621
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen Threadripper 1950X 16-Core Processor (32 x 3394)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|63.89GB (42.44GB free)|
|Process Argv|-n --crash-reporter-id 22b24386-c25b-404a-a2ae-9ae86dee0ec7|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (7)</summary>
Extension|Author (truncated)|Version
---|---|---
amazon-q-vscode|ama|1.34.0
npm-intellisense|chr|1.4.5
vscode-eslint|dba|3.0.10
thumbnails|iso|0.1.4
js-debug-nightly|ms-|2024.11.417
clangformat|sea|2.0.2
cursor-align|yo1|2.0.2
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
01bff139:31013167
dvdeprecation:31068756
dwnewjupyter:31046869
newcmakeconfigv2:31071590
impr_priority:31102340
nativerepl2:31139839
refactort:31108082
pythonrstrctxt:31112756
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
j44ff735:31175163
```
</details>
<!-- generated by issue reporter --> | bug,help wanted,webview | low | Critical |
2,645,258,169 | ui | [bug]: RTL Direction is not supported | ### Describe the bug
radix-ui -> Supports Right to Left direction.
tailwindcss -> Supports Right to Left direction "rtl:"
shadcn-ui -> ?
Most components do not support right-to-left direction.
In every new project I need to modify the components myself.
At least half a billion people use right-to-left scripts.
Do you think it is worth making this library more awesome by supporting RTL?
### Affected component/components
All Components
### How to reproduce
---
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
MacOS, Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,645,321,248 | godot | Number int or float field export variables do not apply when running the game unless you click off or push enter beforehand | ### Tested versions
v4.4.dev4.official [36e6207bb], v4.4.dev3.official [f4af8201b]
### System information
Godot v4.4.dev4 - Windows 10.0.19045 - Multi-window, 9 monitors - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 3070 (NVIDIA; 32.0.15.6590) - 11th Gen Intel(R) Core(TM) i9-11900K @ 3.50GHz (16 threads)
### Issue description
Number export variable fields do not apply an edited value before running the game. I understand that pressing Enter or clicking out of the field commits the value; I just figured it should auto-apply when you click to run the game.
### Steps to reproduce
1. Click into a parameter field or a number export variable field.
2. Change the value, but do not press Enter or click into another field.
3. Run the game.
4. The new value is not applied unless Enter was pressed or the field lost focus.
https://github.com/user-attachments/assets/d87621e5-6ece-4872-84ad-5d28f0eb7723
### Minimal reproduction project (MRP)
[mrp_apply_number_int_and_float_field_entry_edit_before_running_game.zip](https://github.com/user-attachments/files/17684826/mrp_apply_number_int_and_float_field_entry_edit_before_running_game.zip)
| discussion,topic:editor,usability,topic:gui | low | Minor |
2,645,328,864 | svelte | developer docs | ### Describe the problem
There are parts of Svelte I don't understand since the rewrite. This makes it harder for me to contribute, but mostly just harder for me to understand how Svelte interacts with SvelteKit, to have productive conversations with other maintainers, etc.
### Describe the proposed solution
Write some basic developer documentation. If the only thing covered were the anchor comments I would be happy enough :smile: I'd really like to understand how hydration, dom creation, and transitions work and interact with those comments.
### Importance
would make my life easier | documentation | low | Major |
2,645,346,844 | flutter | [web] Shift+Tab breaks when starting on browser navigation bar. | It seems that Flutter web doesn't traverse focus in the right order when it misses the "Shift" keydown event.
This happens when the user attempts to reverse focus from the UI elements of the browser, like the address bar, back into the Flutter app.
1. Run the sample code (below)
2. Focus the browser address bar
3. Press Shift+Tab until focus reaches flutter.
* **Expected:** Flutter should traverse focus in reverse order (FaB -> button -> ...)
* **Actual:** Flutter traverses focus in normal order (Button -> FaB -> ...)
<details>
<summary>Code: <tt>main.dart</tt></summary>
This is just the default app with a button added to the center column. To make the effect even more noticeable, add more buttons :)
```dart
import 'package:flutter/material.dart';

void main() {
  runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: const MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key, required this.title});
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int _counter = 0;
void _incrementCounter() {
setState(() {
_counter++;
});
}
void _resetCounter() {
setState(() {
_counter = 0;
});
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
backgroundColor: Theme.of(context).colorScheme.inversePrimary,
title: Text(widget.title),
),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
const Text(
'You have pushed the button this many times:',
),
Text(
'$_counter',
style: Theme.of(context).textTheme.headlineMedium,
),
TextButton.icon(
onPressed: _resetCounter,
label: Text('Reset'),
icon: Icon(Icons.delete_rounded),
)
],
),
),
floatingActionButton: FloatingActionButton(
onPressed: _incrementCounter,
tooltip: 'Increment',
child: const Icon(Icons.add),
),
);
}
}
```
</details> | framework,platform-web,has reproducible steps,P2,team-web,triaged-web,found in release: 3.24,found in release: 3.27 | low | Major |
2,645,396,415 | ollama | Support importing vision models from Safetensors in `ollama create` | ### What is the issue?
I tried to import a finetuned llama-3.2-11b-vision model, but I got "Error: unsupported architecture."
In order to make sure my model is not the problem, I downloaded [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) from Huggingface.
I copied modelfile from `Ollama show llama3.2-vision --modelfile`.
Then edited the modelfile and pointed `FROM` to the downloaded model from HF.
When I run `ollama create llama-vision -f llama-vision.modelfile`, I get this:
```bash
transferring model data 100%
converting model
Error: unsupported architecture
```
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.4.0 | feature request,create | low | Critical |
2,645,427,550 | node | Worker Threads feature are experimental between v10.5.0 and v11.7.0 | ### Affected URL(s)
https://nodejs.org/docs/latest/api/worker_threads.html
### Description of the problem
The Worker Threads feature is experimental between v10.5.0 and v11.7.0 and must be enabled by passing the `--experimental-worker` runtime flag; currently the doc mentions no info about this
see https://github.com/mdn/browser-compat-data/pull/25012 for more detail | doc | low | Major |