| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,611,473,502 | godot | Button node registers press when disabled | ### Tested versions
I'm only using 4.3
### System information
Godot v4.3.stable - Windows 10.0.19045 - GLES3 (Compatibility) - NVIDIA GeForce GTX 970 (NVIDIA; 32.0.15.6094) - AMD Ryzen 5 3600 6-Core Processor (12 Threads)
### Issue description
The Button is not fully inactive when disabled.
You can still click a disabled button, and a release is then registered after it is enabled again, without a button-down signal ever having been emitted.
### Steps to reproduce
Button node -> set to disabled
Press the button and hold the mouse down - NOTE: no down signal is sent (as expected), but the button still registers that it has been pressed (which it shouldn't)
Keep the disabled button pressed
Wait for timer to enable the button
Release button - button up signal IS sent (even though it wasn't pressed)
Button node -> set to disabled
Hold down the mouse outside the button - NOTE: No down signal is sent (as expected)
Keep the mouse pressed
Wait for timer to enable the button
Move mouse over button
Release button - button up signal is NOT sent (as expected)
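The faulty sequence can be modeled with a toy state machine (plain Python, not Godot's actual implementation; the class and signal names here are purely illustrative):

```python
class ToyButton:
    """Minimal model of the reported behavior: a disabled button
    still records the press, so a later release fires button_up."""

    def __init__(self):
        self.disabled = False
        self.pressed = False  # internal press state (the bug: set even when disabled)
        self.signals = []

    def mouse_down(self):
        # The reported bug: the press is recorded even while disabled,
        # although no 'button_down' signal is emitted.
        self.pressed = True
        if not self.disabled:
            self.signals.append("button_down")

    def mouse_up(self):
        if self.pressed and not self.disabled:
            self.signals.append("button_up")
        self.pressed = False


btn = ToyButton()
btn.disabled = True
btn.mouse_down()      # no signal, but the press state is recorded
btn.disabled = False  # the timer re-enables the button
btn.mouse_up()        # spurious 'button_up' with no matching 'button_down'
print(btn.signals)    # ['button_up']
```

The expected fix would be for `mouse_down()` to ignore the press entirely while disabled, so `signals` stays empty.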
### Minimal reproduction project (MRP)
[button-bug-report.zip](https://github.com/user-attachments/files/17506970/button-bug-report.zip) | bug,topic:input,topic:gui | low | Critical |
2,611,511,636 | kubernetes | Allow different orders in OrderedReady podManagementPolicy | ### What would you like to be added?
On top of the existing [OrderedReady](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#orderedready-pod-management) policy for a StatefulSet, I'd like to be able to pick minor variants of the order, such as descending rather than ascending.
### Why is this needed?
When the pods in the StatefulSet carry significant state (for example clustered databases where each pod has its own volume with one shard of the data), there can be significant differences between the first pod and the last, so it can become important to pick the order in which they are deployed. In my case, the older pods (with alphabetically lower names) hold older data, while newer pods hold newer data and should ideally come online first. Reversing the order followed by the controller would allow my workload to become mostly functional again in minutes rather than hours after a major maintenance such as a Kubernetes version upgrade of the nodes.
I've considered workarounds like [custom schedulers](https://kubernetes.io/docs/tasks/extend-kubernetes/configure-multiple-schedulers/), but they seem like overkill and aren't always supported natively by most Helm charts out there.
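For illustration, the difference between today's ascending order and the requested descending variant can be sketched in plain Python (the `descending` flag is hypothetical and only models the proposed behavior; pod names are made up):

```python
def start_order(replicas: int, descending: bool = False) -> list:
    """Return the order in which an OrderedReady StatefulSet starts pods:
    ascending today; descending under the requested variant."""
    indices = range(replicas)
    if descending:
        indices = reversed(indices)
    return [f"db-{i}" for i in indices]


print(start_order(3))                   # ['db-0', 'db-1', 'db-2']  (current behavior)
print(start_order(3, descending=True))  # ['db-2', 'db-1', 'db-0']  (requested)
```

With the descending variant, `db-2` (holding the newest shard) would become ready first after a full restart.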
https://github.com/kubernetes/kubernetes/issues/88362 was withdrawn back in 2020 but it was more about interdependencies between pods. | kind/feature,sig/apps,lifecycle/stale,needs-triage | low | Minor |
2,611,601,879 | ui | [bug]: In the Button components section, clicking the login button navigates to a 404 "page could not be found" page. | ### Describe the bug
In the Button components docs section, clicking the login button navigates to **/login** and shows a 404 "Page could not be found" error.


### Affected component/components
Button
### How to reproduce
1. Go to Button components in docs section
2. Scroll down to login button
3. Click on login button
4. You will see a 404 "page could not be found" error
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Ubuntu 20
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,611,643,355 | pytorch | Strange recompilations on torch 2.5 + FSDP + UNet | ### 🐛 Describe the bug
Simple compilation of the UNet model works fine, but the FSDP-wrapped UNet gets recompiled on every block. In a real setup the cache-size limit is rapidly reached.
Code:
```python
import argparse
import os
from contextlib import nullcontext
from typing import List, Optional

import torch
import torch.distributed as dist
import torch.nn.functional as F
from einops import rearrange
from torch import nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision, ShardingStrategy
from torch.distributed.fsdp.sharded_grad_scaler import ShardedGradScaler
from torch.distributed.fsdp.wrap import ModuleWrapPolicy
from torch.nn import RMSNorm
from torch.nn.parallel import DistributedDataParallel
from tqdm.auto import tqdm

torch._dynamo.config.inline_inbuilt_nn_modules = False
torch._dynamo.config.optimize_ddp = False


def setup(rank, world_size):
    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "12355"
    torch.cuda.set_device(rank)
    dist.init_process_group("nccl", rank=rank, world_size=world_size)


def cleanup():
    dist.destroy_process_group()


def zero_module(module):
    """
    Zero out the parameters of a module and return it.
    """
    for p in module.parameters():
        p.detach().zero_()
    return module


class SpatialToSeq(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x, *args, **kwargs):
        b, c, h, w = x.shape
        return x.permute(0, 2, 3, 1).view(b, h * w, c)


class SeqToSpatial(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x, *args, **kwargs):
        b, n, c = x.shape
        spatial_dim = int(n**0.5)
        return x.permute(0, 2, 1).view(b, c, spatial_dim, spatial_dim)


class SelfAttention(nn.Module):
    def __init__(self, input_dim: int, out_dim: int, d_head: int):
        super().__init__()
        self.input_dim = input_dim
        self.out_dim = out_dim
        self.d_head = d_head
        self.n_heads = self.out_dim // self.d_head
        self.d_attn = self.out_dim
        self.pre_norm = nn.LayerNorm(input_dim)
        self.qkv_proj = nn.Linear(input_dim, 3 * self.d_attn, bias=False)
        self.q_norm = RMSNorm(self.d_attn, eps=1e-6)
        self.k_norm = RMSNorm(self.d_attn, eps=1e-6)
        self.to_out = nn.Linear(self.d_attn, self.out_dim)

    def forward(self, x: torch.Tensor, cond: Optional[torch.Tensor] = None, cond_mask: Optional[torch.Tensor] = None):
        x = self.pre_norm(x)
        q, k, v = self.qkv_proj(x).chunk(dim=-1, chunks=3)
        q = self.q_norm(q)
        k = self.k_norm(k)
        q = rearrange(q, "b n (h d) -> b h n d", h=self.n_heads)
        k = rearrange(k, "b n (h d) -> b h n d", h=self.n_heads)
        v = rearrange(v, "b n (h d) -> b h n d", h=self.n_heads)
        out = F.scaled_dot_product_attention(q, k, v)
        out = rearrange(out, "b h n d -> b n (h d)", h=self.n_heads)
        out = self.to_out(out)
        return out


class CrossAttention(nn.Module):
    def __init__(self, input_dim: int, cond_dim: int, out_dim: int, d_head: int):
        super().__init__()
        self.input_dim = input_dim
        self.cond_dim = cond_dim
        self.out_dim = out_dim
        self.d_head = d_head
        self.n_heads = self.out_dim // self.d_head
        self.d_attn = self.out_dim
        self.pre_norm = nn.LayerNorm(input_dim)
        self.cond_pre_norm = nn.LayerNorm(cond_dim)
        self.q_proj = nn.Linear(input_dim, self.d_attn, bias=False)
        self.kv_proj = nn.Linear(cond_dim, 2 * self.d_attn, bias=False)
        self.q_norm = RMSNorm(self.d_attn, eps=1e-6)
        self.k_norm = RMSNorm(self.d_attn, eps=1e-6)
        self.to_out = nn.Linear(self.d_attn, self.out_dim)

    def forward(self, x: torch.Tensor, cond: torch.Tensor, cond_mask: Optional[torch.Tensor] = None):
        x = self.pre_norm(x)
        cond = self.cond_pre_norm(cond)
        q = self.q_proj(x)
        k, v = self.kv_proj(cond).chunk(dim=-1, chunks=2)
        q = self.q_norm(q)
        k = self.k_norm(k)
        q = rearrange(q, "b n (h d) -> b h n d", h=self.n_heads)
        k = rearrange(k, "b n (h d) -> b h n d", h=self.n_heads)
        v = rearrange(v, "b n (h d) -> b h n d", h=self.n_heads)
        if cond_mask is not None:
            cond_mask = cond_mask.unsqueeze(1).unsqueeze(1)
        out = F.scaled_dot_product_attention(q, k, v, attn_mask=cond_mask)
        out = rearrange(out, "b h n d -> b n (h d)", h=self.n_heads)
        out = self.to_out(out)
        return out


class Upsample(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x: torch.Tensor, *args, **kwargs):
        x = F.interpolate(x, scale_factor=2, mode="nearest")
        return x


class Downsample(nn.Module):
    def __init__(self):
        super().__init__()
        self.op = nn.AvgPool2d(kernel_size=2, stride=2)

    def forward(self, x: torch.Tensor, *args, **kwargs):
        return self.op(x)


class Sequential(nn.Sequential):
    def forward(self, x, *args, **kwargs):
        for layer in self:
            x = layer(x, *args, **kwargs)
        return x


class ResBlock(nn.Module):
    def __init__(
        self,
        channels: int,
        dropout: float,
        out_channels: Optional[int] = None,
        mid_channels: Optional[int] = None,
        use_conv: bool = False,
        up: bool = False,
        down: bool = False,
        norm_groups: int = 32,
    ):
        super().__init__()
        self.channels = channels
        self.dropout = dropout
        self.out_channels = out_channels or channels
        self.mid_channels = mid_channels or self.out_channels
        self.use_conv = use_conv
        conv_block = [nn.SiLU(), nn.Conv2d(channels, self.mid_channels, 3, padding=1)]
        self.in_layers = nn.ModuleList([nn.GroupNorm(num_channels=channels, num_groups=norm_groups), *conv_block])
        self.in_layers_len = len(self.in_layers)
        self.updown = up or down
        if up:
            self.h_upd = Upsample()
            self.x_upd = Upsample()
        elif down:
            self.h_upd = Downsample()
            self.x_upd = Downsample()
        else:
            self.h_upd = self.x_upd = nn.Identity()
        # override num groups for the shrunk model
        norm_groups = max(norm_groups * self.mid_channels // self.out_channels, 1)
        self.out_layers = nn.ModuleList(
            [
                nn.GroupNorm(num_channels=self.mid_channels, num_groups=norm_groups),
                nn.SiLU(),
                nn.Dropout(p=dropout),
                zero_module(nn.Conv2d(self.mid_channels, self.out_channels, 3, padding=1)),
            ]
        )
        self.out_layers_len = len(self.out_layers)
        if use_conv:
            self.skip_connection = nn.Conv2d(channels, self.out_channels, 1)
        else:
            if self.out_channels == channels:
                self.skip_connection = nn.Identity()
            else:
                self.skip_connection = nn.Conv2d(channels, self.out_channels, 1)

    def forward(self, x: torch.Tensor, *args, **kwargs):
        h = x
        for i in range(self.in_layers_len - 1):
            h = self.in_layers[i](h)
        if self.updown:
            h = self.h_upd(h)
            x = self.x_upd(x)
        h = self.in_layers[self.in_layers_len - 1](h)
        for i in range(self.out_layers_len):
            h = self.out_layers[i](h)
        out = self.skip_connection(x) + h
        return out


class UNet(nn.Module):
    def __init__(self, in_dim: int, cond_dim: int, channels: List[int], attns: List[int], middle_attns: int = 0):
        super().__init__()
        assert len(attns) == len(channels) - 1
        self.in_dim = in_dim
        self.down_blocks = nn.ModuleList([])
        self.up_blocks = nn.ModuleList([])
        ch = channels[0]
        in_chs = [ch]
        self.in_block = nn.Conv2d(in_dim, channels[0], kernel_size=3, padding=1)
        for i, (ch, out_ch) in enumerate(zip(channels[:-1], channels[1:])):
            layer = [ResBlock(ch, 0.0, out_ch, out_ch)]
            if attns[i] > 0:
                layer.append(SpatialToSeq())
                for _ in range(attns[i]):
                    layer.append(SelfAttention(out_ch, out_ch, 64))
                    layer.append(CrossAttention(out_ch, cond_dim, out_ch, 64))
                layer.append(SeqToSpatial())
            layer.append(ResBlock(out_ch, 0.0, out_ch, out_ch, down=True))
            self.down_blocks.append(Sequential(*layer))
            in_chs.append(out_ch)
        layer = [ResBlock(out_ch, 0.0, out_ch, out_ch)]
        if middle_attns > 0:
            layer.append(SpatialToSeq())
            for _ in range(middle_attns):
                layer.append(SelfAttention(out_ch, out_ch, 64))
                layer.append(CrossAttention(out_ch, cond_dim, out_ch, 64))
            layer.append(SeqToSpatial())
        layer.append(ResBlock(out_ch, 0.0, out_ch, out_ch))
        self.middle_block = Sequential(*layer)
        for i, (ch1, ch2) in enumerate(zip(channels[::-1][:-1], channels[::-1][1:])):
            i = len(attns) - 1 - i
            ch = ch1 + in_chs.pop()
            out_ch = ch2
            layer = [ResBlock(ch, 0.0, out_ch, out_ch)]
            if attns[i] > 0:
                layer.append(SpatialToSeq())
                for _ in range(attns[i]):
                    layer.append(SelfAttention(out_ch, out_ch, 64))
                    layer.append(CrossAttention(out_ch, cond_dim, out_ch, 64))
                layer.append(SeqToSpatial())
            layer.append(ResBlock(out_ch, 0.0, out_ch, out_ch, up=True))
            self.up_blocks.append(Sequential(*layer))
        self.out_block = zero_module(nn.Conv2d(out_ch, in_dim, kernel_size=3, padding=1))

    def forward(self, x: torch.Tensor, cond: torch.Tensor, cond_mask: Optional[torch.Tensor] = None):
        res = []
        x = self.in_block(x)
        for layer in self.down_blocks:
            x = layer(x, cond, cond_mask)
            res.append(x)
        x = self.middle_block(x, cond, cond_mask)
        for layer in self.up_blocks:
            x = torch.cat([x, res.pop()], dim=1)
            x = layer(x, cond, cond_mask)
        x = self.out_block(x)
        return x


def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--batch_size", type=int, default=32)
    parser.add_argument("--num_iterations", type=int, default=200)
    parser.add_argument("--use_ddp", action="store_true")
    parser.add_argument("--use_fsdp", action="store_true")
    parser.add_argument("--use_compile", action="store_true")
    parser.add_argument("--use_controlnet", action="store_true")
    parser.add_argument("--disable_fa2", action="store_true")
    args = parser.parse_args()
    return args


def main(rank, world_size, args):
    setup(rank, world_size)
    assert not (args.use_ddp and args.use_fsdp)
    device = torch.device(f"cuda:{rank}")
    dtype = torch.float16
    cond_dim = 1024
    cond_len = 128
    model = UNet(4, cond_dim, [128, 256, 512, 512], [2, 2, 2], 2).to(device)
    if args.use_compile:
        print("Trying compile.")
        model.compile(mode="default", dynamic=False)
    if args.use_fsdp:
        model = FSDP(
            module=model,
            device_id=rank,
            use_orig_params=args.use_compile or args.use_controlnet,
            sharding_strategy=ShardingStrategy.HYBRID_SHARD,
            forward_prefetch=True,
            limit_all_gathers=True,
            auto_wrap_policy=ModuleWrapPolicy({nn.Sequential}),
            mixed_precision=MixedPrecision(
                param_dtype=dtype,
                buffer_dtype=dtype,
                reduce_dtype=dtype,
            ),
        )
        loss_amp_context = torch.amp.autocast("cuda", dtype=dtype, enabled=True)
        model_amp_context = nullcontext()
        scaler = ShardedGradScaler(enabled=dtype == torch.float16)
    else:
        if args.use_ddp:
            model = DistributedDataParallel(
                model, broadcast_buffers=False, gradient_as_bucket_view=True, find_unused_parameters=False
            )
        loss_amp_context = torch.amp.autocast("cuda", dtype=dtype, enabled=True)
        model_amp_context = loss_amp_context
        scaler = torch.amp.GradScaler("cuda", enabled=dtype == torch.float16)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5, betas=(0.9, 0.98))
    iterator = range(args.num_iterations)
    if rank == 0:
        iterator = tqdm(iterator, total=args.num_iterations)
    for _ in iterator:
        x = torch.randn(args.batch_size, 4, 64, 64, device=device)
        cond = torch.randn(args.batch_size, cond_len, cond_dim, device=device)
        cond_mask = torch.randn(args.batch_size, cond_len, device=device) > 0
        with model_amp_context:
            out = model(x, cond, cond_mask)
        with loss_amp_context:
            loss = F.mse_loss(x, out)
        loss_test = loss.clone()  # Ensure local loss is not changed by allreduce
        torch.distributed.all_reduce(loss_test)  # Check if any gpu has NaN loss
        if rank == 0:
            iterator.set_description(f"Loss: {loss_test.item()}")
        if torch.isnan(loss_test):
            raise ValueError("NaN loss.")
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
        optimizer.zero_grad()
    cleanup()


if __name__ == "__main__":
    args = parse_args()
    world_size = torch.cuda.device_count()
    torch.multiprocessing.freeze_support()
    if world_size == 1:
        main(0, world_size, args)
    else:
        torch.multiprocessing.spawn(fn=main, args=(world_size, args), nprocs=world_size, join=True)
```
Command:
```bash
TORCH_LOGS=recompiles CUDA_VISIBLE_DEVICES=4,6 python compile_debug.py --use_fsdp --use_compile
```
Output:
> Trying compile.
> Trying compile.
> 0%| | 0/200 [00:00<?, ?it/s][rank0]:W1024 16:27:32.344000 1770485 site-packages/torch/_logging/_internal.py:1081] [0/0] Profiler function <class 'torch.autograd.profiler.record_function'> will be ignored
> [rank1]:W1024 16:27:32.344000 1770486 site-packages/torch/_logging/_internal.py:1081] [0/0] Profiler function <class 'torch.autograd.profiler.record_function'> will be ignored
> [rank0]:V1024 16:27:39.578000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/1] [__recompiles] Recompiling function forward in /home/user/compile_debug.py:150
> [rank0]:V1024 16:27:39.578000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/1] [__recompiles] triggered by the following guard failure(s):
> [rank0]:V1024 16:27:39.578000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/1] [__recompiles] - 1/0: tensor 'L['x']' size mismatch at index 1. expected 128, actual 256
> [rank1]:V1024 16:27:39.655000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/1] [__recompiles] Recompiling function forward in /home/user/compile_debug.py:150
> [rank1]:V1024 16:27:39.655000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/1] [__recompiles] triggered by the following guard failure(s):
> [rank1]:V1024 16:27:39.655000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/1] [__recompiles] - 1/0: tensor 'L['x']' size mismatch at index 1. expected 128, actual 256
> [rank1]:V1024 16:27:44.482000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/2] [__recompiles] Recompiling function forward in /home/user/compile_debug.py:150
> [rank1]:V1024 16:27:44.482000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/2] [__recompiles] triggered by the following guard failure(s):
> [rank1]:V1024 16:27:44.482000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/2] [__recompiles] - 1/1: tensor 'L['x']' size mismatch at index 1. expected 256, actual 512
> [rank1]:V1024 16:27:44.482000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/2] [__recompiles] - 1/0: tensor 'L['x']' size mismatch at index 1. expected 128, actual 512
> [rank0]:V1024 16:27:44.483000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/2] [__recompiles] Recompiling function forward in /home/user/compile_debug.py:150
> [rank0]:V1024 16:27:44.483000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/2] [__recompiles] triggered by the following guard failure(s):
> [rank0]:V1024 16:27:44.483000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/2] [__recompiles] - 1/1: tensor 'L['x']' size mismatch at index 1. expected 256, actual 512
> [rank0]:V1024 16:27:44.483000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/2] [__recompiles] - 1/0: tensor 'L['x']' size mismatch at index 1. expected 128, actual 512
> [rank0]:V1024 16:27:48.856000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/3] [__recompiles] Recompiling function forward in /home/user/compile_debug.py:150
> [rank0]:V1024 16:27:48.856000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/3] [__recompiles] triggered by the following guard failure(s):
> [rank0]:V1024 16:27:48.856000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/3] [__recompiles] - 1/2: tensor 'L['x']' size mismatch at index 2. expected 16, actual 8
> [rank0]:V1024 16:27:48.856000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/3] [__recompiles] - 1/1: tensor 'L['x']' size mismatch at index 1. expected 256, actual 512
> [rank0]:V1024 16:27:48.856000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/3] [__recompiles] - 1/0: tensor 'L['x']' size mismatch at index 1. expected 128, actual 512
> [rank1]:V1024 16:27:48.877000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/3] [__recompiles] Recompiling function forward in /home/user/compile_debug.py:150
> [rank1]:V1024 16:27:48.877000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/3] [__recompiles] triggered by the following guard failure(s):
> [rank1]:V1024 16:27:48.877000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/3] [__recompiles] - 1/2: tensor 'L['x']' size mismatch at index 2. expected 16, actual 8
> [rank1]:V1024 16:27:48.877000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/3] [__recompiles] - 1/1: tensor 'L['x']' size mismatch at index 1. expected 256, actual 512
> [rank1]:V1024 16:27:48.877000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/3] [__recompiles] - 1/0: tensor 'L['x']' size mismatch at index 1. expected 128, actual 512
> [rank0]:V1024 16:27:53.546000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/4] [__recompiles] Recompiling function forward in /home/user/compile_debug.py:150
> [rank0]:V1024 16:27:53.546000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/4] [__recompiles] triggered by the following guard failure(s):
> [rank0]:V1024 16:27:53.546000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/4] [__recompiles] - 1/3: tensor 'L['x']' size mismatch at index 1. expected 512, actual 1024
> [rank0]:V1024 16:27:53.546000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/4] [__recompiles] - 1/2: tensor 'L['x']' size mismatch at index 1. expected 512, actual 1024
> [rank0]:V1024 16:27:53.546000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/4] [__recompiles] - 1/1: tensor 'L['x']' size mismatch at index 1. expected 256, actual 1024
> [rank0]:V1024 16:27:53.546000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/4] [__recompiles] - 1/0: tensor 'L['x']' size mismatch at index 1. expected 128, actual 1024
> [rank1]:V1024 16:27:53.910000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/4] [__recompiles] Recompiling function forward in /home/user/compile_debug.py:150
> [rank1]:V1024 16:27:53.910000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/4] [__recompiles] triggered by the following guard failure(s):
> [rank1]:V1024 16:27:53.910000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/4] [__recompiles] - 1/3: tensor 'L['x']' size mismatch at index 1. expected 512, actual 1024
> [rank1]:V1024 16:27:53.910000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/4] [__recompiles] - 1/2: tensor 'L['x']' size mismatch at index 1. expected 512, actual 1024
> [rank1]:V1024 16:27:53.910000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/4] [__recompiles] - 1/1: tensor 'L['x']' size mismatch at index 1. expected 256, actual 1024
> [rank1]:V1024 16:27:53.910000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/4] [__recompiles] - 1/0: tensor 'L['x']' size mismatch at index 1. expected 128, actual 1024
> [rank0]:V1024 16:27:58.484000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/5] [__recompiles] Recompiling function forward in /home/user/compile_debug.py:150
> [rank0]:V1024 16:27:58.484000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/5] [__recompiles] triggered by the following guard failure(s):
> [rank0]:V1024 16:27:58.484000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/5] [__recompiles] - 1/4: tensor 'L['x']' size mismatch at index 2. expected 8, actual 16
> [rank0]:V1024 16:27:58.484000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/5] [__recompiles] - 1/3: tensor 'L['x']' size mismatch at index 1. expected 512, actual 1024
> [rank0]:V1024 16:27:58.484000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/5] [__recompiles] - 1/2: tensor 'L['x']' size mismatch at index 1. expected 512, actual 1024
> [rank0]:V1024 16:27:58.484000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/5] [__recompiles] - 1/1: tensor 'L['x']' size mismatch at index 1. expected 256, actual 1024
> [rank0]:V1024 16:27:58.484000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/5] [__recompiles] - 1/0: tensor 'L['x']' size mismatch at index 1. expected 128, actual 1024
> [rank1]:V1024 16:27:58.940000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/5] [__recompiles] Recompiling function forward in /home/user/compile_debug.py:150
> [rank1]:V1024 16:27:58.940000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/5] [__recompiles] triggered by the following guard failure(s):
> [rank1]:V1024 16:27:58.940000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/5] [__recompiles] - 1/4: tensor 'L['x']' size mismatch at index 2. expected 8, actual 16
> [rank1]:V1024 16:27:58.940000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/5] [__recompiles] - 1/3: tensor 'L['x']' size mismatch at index 1. expected 512, actual 1024
> [rank1]:V1024 16:27:58.940000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/5] [__recompiles] - 1/2: tensor 'L['x']' size mismatch at index 1. expected 512, actual 1024
> [rank1]:V1024 16:27:58.940000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/5] [__recompiles] - 1/1: tensor 'L['x']' size mismatch at index 1. expected 256, actual 1024
> [rank1]:V1024 16:27:58.940000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/5] [__recompiles] - 1/0: tensor 'L['x']' size mismatch at index 1. expected 128, actual 1024
> [rank0]:V1024 16:28:03.509000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/6] [__recompiles] Recompiling function forward in /home/user/compile_debug.py:150
> [rank0]:V1024 16:28:03.509000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/6] [__recompiles] triggered by the following guard failure(s):
> [rank0]:V1024 16:28:03.509000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/6] [__recompiles] - 1/5: tensor 'L['x']' size mismatch at index 1. expected 1024, actual 512
> [rank0]:V1024 16:28:03.509000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/6] [__recompiles] - 1/4: tensor 'L['x']' size mismatch at index 1. expected 1024, actual 512
> [rank0]:V1024 16:28:03.509000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/6] [__recompiles] - 1/3: tensor 'L['x']' size mismatch at index 2. expected 8, actual 32
> [rank0]:V1024 16:28:03.509000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/6] [__recompiles] - 1/2: tensor 'L['x']' size mismatch at index 2. expected 16, actual 32
> [rank0]:V1024 16:28:03.509000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/6] [__recompiles] - 1/1: tensor 'L['x']' size mismatch at index 1. expected 256, actual 512
> [rank0]:V1024 16:28:03.509000 1770485 site-packages/torch/_dynamo/guards.py:2813] [1/6] [__recompiles] - 1/0: tensor 'L['x']' size mismatch at index 1. expected 128, actual 512
> [rank1]:V1024 16:28:04.811000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/6] [__recompiles] Recompiling function forward in /home/user/compile_debug.py:150
> [rank1]:V1024 16:28:04.811000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/6] [__recompiles] triggered by the following guard failure(s):
> [rank1]:V1024 16:28:04.811000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/6] [__recompiles] - 1/5: tensor 'L['x']' size mismatch at index 1. expected 1024, actual 512
> [rank1]:V1024 16:28:04.811000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/6] [__recompiles] - 1/4: tensor 'L['x']' size mismatch at index 1. expected 1024, actual 512
> [rank1]:V1024 16:28:04.811000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/6] [__recompiles] - 1/3: tensor 'L['x']' size mismatch at index 2. expected 8, actual 32
> [rank1]:V1024 16:28:04.811000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/6] [__recompiles] - 1/2: tensor 'L['x']' size mismatch at index 2. expected 16, actual 32
> [rank1]:V1024 16:28:04.811000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/6] [__recompiles] - 1/1: tensor 'L['x']' size mismatch at index 1. expected 256, actual 512
> [rank1]:V1024 16:28:04.811000 1770486 site-packages/torch/_dynamo/guards.py:2813] [1/6] [__recompiles] - 1/0: tensor 'L['x']' size mismatch at index 1. expected 128, actual 512
> Loss: 1.9992671012878418: 0%| | 0/200 [00:38<?, ?it/s]/home/user/anaconda3/envs/torch25/lib/python3.10/site-packages/torch/autograd/graph.py:825: UserWarning: cuDNN SDPA backward got grad_output.strides() != output.strides(), attempting to materialize a grad_output with matching strides... (Triggered internally at ../aten/src/ATen/native/cudnn/MHA.cpp:674.)
> return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
> /home/user/anaconda3/envs/torch25/lib/python3.10/site-packages/torch/autograd/graph.py:825: UserWarning: cuDNN SDPA backward got grad_output.strides() != output.strides(), attempting to materialize a grad_output with matching strides... (Triggered internally at ../aten/src/ATen/native/cudnn/MHA.cpp:674.)
> return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
> Loss: 2.003182888031006: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 200/200 [01:25<00:00, 2.34it/s]
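The recompile pattern in the log lines up with the UNet's per-stage shapes: each down/up block sees a different channel count and resolution (128 → 256 → 512 → 1024 channels, 64 → 32 → 16 → 8 spatial and back), and with `dynamic=False` a function guarded on static shapes recompiles once per distinct shape. A toy model of such a shape-keyed compile cache (plain Python, not Dynamo internals; the shapes below are taken from the guard failures above):

```python
compile_cache = {}


def compiled_forward(shape):
    """Recompile whenever the guarded input shape changes,
    mimicking static-shape guards under dynamic=False."""
    if shape not in compile_cache:
        compile_cache[shape] = f"graph for {shape}"  # one 'recompilation' per shape
    return compile_cache[shape]


# (channels, spatial) pairs roughly matching the guard failures in the log:
for shape in [(128, 64), (256, 32), (512, 16), (512, 8), (1024, 8), (1024, 16), (512, 32)]:
    compiled_forward(shape)

print(len(compile_cache))  # 7 distinct shapes -> 7 compilations
```

This is why the wrapped blocks being traced as one function reach the cache-size limit quickly: every stage of the UNet contributes a new shape, so the compile count grows with depth instead of staying at one.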
### Versions
PyTorch version: 2.5.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.30.0
Libc version: glibc-2.35
Python version: 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:36:39) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.4.210-39.1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB
Nvidia driver version: 550.54.14
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 120
On-line CPU(s) list: 0-119
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7662 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 120
Stepping: 0
BogoMIPS: 3992.45
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities
Virtualization: AMD-V
L1d cache: 7.5 MiB (120 instances)
L1i cache: 7.5 MiB (120 instances)
L2 cache: 60 MiB (120 instances)
L3 cache: 1.9 GiB (120 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-29
NUMA node1 CPU(s): 30-59
NUMA node2 CPU(s): 60-89
NUMA node3 CPU(s): 90-119
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==5.0.4
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] open-clip-torch==2.24.0
[pip3] pytorch-warmup==0.1.1
[pip3] torch==2.5.0
[pip3] torch-fidelity==0.3.0
[pip3] torch-model-archiver==0.11.1
[pip3] torch-tb-profiler==0.4.3
[pip3] torch-workflow-archiver==0.2.14
[pip3] torchaudio==2.5.0
[pip3] torchdata==0.7.1
[pip3] torchmetrics==1.4.0.post0
[pip3] torchsde==0.2.6
[pip3] torchserve==0.11.1
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] open-clip-torch 2.24.0 pypi_0 pypi
[conda] torch 2.5.0 pypi_0 pypi
[conda] torch-fidelity 0.3.0 pypi_0 pypi
[conda] torch-model-archiver 0.11.1 pypi_0 pypi
[conda] torch-tb-profiler 0.4.3 pypi_0 pypi
[conda] torch-workflow-archiver 0.2.14 pypi_0 pypi
[conda] torchaudio 2.5.0 pypi_0 pypi
[conda] torchdata 0.7.1 pypi_0 pypi
[conda] torchmetrics 1.4.0.post0 pypi_0 pypi
[conda] torchsde 0.2.6 pypi_0 pypi
[conda] torchserve 0.11.1 pypi_0 pypi
[conda] torchvision 0.20.0 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang @penguinwu @ezyang @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec | oncall: distributed,triaged,module: fsdp,oncall: pt2,module: dynamic shapes,module: dynamo,pt2d-triage-nov2024 | low | Critical |
2,611,695,665 | flutter | removeRoute unresolved future | ### Steps to reproduce
In some scenarios, developers may want to use `removeRoute()` instead of `pop()` to remove a nested route which is not the first one. By calling `removeRoute()` alone, any code that awaits the corresponding call to `push()` will never complete: the future will never be resolved.
```dart
final Object? result = await Navigator.of(context).push() // By using removeRoute(), this will never end
```
To fix this issue, the developer should maybe call `route.didPop(any_data_or_null)` after any `removeRoute(route)` call. I don't know if this solution is 'perfect' and 'enough'. That's why I created this issue: to point out this unresolved behavior and get a solution.
The documentation does mention this limitation but doesn't provide any solution to release the futures.

My potential solution is to add, after `Navigator.of(context).removeRoute(route)`, a call to `route.didPop()`. If I'm right and didPop is not a bad idea, maybe the documentation should be updated to inform developers about this.
1. Launch app
2. Click on "open test page"
3. Click on "Remove current route"
4. Observe console logs
### Expected results
In logs :
```
Open test page
Test page is closed: null
```
### Actual results
In logs :
```
Open test page
```
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(
const MaterialApp(
home: Home(),
),
);
}
class Home extends StatelessWidget {
const Home({super.key});
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text('Home'),
),
body: Center(
child: SingleChildScrollView(
child: Column(
mainAxisSize: MainAxisSize.min,
children: <Widget>[
ElevatedButton(
onPressed: () async {
print("Open test page");
final Object? result = await Navigator.of(context).push(
MaterialPageRoute(
settings: const RouteSettings(name: TestRoute.routeName),
builder: (BuildContext context) => const TestRoute(),
),
);
print("Test page is closed: $result");
},
child: const Text('Open test page'),
),
],
),
),
),
);
}
}
class TestRoute extends StatelessWidget {
const TestRoute({super.key});
static const String routeName = 'test-route';
Route<dynamic>? currentRoute(BuildContext context) {
Route<dynamic>? result;
Navigator.of(context).popUntil((Route<dynamic> route) {
result = route;
return true;
});
return result;
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text('Test page'),
),
body: Center(
child: ElevatedButton(
onPressed: () {
final Route<dynamic> route = currentRoute(context)!;
Navigator.of(context).removeRoute(route);
},
child: const Text('Remove current route'),
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
Not needed
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.3, on macOS 15.0.1 24A348 darwin-arm64, locale fr-FR)
• Flutter version 3.24.3 on channel stable at /Users/earminjon/fvm/versions/3.24.3
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (6 weeks ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/earminjon/Library/Android/Sdk
• Platform android-35, build-tools 35.0.0
• ANDROID_HOME = /Users/earminjon/Library/Android/Sdk
• Java binary at: /Users/earminjon/Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16A242d
• CocoaPods version 1.14.3
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Users/earminjon/Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] Android Studio (version 2023.2)
• Android Studio at /Users/perso/Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.9+0-17.0.9b1087.7-11185874)
[✓] IntelliJ IDEA Ultimate Edition (version 2024.2.3)
• IntelliJ at /Users/earminjon/Applications/IntelliJ IDEA Ultimate.app
• Flutter plugin version 82.0.3
• Dart plugin version 242.22855.32
[✓] Connected device (4 available)
• sdk gphone64 arm64 (mobile) • emulator-5554 • android-arm64 • Android 15 (API 35) (emulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.0.1 24A348 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.0.1 24A348 darwin-arm64
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| framework,f: routes,has reproducible steps,P2,workaround available,team-framework,triaged-framework,found in release: 3.24,found in release: 3.27 | low | Minor |
2,611,750,056 | godot | OS.has_feature doesn't work with custom features and Small Deploy | ### Tested versions
- Reproduced in v3.6.stable.official [de2f0f147]
### System information
Ubuntu Linux
### Issue description
When debugging on an Android device with the "Small Deploy with Network Filesystem" debug option checked, all calls to has_feature("some_custom_feature") return false.
Without this option checked, it works as expected.
### Steps to reproduce
- create android export;
- create custom feature;
- check Debug > Deploy with remote debug
- check Debug > Small Deploy with Network Filesystem
- run on android device
- call OS.has_feature("some_feature") > returns False, should return True
- call OS.has_feature("Android") > returns True, as expected
### Minimal reproduction project (MRP)
[test_custom_feature.zip](https://github.com/user-attachments/files/17508374/test_custom_feature.zip)
| bug,needs testing,topic:export | low | Critical |
2,611,762,721 | Python | adding optimized code for largest rectangle histogram | ### Feature description
You are given an array of integers heights where heights[i] represents the height of a bar. The width of each bar is 1.
Return the area of the largest rectangle that can be formed among the bars.
I propose an optimized O(n) solution for this problem using a stack.
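A minimal sketch of such a stack-based approach (the function name, signature, and implementation details here are my own illustration, not existing repository code):

```python
def largest_rectangle_area(heights: list[int]) -> int:
    """Return the area of the largest rectangle in the histogram, O(n) time."""
    stack: list[int] = []  # indices of bars kept in non-decreasing height order
    max_area = 0
    extended = heights + [0]  # sentinel bar of height 0 flushes the stack at the end
    for i, h in enumerate(extended):
        # Pop every bar taller than the current one; the popped bar's rectangle
        # spans from the index after the new stack top up to (but excluding) i.
        while stack and extended[stack[-1]] > h:
            height = extended[stack.pop()]
            width = i if not stack else i - stack[-1] - 1
            max_area = max(max_area, height * width)
        stack.append(i)
    return max_area
```

For example, `largest_rectangle_area([2, 1, 5, 6, 2, 3])` returns `10`, the rectangle of height 5 spanning the two bars of heights 5 and 6.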
kindly assign me this issue | enhancement | medium | Minor |
2,611,772,853 | pytorch | autograd.Function HOP (maybe others) have name conflict when lifting freevars | We should land https://github.com/pytorch/pytorch/pull/129817
I think someone ran into this internally ([internal ref](https://fb.workplace.com/groups/1075192433118967/permalink/1530131980958341/))
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec @ydwu4 @bdhirsh @yf225 | triaged,oncall: pt2,module: dynamo,module: higher order operators,module: pt2-dispatcher,dynamo-autograd-function | low | Minor |
2,611,790,060 | PowerToys | Dark Mode Overlay (DMO) | ### Description of the new feature / enhancement
1. It shall detect the currently active window/app and invert colours for those apps that do not provide any dark mode feature by default, and it can be toggled on or off by a shortcut key combination.
2. A later version might be smarter (detect icons and images and not apply the filter to them)
3. It should be as lightweight as possible in order not to impact the performance of the affected apps, similar to Windows' current colour-invert feature, which in my experience has little to no impact on performance; however, it should also work on individual windows
4. The feature can be extended to record user preferences per process name and remember the last decision taken by the user, so if an app was overlaid by Night OWL and then Windows was restarted, when the app is opened again it should be in dark/inverted-colour mode
5. The feature can be further extended to allow the user to select a color palette for the overlay colors (it shouldn't be black or white, but could be grey, for example)
6. The feature should provide the option to start automatically when Windows boots, similar to the PowerToys Awake feature
### Scenario when this would be used?
1. I am an embedded software developer with such bad luck that my toolchain is 99% white, and since I am a night OWL, I don't like the idea of white light shining onto my poor eyes while working (shout out to all apps that implement a native dark mode or theming options!)
2. Since these tools have no intention of implementing a dark mode feature soon, I have been relying on other 3rd-party tools that kinda do the job, but there is always a missing piece (app performance is affected, the license is costly especially for corporate environments, icons or images are not detected, they kinda force you to subscribe to or use other utilities that are actually not needed and whose functionality is unrelated to dark mode, ...)
3. Windows deserves a smarter open-source tool for Night OWLs like me
### Supporting information
_No response_ | Needs-Triage | low | Major |
2,611,805,755 | rust | rustdoc: Show tag / badge / icon / emoji in the sidebar for unsafe, deprecated and unstable items | Currently unsafe, deprecated and unstable items are shown with different labels in a module's list of items:

I think it would be nice to also add these to the sidebar for types and traits.
Maybe this requires icons for "Deprecated" and "Experimental" as well, so as not to make the sidebar too wide.
2,611,825,896 | flutter | Unexpected Behavior When Terminating Flutter Commands with Ctrl + C | ### Type of Request
bug
### Infrastructure Environment
System Type: 64-bit operating system, x64-based processor.
Operating System: Windows 11 Pro.
Terminal Application: Command Prompt / PowerShell.
Frequency of the Issue: This has never worked for me since I started using Flutter.

flutter doctor result if needed:

### What is happening?
When running a Flutter command in the terminal, if the command starts a task and I press Ctrl + C, the terminal prompts me with:
**Terminate batch job (Y/N)?**
- If I press Y, the batch job stops as expected.
- If I press N, the prompt appears to ignore my input, and the batch job still stops unexpectedly, rather than continuing execution.
### Steps to reproduce
1. Open a terminal window.
2. Execute any Flutter command that starts a long-running task (e.g., flutter run, flutter build, etc.).
3. While the task is running, press Ctrl + C.
4. When prompted with Terminate batch job (Y/N)? press N.
5. Observe that the batch job stops instead of continuing.
### Expected results
When pressing N in response to the prompt, the Flutter command should continue executing as intended. | P3,team-tool,triaged-tool | low | Critical |
2,611,863,549 | rust | Can't find `thiserror_impl` crate with `-Z sanitizer` |
I tried this code:
```rust
// (empty lib.rs)
```
Cargo.toml:
```toml
[package]
name = "nightly-cant-find-crate-test"
version = "0.1.0"
edition = "2021"
[dependencies]
thiserror = "1.0.65"
```
Compiling with `env RUSTFLAGS="-Z sanitizer=address" cargo +nightly build`
I expected to see this happen: Library compiles
Instead, this happened: Rust fails to find the crate `thiserror_impl`, even though it's a transitive dependency:
```
> env RUSTFLAGS="-Z sanitizer=address" cargo +nightly build
Compiling proc-macro2 v1.0.89
Compiling unicode-ident v1.0.13
Compiling thiserror v1.0.65
Compiling quote v1.0.37
Compiling syn v2.0.85
Compiling thiserror-impl v1.0.65
error[E0463]: can't find crate for `thiserror_impl`
--> /home/col/.cargo/registry/src/index.crates.io-6f17d22bba15001f/thiserror-1.0.65/src/lib.rs:278:9
|
278 | pub use thiserror_impl::*;
| ^^^^^^^^^^^^^^ can't find crate
For more information about this error, try `rustc --explain E0463`.
```
The crate builds without errors if I omit the sanitizer option, or replace it with something else like `-Zub-checks`. It also obviously builds with stable Rust.
### Meta
`rustc --version --verbose`:
```
> rustup run nightly rustc --version
rustc 1.84.0-nightly (4f2f477fd 2024-10-23)
```
```
> cargo --version
cargo 1.82.0 (8f40fc59f 2024-08-21)
```
| T-compiler,A-sanitizers,C-bug,D-confusing | low | Critical |
2,611,865,300 | react | Bug: duplicate keys cause an extra element to render, while the console shows the correct number of items | When rendering a list with duplicate keys, React incorrectly renders an extra item, which is interactable, despite the console logging the correct number of items. This behavior occurs even though the data source has the correct number of items.
React version: ^18.0.0
## Steps To Reproduce
1. `$ npx create-react-app my-app`
2. `$ cd my-app`
3. Replace the contents of App.js with the code listed below
4. `$ npm run start`
5. Open the page in Google Chrome or Edge or any browser
6. Click one of the "Add record" button
```js
import { useState } from "react";
const initialRecords = [
{ id: 1, name: "John Wick", email: "john@wick" },
{ id: 2, name: "Hermione Granger", email: "hermione@granger" },
{ id: 3, name: "Harry Potter", email: "harry@potter" },
{ id: 3, name: "ID Dup", email: "id@duplicate" },
];
export default function App() {
const [records, setRecords] = useState(initialRecords);
const handleAddClick = () => {
const newRecord = { id: 4, name: "New Record", email: "new@record" };
setRecords((prev) => [newRecord, ...prev]);
};
return (
<div>
<button onClick={handleAddClick}>Add record</button>
{records.map(({ id, name, email }) => (
<div key={id}>
<p>{id}</p>
<p>{email}</p>
<p>{name}</p>
</div>
))}
</div>
);
}
```
Link to code example: https://codesandbox.io/p/sandbox/ylr42w
## The current behavior
1. React renders an extra (phantom) item that duplicates an existing item (due to the duplicate key), and the count of extra items (id: 3) keeps increasing on every click.
2. This extra item is fully interactable (e.g., it responds to click events).
3. The console logs the correct number of items (5), but 6 items appear on the screen
## The expected behavior
1. The number of rendered items should match the actual number of items in the data array, regardless of key duplication.
2. If keys are not unique, React should throw a warning or handle it predictably (e.g., not render an extra item). | Status: Unconfirmed,Resolution: Stale | low | Critical |
2,611,874,859 | PowerToys | Mouse Without Borders - Clearer communication of connection status | ### Description of the new feature / enhancement
Currently the connection status of PCs in MWB is communicated only (?) via a colorful frame around the icon on the setup page. The meaning of these colors is only explained on the "further information" page of MWB.
Unless the connection is the obvious "green" (which means "working", so nobody cares about the details), troubleshooting is very tedious. At the very least, the tooltip of a specific PC icon should not just repeat the host name (which is literally printed on the button anyway), but state the connection status as clear text. Preferably the connection status should also be printed directly underneath the icon whenever it is anything other than "Connected".
### Scenario when this would be used?
Whenever the connection does not work as expected, as much information as possible is needed for troubleshooting. A few colorful pixels with an obscure color-code mapping table are not that.
It took me several days of trying to "fix" a perfectly working network because, when I thought I was looking at a red "error" frame, it turned out to be a brown "invalid key" frame, which I only noticed by actually checking the hex code of the frame!
### Supporting information
_No response_ | Needs-Triage | low | Critical |
2,611,877,864 | rust | "single tab mode" for rustdoc search | currently, when performing a single-element search (i.e. a name-based search), there are 3 tabs:
1. In Names
2. In Parameters
3. In Return Types
I basically never use the other two, instead explicitly using type-based search with `->` if I want that info.
These extra tabs add visual clutter and also slow down the searching process.
I propose an option in the settings panel to disable these extra tabs.
related: https://github.com/rust-lang/rust/issues/131156 | C-enhancement,T-rustdoc-frontend | low | Major |
2,611,879,874 | vscode | Chat: Support custom attachments for participants | Right now, in Insiders, the currently-opened editor is included as an attachment in chat requests. The user can easily disable it by clicking.
Certain participants (thinking about `@azure` particularly) would benefit from being able to default-attach their own items in this list. For example, for `@azure`, attaching `azure.yaml` by default (if present) would help with a multitude of user questions. We can attach it automatically today, but we cannot make it show up in the attachments list and allow the user to detach it easily for queries where it is unhelpful.

Implementation-wise, perhaps if, after typing `@azure`, a method can be called to resolve additional attachments?
/cc @isidorn | api,under-discussion,panel-chat | low | Major |
2,611,885,230 | pytorch | RuntimeError: HalfTensor is not supported | ### 🐛 Describe the bug
I am getting the following error in the FX Graph Mode Quantization QAT
Traceback (most recent call last):
File “”, line 1, in
File “/home/HassanBinHaroon/anaconda3/envs/FX_Graph_Mode_Env/lib/python3.9/site-packages/torch/fx/graph_module.py”, line 738, in call_wrapped
return self._wrapped_call(self, *args, **kwargs)
File “/home/HassanBinHaroon/anaconda3/envs/FX_Graph_Mode_Env/lib/python3.9/site-packages/torch/fx/graph_module.py”, line 316, in call
raise e
File “/home/HassanBinHaroon/anaconda3/envs/FX_Graph_Mode_Env/lib/python3.9/site-packages/torch/fx/graph_module.py”, line 303, in call
return super(self.cls, obj).call(*args, **kwargs) # type: ignore[misc]
File “/home/HassanBinHaroon/anaconda3/envs/FX_Graph_Mode_Env/lib/python3.9/site-packages/torch/nn/modules/module.py”, line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File “/home/HassanBinHaroon/anaconda3/envs/FX_Graph_Mode_Env/lib/python3.9/site-packages/torch/nn/modules/module.py”, line 1562, in _call_impl
return forward_call(*args, **kwargs)
File “<eval_with_key>.2”, line 197, in forward
activation_post_process_84 = self.activation_post_process_84(matmul); matmul = None
File “/home/HassanBinHaroon/anaconda3/envs/FX_Graph_Mode_Env/lib/python3.9/site-packages/torch/nn/modules/module.py”, line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File “/home/HassanBinHaroon/anaconda3/envs/FX_Graph_Mode_Env/lib/python3.9/site-packages/torch/nn/modules/module.py”, line 1562, in _call_impl
return forward_call(*args, **kwargs)
File “/home/HassanBinHaroon/anaconda3/envs/FX_Graph_Mode_Env/lib/python3.9/site-packages/torch/ao/quantization/fake_quantize.py”, line 199, in forward
self.activation_post_process(X.detach())
File “/home/HassanBinHaroon/anaconda3/envs/FX_Graph_Mode_Env/lib/python3.9/site-packages/torch/nn/modules/module.py”, line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File “/home/HassanBinHaroon/anaconda3/envs/FX_Graph_Mode_Env/lib/python3.9/site-packages/torch/nn/modules/module.py”, line 1562, in _call_impl
return forward_call(*args, **kwargs)
File “/home/HassanBinHaroon/anaconda3/envs/FX_Graph_Mode_Env/lib/python3.9/site-packages/torch/ao/quantization/observer.py”, line 1230, in forward
self.reset_histogram(x, min_val, max_val)
File “/home/HassanBinHaroon/anaconda3/envs/FX_Graph_Mode_Env/lib/python3.9/site-packages/torch/ao/quantization/observer.py”, line 1203, in reset_histogram
torch.histc(
RuntimeError: HalfTensor is not supported
### Versions
# Imports needed for the snippet below (added for completeness)
import torch
from torch.ao.quantization import QConfig, QConfigMapping, FakeQuantize, HistogramObserver
from torch.ao.quantization.quantize_fx import prepare_qat_fx

# Define quantization configuration for QAT
act_static_qconfig = FakeQuantize.with_args(
observer=HistogramObserver.with_args(
quant_min=0,
quant_max=255,
dtype=torch.quint8,
qscheme=torch.per_tensor_affine,
)
)
global_qconfig = QConfig(
activation=act_static_qconfig,
weight=torch.quantization.default_fused_per_channel_wt_fake_quant
)
# Set global quantization config for the whole model
qconfig_map = QConfigMapping().set_global(global_qconfig)
# Prepare the model for QAT
fx_model = prepare_qat_fx(resnet18, qconfig_map, torch.rand(1, 3, 256, 448).to(device))
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim | oncall: quantization | low | Critical |
2,611,888,528 | svelte | Bound constants are `null` in `onDestroy` | ### Describe the bug
When a parent does `bind:x` on a child's `export const x` and attempts to use this value in the `onDestroy` callback, the value of `x` is `null`.
This is a breaking change while using Svelte 4 syntax, since `x` would previously keep the same value.
### Reproduction
https://svelte.dev/playground/e7886cdc76d14523bec2b84ea5ace46f?version=5.1.0
### Logs
_No response_
### System Info
```shell
Not relevant
```
### Severity
blocking an upgrade | documentation | low | Critical |
2,611,926,399 | pytorch | Failed to load image Python extension: libc10_cuda.so: cannot open shared object file: No such file or directory warn(f"Failed to load image Python extension: {e}") | ### 🐛 Describe the bug
/home/lmh/.local/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: libc10_cuda.so: cannot open shared object file: No such file or directory
warn(f"Failed to load image Python extension: {e}")
Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
>>> Tensorboard logs saved to: /data1/lzq/results_guided/outputs/spl/Moire/[spl]_[Moire]_[rev_1]_[2024-10-24]_[23.11.26.736310]_[GPU_1]_[amax]/tb/train
>>> Tensorboard logs saved to: /data1/lzq/results_guided/outputs/spl/Moire/[spl]_[Moire]_[rev_1]_[2024-10-24]_[23.11.26.736310]_[GPU_1]_[amax]/tb/val
Traceback (most recent call last):
File "/home/lmh/.local/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/home/lmh/.local/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 48, in main
args.func(args)
File "/home/lmh/.local/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1168, in launch_command
simple_launcher(args)
File "/home/lmh/.local/lib/python3.10/site-packages/accelerate/commands/launch.py", line 763, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/opt/anaconda3/bin/python', 'main.py', '--affine', '--l1loss', '--adaloss', '--perloss', '--dont_calc_mets_at_all', '--log2file', '--data_path', '/data1/lzq/Moire_512', '--data_name', 'Moire', '--train_dir', 'train', '--test_dir', 'val', '--moire_dir', 'moire', '--clean_dir', 'clear', '--batch_size', '2', '--T_0', '50', '--epochs', '100', '--init_weights']' returned non-zero exit status 1.
Excuse me, I always get this error. My Python is 3.8.13 and the torch/torchvision versions are as shown below. I have reinstalled torchvision and torch many times but the error is still here.

### Versions


cc @seemethere @malfet @osalpekar @atalman | needs reproduction,module: binaries,triaged | low | Critical |
2,611,946,704 | deno | `deno info DEPENDENCY` | To have info on a dependency
- [x] Sub-dependencies
- [x] Bundle Size
- [ ] Information from NPM / JSR manifest (description, author, which registry (NPM/JSR), link to registry + repository, ...)
- [ ] Versions (which is installed, which are available, also consider pre-releases, ...)
- [ ] Security (CVE) information
- [ ] Maybe? : Past installed version (from lock files)
- [ ] Maybe? : accepting multiple dependencies parameter | needs info | low | Minor |
2,611,957,405 | opencv | Add YAML 2.0 support for FileStorage | ### Describe the feature and motivation
Rationale:
The YAML 2.0 format is used by Python by default. It means that configs/data in YAML format may be shared between OpenCV C++/Java/Python code and pure Python scripts.
### Additional context
_No response_ | feature,category: core | low | Minor |
2,611,958,280 | deno | `deno fmt` might not respects include/exclude options in workspace settings | > This is also a bug with `deno fmt`. I have a workspace setup where I have this fmt config:
>
>
>
> ```json
>
> "fmt": {
>
> "include": ["*.ts", "*.tsx"],
>
> "exclude": ["*.d.ts", "./server/*"],
>
> "lineWidth": 120
>
> },
>
> ```
>
>
>
> This still tries to process JS, CSS and even JSON files. However, when I also exclude the directory which contains those files (my Next JS build directory), it works fine. The include statement just shouldn't work with those files to begin with.
_Originally posted by @dmint789 in [#26308](https://github.com/denoland/deno/issues/26308#issuecomment-2432081009)_ | deno fmt,needs discussion,workspaces | low | Critical |
2,612,000,417 | flutter | Some mac bots are hanging while calling flutter doctor | ### Type of Request
bug
### Infrastructure Environment
LUCI
### What is happening?
Bots [build868-m9](https://chromium-swarm.appspot.com/bot?id=build868-m9) and [build855-m9](https://chromium-swarm.appspot.com/bot?id=build855-m9) are hanging for 20 minutes when calling `flutter doctor`
```console
[2024-10-24 06:15:19.676942] [STDOUT] Executing "/Volumes/Work/s/w/ir/x/w/rc/tmpuio_rvgf/flutter sdk/bin/flutter doctor -v --ci --debug-logs-dir=/Volumes/Work/s/w/ir/x/w/rc/flutter_logs_dir" in "/Volumes/Work/s/w/ir/x/w/rc/tmpuio_rvgf/flutter sdk/dev/devicelab" with environment {BOT: true, LANG: en_US.UTF-8}
[2024-10-24 06:35:13.613327] [STDOUT] stdout: [!] Flutter (Channel [user-branch], 3.27.0-1.0.pre.213, on macOS 14.7 23H124 darwin-x64, locale en-US)
```
```console
[2024-10-23 20:23:11.578822] [STDOUT] Executing "/Volumes/Work/s/w/ir/x/w/rc/tmp4tz9g_sr/flutter sdk/bin/flutter doctor -v --ci --debug-logs-dir=/Volumes/Work/s/w/ir/x/w/rc/flutter_logs_dir" in "/Volumes/Work/s/w/ir/x/w/rc/tmp4tz9g_sr/flutter sdk/dev/devicelab" with environment {BOT: true, LANG: en_US.UTF-8}
[2024-10-23 20:46:43.953580] [STDOUT] stdout: [!] Flutter (Channel [user-branch], 3.27.0-1.0.pre.207, on macOS 14.7 23H124 darwin-x64, locale en-US)
```
https://luci-milo.appspot.com/ui/p/flutter/builders/try/Mac%20build_android_host_app_with_module_source/28/infra
### Steps to reproduce
Presubmits fail.
### Expected results
I expect presubmits to not flake.
2,612,000,710 | next.js | Get an error when upgrading to next v15 and icon.svg in the app folder | ### Link to the code that reproduces this issue
https://github.com/Cache-Hit-Shanghai/pikebaohomepage
### To Reproduce
1. Download the project from the link above.
1. Upgrade next.js to v15.
1. npm run dev
### Current vs. Expected behavior
# current
next-app-loader?name=app%2Fpage&page=%2Fpage&appPaths=%2Fpage&pagePath=private-next-app-dir%2Fpage.jsx&appDir=C%3A%5CUsers%5C14S%5CDownloads%5Ccode%5Cpikebaohomepage%5Capp&pageExtensions=tsx&pageExtensions=ts&pageExtensions=jsx&pageExtensions=js&rootDir=C%3A%5CUsers%5C14S%5CDownloads%5Ccode%5Cpikebaohomepage&isDev=true&tsconfigPath=tsconfig.json&basePath=&assetPrefix=&nextConfigOutput=&preferredRegion=&middlewareConfig=e30%3D!:63
Error: Image import "next-metadata-image-loader?type=icon&segment=&basePath=&pageExtensions=tsx&pageExtensions=ts&pageExtensions=jsx&pageExtensions=js!C:\Users\14S\Downloads\code\pikebaohomepage\app\icon.svg?__next_metadata__" is not a valid image file. The image may be corrupted or an unsupported format.
# expected
No error as with next v14.
### Provide environment information
```bash
Win11
next.js v15
node v23
```
### Which area(s) are affected? (Select all that apply)
Not sure
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local)
### Additional context
_No response_ | bug | low | Critical |
2,612,006,431 | deno | `deno import` | To import manifests from other package managers
(tasks, dependencies, ...)
See [`pnpm import`](https://pnpm.io/cli/import) | suggestion | low | Minor |
2,612,026,092 | angular | Encapsulated scoped :host-context() transformation | ### Which @angular/* package(s) are the source of the bug?
compiler
### Is this a regression?
No
### Description
Hello!
I've a question about Angular's CSS transformation, creating shadow DOM for encapsulated components.
`:host-context(.bar)` is being transformed into `.bar[a-host], .bar [a-host]` by Angular.
`:host-context(.foo .bar)` is currently being transformed into `.foo .bar [a-host], .foo .bar[a-host]`
Both look consistent.
I'd expect, then, that `.foo :host-context(.bar)` would lead to the same result as in the second example: `.foo .bar [a-host], .foo .bar[a-host]`. But it is currently being transformed into `.foo .bar[a-host], .bar [a-host]`. The second set of selectors isn't scoped by `.foo`.
Is there a reason for that or is this a bug?
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/stackblitz-starters-5ckjwy?file=src%2Fmain.css
### Please provide the exception or error you saw
_No response_
### Please provide the environment you discovered this bug in (run `ng version`)
Stackblitz' output (repro above):
```
Angular CLI: 18.2.7
Node: 18.20.3
Package Manager: npm 10.2.3
OS: linux x64
Angular: 18.2.7
... animations, cli, common, compiler, compiler-cli, core, forms
... platform-browser, router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1802.7
@angular-devkit/build-angular 18.2.7
@angular-devkit/core 18.2.7
@angular-devkit/schematics 18.2.7
@schematics/angular 18.2.7
rxjs 7.8.1
typescript 5.5.4
zone.js 0.14.10
```
### Anything else?
In the Stackblitz example the output is being transformed into (the second selector is not scoped by the left-hand side `html` tag):
```
html body[_nghost-ng-c1419874505], body [_nghost-ng-c1419874505] {
color: aqua;
}
``` | area: core,area: compiler,core: stylesheets | low | Critical |
2,612,026,456 | ollama | after some time idle / phone standby , getting to the termux ollama run cmd makes it restart the dl from 0 | ### What is the issue?
So I know Ollama can resume downloads,
but the following issue has now happened to me for the second time, on a different model download:
I run `ollama run model`
and it downloads...
I can switch apps and switch back to Termux/Ollama, no problem,
but after the screen has been off for some time I return to Termux and see it just began from 0 again...
### OS
Linux
### GPU
Other
### CPU
Other
### Ollama version
0.3.14 | bug,needs more info | low | Major |
2,612,044,110 | electron | [Linux - Plasma6 - KDialog] dialog.ShowSaveDialog() -> Filename not carried over into dialog. | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.0.0
### What operating system(s) are you using?
Other Linux
### Operating System Version
Arch Linux
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
Filename passed into function should be shown in dialog.
### Actual Behavior
Filename is empty.
### Testcase Gist URL
https://gist.github.com/b4798bed4fa77f0c62947534c7738fab
### Additional Information
Found in the Signal Desktop app. I opened an issue there, but it could not be resolved, as the problem evidently lies within the Electron framework.
Please see this Issue report:
[Signal Desktop App - Issue 7061](https://github.com/signalapp/Signal-Desktop/issues/7061)
| platform/linux,bug :beetle:,has-repro-gist,33-x-y,34-x-y | low | Critical |
2,612,055,374 | ui | [bug]: Installation fails with Next.js 15 and/or React 19 | ### Describe the bug
While initializing a new Next.js project with shadcn-ui using `npx shadcn@latest init`, the installation fails when attempting to install dependencies. The error occurs because @radix-ui/react-icons has a peer dependency requirement for "react@^16.x || ^17.x || ^18.x", but the project is using React 19.0.0-rc-69d4b800-20241021.
Error message:
npm error ERESOLVE unable to resolve dependency tree
npm error Found: react@19.0.0-rc-69d4b800-20241021
npm error Could not resolve dependency:
npm error peer react@"^16.x || ^17.x || ^18.x" from @radix-ui/react-icons@1.3.0
Current environment:
- Next.js 15
- React 19.0.0-rc-69d4b800-20241021
- npm (latest version)
- shadcn-ui (latest version)
The installation works fine with React 18, suggesting that @radix-ui/react-icons needs to be updated to support React 19 release candidates.
Potential solutions:
1. Update @radix-ui/react-icons peer dependencies to include React 19
2. Add a note in the documentation about React 19 compatibility
3. Add a version check in the CLI to warn users about React 19 compatibility issues
### Affected component/components
shadcn-ui installation fails
### How to reproduce
Steps to reproduce:

1. Create a new Next.js project with experimental features:

   ```bash
   npx create-next-app@latest my-app --typescript --tailwind --app
   ```

2. During the setup, select 'yes' for App Router and other default options.

3. Navigate to the project directory:

   ```bash
   cd my-app
   ```

4. Try to initialize shadcn-ui:

   ```bash
   npx shadcn@latest init
   ```

5. Select configuration options:
   - Style: New York
   - Base color: Neutral
   - CSS variables: Yes

6. The installation will fail during the dependency installation step with the following error:

   ```
   npm error ERESOLVE unable to resolve dependency tree
   npm error Found: react@19.0.0-rc-69d4b800-20241021
   npm error Could not resolve dependency:
   npm error peer react@"^16.x || ^17.x || ^18.x" from @radix-ui/react-icons@1.3.0
   ```

You can verify the React version in your package.json:

```json
{
  "dependencies": {
    "react": "19.0.0-rc-69d4b800-20241021",
    "react-dom": "19.0.0-rc-69d4b800-20241021"
  }
}
```
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
System Information:
Operating System:
- Windows 8
- Working in Command Prompt
Node Environment:
- Node.js version: v20.17.0
- npm version: v10.9.0
Project Dependencies:
- Next.js: 15
- React: 19.0.0-rc-69d4b800-20241021
- React DOM: 19.0.0-rc-69d4b800-20241021
- Typescript: ^5
- Tailwind CSS: ^3.4.1
CLI Versions:
- create-next-app: latest
- shadcn-ui CLI: latest (@shadcn/ui)
Additional Context:
- Fresh installation with default configurations
- Using App Router
- Project created with TypeScript and Tailwind CSS enabled
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | high | Critical |
2,612,069,955 | vscode | Devcontainer setup ad-hoc parsing fails on Rocky 8.5 | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.92.0 (Commit: b1c0a14de1414fcdaa400695b4db1c0799bc3124)
- OS Version: Rocky Linux 8.5
The `check-requirements-linux.sh` script that runs as part of a devcontainer setup wants to parse the OS version from `/etc/os-release`.
That file "is a newline-separated list of environment-like shell-compatible variable assignments" (see `man os-release` on Linux).
The script tries to parse the value it wants [here](https://github.com/microsoft/vscode/blob/main/resources/server/bin/helpers/check-requirements-linux.sh#L34), by running:
```
OS_ID="$(cat /etc/os-release | grep -Eo 'ID=([^"]+)' | sed -n '1s/ID=//p')"
```
This looks for a key-value pair of the form `ID=value` and strips out the `ID=` part.
It assumes the pair is not of the form `ID="value"`, which appears to be permitted by the file format, and appears in the wild in Rocky 8.5 where the relevant line reads:
```
ID="rocky"
```
In that case the `grep` command will fail to match the line and the value of `OS_ID` will be empty, resulting in a non-zero exit code of 1.
This could be fixed by changing the variable assignment to:
```
OS_ID=$(source /etc/os-release && echo $ID)
```
This will run a subshell in which the values in the file get assigned to environment variables, which removes any surrounding `"` marks.
Since this runs in a subshell the current shell's environment will not pick up these variables.
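A minimal sketch of the difference (using a temp file standing in for `/etc/os-release`; the quoted `ID` line mirrors Rocky 8.5's):

```shell
# Reproduce the parsing difference against a quoted ID line like Rocky 8.5's.
tmpfile=$(mktemp)
printf 'ID="rocky"\nVERSION_ID="8.5"\n' > "$tmpfile"

# Current approach: [^"]+ excludes the quote character, so a quoted value
# never matches and OS_ID ends up empty.
OS_ID="$(cat "$tmpfile" | grep -Eo 'ID=([^"]+)' | sed -n '1s/ID=//p')"
echo "grep/sed result: '${OS_ID}'"   # empty

# Proposed fix: sourcing in a subshell lets the shell strip the quotes.
OS_ID="$(. "$tmpfile" && echo "$ID")"
echo "source result: '${OS_ID}'"     # rocky

rm -f "$tmpfile"
```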
Steps to Reproduce:
1. Start a devcontainer built from Rocky 8.5.
| bug,linux,good first issue,remote | low | Critical |
2,612,107,062 | deno | Documentation missing: where does `deno task` look for commands? | Version: Deno 2.0.1
### How does `deno task` find commands?
If a dependency defines a command (via `bin` in `package.json`), how does `deno task` find that command?
Is there a way to run those commands outside of `deno task`? What's Deno's equivalent of `npx`?
### Concrete example
see: https://github.com/smoofra/lambdazoo
`deno task generate` runs a script called `ohm` which is defined by `@ohm-js/cli`. How does that happen?
If I run `npx ohm` in that project it seems to work too, but that populates `node_modules` and runs a script in `node_modules/.bin/ohm` that actually runs `node`, not `deno`. `deno task generate` does not populate `node_modules`. How can you run `ohm` with `deno`, not `node`?
| question | low | Minor |
2,612,111,782 | go | proposal: x/sys/unix: add methods to convert between time.Time and unix.PtpClockTime on Linux | ### Proposal Details
Recently we have exposed the syscalls and types related to PTP (IEEE 1588) on Linux in the `unix` package.
Among those is a struct representing time in PTP calls, [`unix.PtpClockTime`](https://pkg.go.dev/golang.org/x/sys/unix@master#PtpClockTime). Semantically it is very similar to Unix time, differing in the underlying type of the nanoseconds field and having a placeholder for future extensions.
The proposal is to add convenience methods to convert between *unix.PtpClockTime* and *time.Time*:
```
package unix
func TimeToPtpClockTime(t time.Time) PtpClockTime
func (t *PtpClockTime) Time() time.Time
func (t *PtpClockTime) Unix() (sec int64, nsec int64)
```
This would allow using the arithmetic of the standard time package while avoiding
boilerplate: write `ptpt.Time()` instead of `time.Unix(ptpt.Sec, int64(ptpt.Nsec))`,
and `unix.TimeToPtpClockTime(t)` instead of `unix.PtpClockTime{Sec: t.Unix(), Nsec: uint32(t.Nanosecond())}`.
As an example of what the boilerplate looks like in real-world code, please see facebook/time#418.
golang/sys#230 implements the proposal.
Thank you for consideration. | Proposal | low | Minor |
2,612,153,398 | pytorch | torch.dot has inconsistent support for int64 (long) | ### 🐛 Describe the bug
`torch.dot` works with tensors of `int64` dtype on CPU, but raises a "not implemented" error on GPU ([colab](https://colab.research.google.com/drive/1oFgSRK7T6alyBJgHbWahLbdwqC5ZgnIT?usp=sharing)).
Minimal repro:
```python
import torch
out_cpu = torch.dot(torch.tensor([2, 3], dtype=torch.int64), torch.tensor([2, 1], dtype=torch.int64))
out_gpu = torch.dot(torch.tensor([2, 3], dtype=torch.int64).cuda(), torch.tensor([2, 1], dtype=torch.int64).cuda())
# RuntimeError: "dot" not implemented for 'Long'
```
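A possible interim workaround (my suggestion, not part of the report) is to compute the dot product as an elementwise multiply followed by a sum; those kernels do handle `int64`. The sketch below verifies the equivalence (on GPU when one is available, otherwise on CPU):

```python
import torch

a = torch.tensor([2, 3], dtype=torch.int64)
b = torch.tensor([2, 1], dtype=torch.int64)

def dot_int64(x, y):
    # Elementwise multiply + sum instead of the fused dot kernel,
    # which is the op that lacks an int64 CUDA implementation.
    return (x * y).sum()

device = "cuda" if torch.cuda.is_available() else "cpu"
result = dot_int64(a.to(device), b.to(device))
print(result.item())  # 7, matching torch.dot on CPU
```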
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 15.0.0 (git@github.com:llvm/llvm-project.git 4ba6a9c9f65bbc8bd06e3652cb20fd4dfc846137)
CMake version: version 3.22.1
Libc version: glibc-2.31
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 555.42.02
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 39 bits physical, 48 bits virtual
CPU(s): 24
On-line CPU(s) list: 0-23
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 183
Model name: 13th Gen Intel(R) Core(TM) i7-13700KF
Stepping: 1
CPU MHz: 4118.642
CPU max MHz: 5800.0000
CPU min MHz: 800.0000
BogoMIPS: 6835.20
Virtualization: VT-x
L1d cache: 384 KiB
L1i cache: 256 KiB
L2 cache: 16 MiB
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.11.0
[pip3] torch==2.4.0
[pip3] triton==3.0.0
[conda] numpy 1.26.4 py310heeff2f4_0
[conda] numpy-base 1.26.4 py310h8a23956_0
[conda] optree 0.11.0 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @ptrblck @msaroufim @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano | module: cuda,triaged | low | Critical |
2,612,153,508 | pytorch | [ONNX] Support for `aten.istft` | ### 🐛 Describe the bug
When I run [this](https://github.com/tuna2134/f5-tts/blob/master/onnx/convert_vocos.py), it failed.
### Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20241024+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.7 (main, Oct 23 2024, 01:45:22) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-47-generic-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 40 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 6
On-line CPU(s) list: 0-5
Vendor ID: GenuineIntel
Model name: QEMU Virtual CPU version 2.5+
CPU family: 15
Model: 107
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 1
Stepping: 1
BogoMIPS: 4794.41
Flags: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc nopl xtopology cpuid tsc_known_freq pni ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c hypervisor lahf_lm abm cpuid_fault pti bmi1 avx2 bmi2
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 192 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 24 MiB (6 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-5
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20241023
[pip3] torch==2.6.0.dev20241024+cpu
[pip3] torchaudio==2.5.0.dev20241024+cpu
[pip3] torchvision==0.20.0.dev20241024+cpu | module: onnx,triaged | low | Critical |
2,612,197,768 | pytorch | Add mean and var operation for Nested Tensors | ### 🚀 The feature, motivation and pitch
For torch.Tensor it is easy to compute mean and var, but I cannot find a way to compute mean and var for nested tensors. Nested tensors do support the layer_norm operation, which internally involves mean and var. Is there any way to compute the mean of nested tensors?
```
import torch
import torch.nn.functional as F

x = torch.randn(1, 192)
y = torch.randn(10, 192)
nested = torch.nested.nested_tensor([x, y])
nested.mean(dim=-1)           # not supported (raises)
F.layer_norm(nested, (192,))  # supported
```
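A possible workaround (my sketch, not from the request) is to `unbind` the nested tensor into its constituent tensors, reduce each one, and re-nest the results:

```python
import torch

x = torch.randn(1, 192)
y = torch.randn(10, 192)
nested = torch.nested.nested_tensor([x, y])

# Reduce each constituent tensor separately, then wrap the results back up.
means = [t.mean(dim=-1) for t in nested.unbind()]
vars_ = [t.var(dim=-1) for t in nested.unbind()]
nested_mean = torch.nested.nested_tensor(means)  # components of shape (1,) and (10,)
nested_var = torch.nested.nested_tensor(vars_)
```

This loses the fused-kernel efficiency of a native reduction, but gives correct per-component statistics until `mean`/`var` are supported directly.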
### Alternatives
_No response_
### Additional context
_No response_
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ | triaged,module: nestedtensor | low | Minor |
2,612,213,135 | flutter | Flutter-web text field iframed with semantics disabled refuses to let go of focus | ### Steps to reproduce
See also https://github.com/flutter/flutter/issues/157387 which requires the same conditions (iframed, semantics disabled).
1. Requisites
a. Accessibility (semantics tree) must not be enabled, and
b. Flutter-web app must be iframed.
2. Click on one of the text field widgets within the Flutter-web app.
3. Attempt to click out to the iframe host.
### Expected results
Focus should move to the iframe host (whatever you clicked).
### Actual results
The Flutter-web iframe keeps stealing the focus back as soon as it loses it, preventing interacting with the iframe host.
Workarounds:
* Click within the Flutter-web iframe out of the text field so that it loses focus. Then you're able to focus into the iframe host.
* Enabling Flutter semantics seems to resolve this issue, interestingly.
### Code sample
<details open><summary>Code sample</summary>
https://dartpad.dev/?id=074d8b35285e25643fcffbdf8f9a3b1b
1. Run this.
2. Click into the Flutter-web text field. Optionally also type some stuff in to make the focus stealing obvious.
3. Then try to click back to the DartPad code. Notice how the Flutter-web text field steals focus and selects-all its content.
4. Now, click in the Flutter-web area outside of the text field so that it loses focus.
5. Then try to click back to the DartPad code and see that it succeeds now.
```dart
import 'package:flutter/material.dart';
import 'package:flutter/rendering.dart';
void main() async {
runApp(
MaterialApp(
title: 'Test',
home: Scaffold(
body: Row(
children: [
Text('Text field: '),
SizedBox(
height: 50,
width: 400,
child: Semantics(container: true, child: const TextField()),
),
],
),
),
),
);
// Uncomment this line to enable the Semantics tree, which makes it work properly.
//SemanticsBinding.instance.ensureSemantics();
}
```
</details>
| a: text input,engine,a: accessibility,platform-web,f: focus,P2,customer: samehere,team-web,triaged-web | low | Minor |
2,612,250,954 | pytorch | [PTD][RPC] Verify RPC Tutorials contents and scripts | ### 📚 The doc issue
It has been a while since these tutorials were updated. Since there is no active development on these, we should verify that the scripts in the tutorials still work as expected.
Getting Started with Distributed RPC Framework - https://pytorch.org/tutorials/intermediate/rpc_tutorial.html - ?
Implementing a Parameter Server Using Distributed RPC Framework - https://pytorch.org/tutorials/intermediate/rpc_param_server_tutorial.html -?
Implementing Batch RPC Processing Using Asynchronous Executions - https://pytorch.org/tutorials/intermediate/rpc_async_execution.html - ?
Combining Distributed DataParallel with Distributed RPC Framework - https://pytorch.org/tutorials/advanced/rpc_ddp_tutorial.html - ?
### Suggest a potential alternative/fix
_No response_
cc. @c-p-i-o
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wconstab @d4l3k @c-p-i-o @svekars @brycebortree @sekyondaMeta @AlannaBurke | oncall: distributed,module: docs,triaged | low | Minor |
2,612,280,385 | stable-diffusion-webui | [Bug]: ERROR: Could not find a version that satisfies the requirement pytorch_lightning==1.7.6 … | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [X] The issue has been reported before but has not been fixed yet
### What happened?
Using cached omegaconf-2.2.3-py3-none-any.whl.metadata (3.9 kB)
Collecting pytorch_lightning==1.7.6 (from -r requirements_versions.txt (line 12))
Using cached pytorch_lightning-1.7.6-py3-none-any.whl.metadata (27 kB)
stderr: WARNING: Ignoring version 1.7.6 of pytorch_lightning since it has invalid metadata:
Requested pytorch_lightning==1.7.6 from https://files.pythonhosted.org/packages/f2/22/37c64bd5b426297c71ecbb01ec2d340f013556a973a2cd6cd0aa68cda1ab/pytorch_lightning-1.7.6-py3-none-any.whl (from -r requirements_versions.txt (line 12)) has invalid metadata: .* suffix can only be used with `==` or `!=` operators
torch (>=1.9.*)
~~~~~~^
Please use pip<24.1 if you need to use this version.
ERROR: Could not find a version that satisfies the requirement pytorch_lightning==1.7.6 (from versions: 0.0.2, 0.2, 0.2.2, 0.2.3, 0.2.4, 0.2.4.1, 0.2.5, 0.2.5.1, 0.2.5.2, 0.2.6, 0.3, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.3.4.1, 0.3.5, 0.3.6, 0.3.6.1, 0.3.6.3, 0.3.6.4, 0.3.6.5, 0.3.6.6, 0.3.6.7, 0.3.6.8, 0.3.6.9, 0.4.0, 0.4.1, 0.4.2, 0.4.3, 0.4.4, 0.4.5, 0.4.6, 0.4.7, 0.4.8, 0.4.9, 0.5.0, 0.5.1, 0.5.1.2, 0.5.1.3, 0.5.2, 0.5.2.1, 0.5.3, 0.5.3.1, 0.5.3.2, 0.5.3.3, 0.6.0, 0.7.1, 0.7.3, 0.7.5, 0.7.6, 0.8.1, 0.8.3, 0.8.4, 0.8.5, 0.9.0, 0.10.0, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.0.6, 1.0.7, 1.0.8, 1.1.0, 1.1.1, 1.1.2, 1.1.3, 1.1.4, 1.1.5, 1.1.6, 1.1.7, 1.1.8, 1.2.0rc0, 1.2.0rc1, 1.2.0rc2, 1.2.0, 1.2.1, 1.2.2, 1.2.3, 1.2.4, 1.2.5, 1.2.6, 1.2.7, 1.2.8, 1.2.9, 1.2.10, 1.3.0rc1, 1.3.0rc2, 1.3.0rc3, 1.3.0, 1.3.1, 1.3.2, 1.3.3, 1.3.4, 1.3.5, 1.3.6, 1.3.7, 1.3.7.post0, 1.3.8, 1.4.0rc0, 1.4.0rc1, 1.4.0rc2, 1.4.0, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.4.6, 1.4.7, 1.4.8, 1.4.9, 1.5.0rc0, 1.5.0rc1, 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.5.6, 1.5.7, 1.5.8, 1.5.9, 1.5.10, 1.6.0rc0, 1.6.0rc1, 1.6.0, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.7.0rc0, 1.7.0rc1, 1.7.0, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.8.0rc0, 1.8.0rc1, 1.8.0rc2, 1.8.0, 1.8.0.post1, 1.8.1, 1.8.2, 1.8.3, 1.8.3.post0, 1.8.3.post1, 1.8.3.post2, 1.8.4, 1.8.4.post0, 1.8.5, 1.8.5.post0, 1.8.6, 1.9.0rc0, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.9.4, 1.9.5, 2.0.0rc0, 2.0.0, 2.0.1, 2.0.1.post0, 2.0.2, 2.0.3, 2.0.4, 2.0.5, 2.0.6, 2.0.7, 2.0.8, 2.0.9, 2.0.9.post0, 2.1.0rc0, 2.1.0rc1, 2.1.0, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.2.0rc0, 2.2.0, 2.2.0.post0, 2.2.1, 2.2.2, 2.2.3, 2.2.4, 2.2.5, 2.3.0, 2.3.1, 2.3.2, 2.3.3, 2.4.0)
ERROR: No matching distribution found for pytorch_lightning==1.7.6
Appuyez sur une touche pour continuer... ("Press any key to continue...")
### Steps to reproduce the problem
download an extract the zip, and run run.bat
### What should have happened?
install
### What browsers do you use to access the UI ?
Mozilla Firefox, Google Chrome
### Sysinfo
na
### Console logs
```Shell
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 48a15821de768fea76e66f26df83df3fddf18f4b
Installing requirements for Web UI
Traceback (most recent call last):
File "C:\Users\Shadow\Documents\sd.webui\webui\launch.py", line 324, in <module>
prepare_environment()
File "C:\Users\Shadow\Documents\sd.webui\webui\launch.py", line 273, in prepare_environment
run_pip(f"install -r {requirements_file}", "requirements for Web UI")
File "C:\Users\Shadow\Documents\sd.webui\webui\launch.py", line 106, in run_pip
return run(f'"{python}" -m pip {args} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}")
File "C:\Users\Shadow\Documents\sd.webui\webui\launch.py", line 74, in run
raise RuntimeError(message)
RuntimeError: Couldn't install requirements for Web UI.
Command: "C:\Users\Shadow\Documents\sd.webui\system\python\python.exe" -m pip install -r requirements_versions.txt --prefer-binary
Error code: 1
stdout: Collecting blendmodes==2022 (from -r requirements_versions.txt (line 1))
Using cached blendmodes-2022-py3-none-any.whl.metadata (12 kB)
Collecting transformers==4.19.2 (from -r requirements_versions.txt (line 2))
Using cached transformers-4.19.2-py3-none-any.whl.metadata (73 kB)
Collecting accelerate==0.12.0 (from -r requirements_versions.txt (line 3))
Using cached accelerate-0.12.0-py3-none-any.whl.metadata (15 kB)
Requirement already satisfied: basicsr==1.4.2 in c:\users\shadow\documents\sd.webui\system\python\lib\site-packages (from -r requirements_versions.txt (line 4)) (1.4.2)
Collecting gfpgan==1.3.8 (from -r requirements_versions.txt (line 5))
Using cached gfpgan-1.3.8-py3-none-any.whl.metadata (12 kB)
Collecting gradio==3.16.2 (from -r requirements_versions.txt (line 6))
Using cached gradio-3.16.2-py3-none-any.whl.metadata (14 kB)
Collecting numpy==1.23.3 (from -r requirements_versions.txt (line 7))
Using cached numpy-1.23.3-cp310-cp310-win_amd64.whl.metadata (2.3 kB)
Collecting Pillow==9.4.0 (from -r requirements_versions.txt (line 8))
Using cached Pillow-9.4.0-cp310-cp310-win_amd64.whl.metadata (9.4 kB)
Collecting realesrgan==0.3.0 (from -r requirements_versions.txt (line 9))
Using cached realesrgan-0.3.0-py3-none-any.whl.metadata (17 kB)
Requirement already satisfied: torch in c:\users\shadow\documents\sd.webui\system\python\lib\site-packages (from -r requirements_versions.txt (line 10)) (1.13.1+cu117)
Collecting omegaconf==2.2.3 (from -r requirements_versions.txt (line 11))
Using cached omegaconf-2.2.3-py3-none-any.whl.metadata (3.9 kB)
Collecting pytorch_lightning==1.7.6 (from -r requirements_versions.txt (line 12))
Using cached pytorch_lightning-1.7.6-py3-none-any.whl.metadata (27 kB)
stderr: WARNING: Ignoring version 1.7.6 of pytorch_lightning since it has invalid metadata:
Requested pytorch_lightning==1.7.6 from https://files.pythonhosted.org/packages/f2/22/37c64bd5b426297c71ecbb01ec2d340f013556a973a2cd6cd0aa68cda1ab/pytorch_lightning-1.7.6-py3-none-any.whl (from -r requirements_versions.txt (line 12)) has invalid metadata: .* suffix can only be used with `==` or `!=` operators
torch (>=1.9.*)
~~~~~~^
Please use pip<24.1 if you need to use this version.
ERROR: Could not find a version that satisfies the requirement pytorch_lightning==1.7.6 (from versions: 0.0.2, 0.2, 0.2.2, 0.2.3, 0.2.4, 0.2.4.1, 0.2.5, 0.2.5.1, 0.2.5.2, 0.2.6, 0.3, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.3.4.1, 0.3.5, 0.3.6, 0.3.6.1, 0.3.6.3, 0.3.6.4, 0.3.6.5, 0.3.6.6, 0.3.6.7, 0.3.6.8, 0.3.6.9, 0.4.0, 0.4.1, 0.4.2, 0.4.3, 0.4.4, 0.4.5, 0.4.6, 0.4.7, 0.4.8, 0.4.9, 0.5.0, 0.5.1, 0.5.1.2, 0.5.1.3, 0.5.2, 0.5.2.1, 0.5.3, 0.5.3.1, 0.5.3.2, 0.5.3.3, 0.6.0, 0.7.1, 0.7.3, 0.7.5, 0.7.6, 0.8.1, 0.8.3, 0.8.4, 0.8.5, 0.9.0, 0.10.0, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.0.4, 1.0.5, 1.0.6, 1.0.7, 1.0.8, 1.1.0, 1.1.1, 1.1.2, 1.1.3, 1.1.4, 1.1.5, 1.1.6, 1.1.7, 1.1.8, 1.2.0rc0, 1.2.0rc1, 1.2.0rc2, 1.2.0, 1.2.1, 1.2.2, 1.2.3, 1.2.4, 1.2.5, 1.2.6, 1.2.7, 1.2.8, 1.2.9, 1.2.10, 1.3.0rc1, 1.3.0rc2, 1.3.0rc3, 1.3.0, 1.3.1, 1.3.2, 1.3.3, 1.3.4, 1.3.5, 1.3.6, 1.3.7, 1.3.7.post0, 1.3.8, 1.4.0rc0, 1.4.0rc1, 1.4.0rc2, 1.4.0, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.4.6, 1.4.7, 1.4.8, 1.4.9, 1.5.0rc0, 1.5.0rc1, 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.5.6, 1.5.7, 1.5.8, 1.5.9, 1.5.10, 1.6.0rc0, 1.6.0rc1, 1.6.0, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.7.0rc0, 1.7.0rc1, 1.7.0, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.8.0rc0, 1.8.0rc1, 1.8.0rc2, 1.8.0, 1.8.0.post1, 1.8.1, 1.8.2, 1.8.3, 1.8.3.post0, 1.8.3.post1, 1.8.3.post2, 1.8.4, 1.8.4.post0, 1.8.5, 1.8.5.post0, 1.8.6, 1.9.0rc0, 1.9.0, 1.9.1, 1.9.2, 1.9.3, 1.9.4, 1.9.5, 2.0.0rc0, 2.0.0, 2.0.1, 2.0.1.post0, 2.0.2, 2.0.3, 2.0.4, 2.0.5, 2.0.6, 2.0.7, 2.0.8, 2.0.9, 2.0.9.post0, 2.1.0rc0, 2.1.0rc1, 2.1.0, 2.1.1, 2.1.2, 2.1.3, 2.1.4, 2.2.0rc0, 2.2.0, 2.2.0.post0, 2.2.1, 2.2.2, 2.2.3, 2.2.4, 2.2.5, 2.3.0, 2.3.1, 2.3.2, 2.3.3, 2.4.0)
ERROR: No matching distribution found for pytorch_lightning==1.7.6
Appuyez sur une touche pour continuer... ("Press any key to continue...")
```
### Additional information
_No response_ | asking-for-help-with-local-system-issues | low | Critical |
2,612,283,087 | langchain | Can't import any of the HuggingFaceEmbeddings because 'openssl' has no attribute 'ciphers' | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_community.embeddings import HuggingFaceBgeEmbeddings
model_name = "BAAI/bge-small-en"
model_kwargs = {"device": "cpu"}
encode_kwargs = {"normalize_embeddings": True}
hf = HuggingFaceBgeEmbeddings(
model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs
)
embedding = hf.embed_query("hi this is harrison")
embedding
```
or
```
from langchain_huggingface import HuggingFaceEmbeddings
embeddings=HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1764, in _LazyModule._get_module(self, module_name)
1763 try:
-> 1764 return importlib.import_module("." + module_name, self.__name__)
1765 except Exception as e:
File /opt/conda/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:1006, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:688, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:883, in exec_module(self, module)
File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)
File /opt/conda/lib/python3.10/site-packages/transformers/modeling_tf_utils.py:34
33 import numpy as np
---> 34 import tensorflow as tf
35 from packaging.version import parse
File /opt/conda/lib/python3.10/site-packages/tensorflow/__init__.py:45
43 _tf2.enable()
---> 45 from tensorflow._api.v2 import __internal__
46 from tensorflow._api.v2 import __operators__
File /opt/conda/lib/python3.10/site-packages/tensorflow/_api/v2/__internal__/__init__.py:11
10 from tensorflow._api.v2.__internal__ import dispatch
---> 11 from tensorflow._api.v2.__internal__ import distribute
12 from tensorflow._api.v2.__internal__ import eager_context
File /opt/conda/lib/python3.10/site-packages/tensorflow/_api/v2/__internal__/distribute/__init__.py:8
6 import sys as _sys
----> 8 from tensorflow._api.v2.__internal__.distribute import combinations
9 from tensorflow._api.v2.__internal__.distribute import interim
File /opt/conda/lib/python3.10/site-packages/tensorflow/_api/v2/__internal__/distribute/combinations/__init__.py:8
6 import sys as _sys
----> 8 from tensorflow.python.distribute.combinations import env # line: 456
9 from tensorflow.python.distribute.combinations import generate # line: 365
File /opt/conda/lib/python3.10/site-packages/tensorflow/python/distribute/combinations.py:33
32 from tensorflow.python.client import session
---> 33 from tensorflow.python.distribute import collective_all_reduce_strategy
34 from tensorflow.python.distribute import distribute_lib
File /opt/conda/lib/python3.10/site-packages/tensorflow/python/distribute/collective_all_reduce_strategy.py:32
31 from tensorflow.python.distribute import input_util
---> 32 from tensorflow.python.distribute import mirrored_strategy
33 from tensorflow.python.distribute import multi_worker_util
File /opt/conda/lib/python3.10/site-packages/tensorflow/python/distribute/mirrored_strategy.py:34
33 from tensorflow.python.distribute import values_util
---> 34 from tensorflow.python.distribute.cluster_resolver import tfconfig_cluster_resolver
35 from tensorflow.python.distribute.v1 import input_lib as input_lib_v1
File /opt/conda/lib/python3.10/site-packages/tensorflow/python/distribute/cluster_resolver/__init__.py:27
26 from tensorflow.python.distribute.cluster_resolver.cluster_resolver import UnionClusterResolver
---> 27 from tensorflow.python.distribute.cluster_resolver.gce_cluster_resolver import GCEClusterResolver
28 from tensorflow.python.distribute.cluster_resolver.kubernetes_cluster_resolver import KubernetesClusterResolver
File /opt/conda/lib/python3.10/site-packages/tensorflow/python/distribute/cluster_resolver/gce_cluster_resolver.py:24
23 try:
---> 24 from googleapiclient import discovery # pylint: disable=g-import-not-at-top
25 from oauth2client.client import GoogleCredentials # pylint: disable=g-import-not-at-top
File /opt/conda/lib/python3.10/site-packages/googleapiclient/discovery.py:45
44 from google.auth.transport import mtls
---> 45 from google.oauth2 import service_account
47 # Third-party imports
File /opt/conda/lib/python3.10/site-packages/google/oauth2/service_account.py:77
76 from google.auth import _helpers
---> 77 from google.auth import _service_account_info
78 from google.auth import credentials
File /opt/conda/lib/python3.10/site-packages/google/auth/_service_account_info.py:20
18 import json
---> 20 from google.auth import crypt
21 from google.auth import exceptions
File /opt/conda/lib/python3.10/site-packages/google/auth/crypt/__init__.py:41
40 from google.auth.crypt import base
---> 41 from google.auth.crypt import rsa
43 try:
File /opt/conda/lib/python3.10/site-packages/google/auth/crypt/rsa.py:20
18 try:
19 # Prefer cryptograph-based RSA implementation.
---> 20 from google.auth.crypt import _cryptography_rsa
22 RSASigner = _cryptography_rsa.RSASigner
File /opt/conda/lib/python3.10/site-packages/google/auth/crypt/_cryptography_rsa.py:25
24 from cryptography.hazmat.primitives import hashes
---> 25 from cryptography.hazmat.primitives import serialization
26 from cryptography.hazmat.primitives.asymmetric import padding
File /opt/conda/lib/python3.10/site-packages/cryptography/hazmat/primitives/serialization/__init__.py:25
17 from cryptography.hazmat.primitives.serialization.base import (
18 load_der_parameters,
19 load_der_private_key,
(...)
23 load_pem_public_key,
24 )
---> 25 from cryptography.hazmat.primitives.serialization.ssh import (
26 SSHCertificate,
27 SSHCertificateBuilder,
28 SSHCertificateType,
29 SSHCertPrivateKeyTypes,
30 SSHCertPublicKeyTypes,
31 SSHPrivateKeyTypes,
32 SSHPublicKeyTypes,
33 load_ssh_private_key,
34 load_ssh_public_identity,
35 load_ssh_public_key,
36 )
38 __all__ = [
39 "BestAvailableEncryption",
40 "Encoding",
(...)
62 "load_ssh_public_key",
63 ]
File /opt/conda/lib/python3.10/site-packages/cryptography/hazmat/primitives/serialization/ssh.py:27
26 from cryptography.hazmat.primitives.asymmetric import utils as asym_utils
---> 27 from cryptography.hazmat.primitives.ciphers import (
28 AEADDecryptionContext,
29 Cipher,
30 algorithms,
31 modes,
32 )
33 from cryptography.hazmat.primitives.serialization import (
34 Encoding,
35 KeySerializationEncryption,
(...)
39 _KeySerializationEncryption,
40 )
File /opt/conda/lib/python3.10/site-packages/cryptography/hazmat/primitives/ciphers/__init__.py:11
7 from cryptography.hazmat.primitives._cipheralgorithm import (
8 BlockCipherAlgorithm,
9 CipherAlgorithm,
10 )
---> 11 from cryptography.hazmat.primitives.ciphers.base import (
12 AEADCipherContext,
13 AEADDecryptionContext,
14 AEADEncryptionContext,
15 Cipher,
16 CipherContext,
17 )
19 __all__ = [
20 "AEADCipherContext",
21 "AEADDecryptionContext",
(...)
26 "CipherContext",
27 ]
File /opt/conda/lib/python3.10/site-packages/cryptography/hazmat/primitives/ciphers/base.py:143
133 _CIPHER_TYPE = Cipher[
134 typing.Union[
135 modes.ModeWithNonce,
(...)
140 ]
141 ]
--> 143 CipherContext.register(rust_openssl.ciphers.CipherContext)
144 AEADEncryptionContext.register(rust_openssl.ciphers.AEADEncryptionContext)
AttributeError: module 'openssl' has no attribute 'ciphers'
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1764, in _LazyModule._get_module(self, module_name)
1763 try:
-> 1764 return importlib.import_module("." + module_name, self.__name__)
1765 except Exception as e:
File /opt/conda/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:1006, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:688, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:883, in exec_module(self, module)
File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)
File /opt/conda/lib/python3.10/site-packages/transformers/integrations/integration_utils.py:36
34 import packaging.version
---> 36 from .. import PreTrainedModel, TFPreTrainedModel
37 from .. import __version__ as version
File <frozen importlib._bootstrap>:1075, in _handle_fromlist(module, fromlist, import_, recursive)
File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1754, in _LazyModule.__getattr__(self, name)
1753 elif name in self._class_to_module.keys():
-> 1754 module = self._get_module(self._class_to_module[name])
1755 value = getattr(module, name)
File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1766, in _LazyModule._get_module(self, module_name)
1765 except Exception as e:
-> 1766 raise RuntimeError(
1767 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1768 f" traceback):\n{e}"
1769 ) from e
RuntimeError: Failed to import transformers.modeling_tf_utils because of the following error (look up to see its traceback):
module 'openssl' has no attribute 'ciphers'
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
Cell In[5], line 9
7 model_kwargs = {"device": "cpu"}
8 encode_kwargs = {"normalize_embeddings": True}
----> 9 hf = HuggingFaceBgeEmbeddings(
10 model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs
11 )
13 embedding = hf.embed_query("hi this is harrison")
14 embedding
File /opt/conda/lib/python3.10/site-packages/langchain_community/embeddings/huggingface.py:320, in HuggingFaceBgeEmbeddings.__init__(self, **kwargs)
310 warn_deprecated(
311 since=since,
312 removal=removal,
(...)
316 + f" {self.__class__.__name__} constructor instead.",
317 )
319 try:
--> 320 import sentence_transformers
322 except ImportError as exc:
323 raise ImportError(
324 "Could not import sentence_transformers python package. "
325 "Please install it with `pip install sentence_transformers`."
326 ) from exc
File /opt/conda/lib/python3.10/site-packages/sentence_transformers/__init__.py:10
7 import os
9 from sentence_transformers.backend import export_dynamic_quantized_onnx_model, export_optimized_onnx_model
---> 10 from sentence_transformers.cross_encoder.CrossEncoder import CrossEncoder
11 from sentence_transformers.datasets import ParallelSentencesDataset, SentencesDataset
12 from sentence_transformers.LoggingHandler import LoggingHandler
File /opt/conda/lib/python3.10/site-packages/sentence_transformers/cross_encoder/__init__.py:3
1 from __future__ import annotations
----> 3 from .CrossEncoder import CrossEncoder
5 __all__ = ["CrossEncoder"]
File /opt/conda/lib/python3.10/site-packages/sentence_transformers/cross_encoder/CrossEncoder.py:20
18 from sentence_transformers.evaluation.SentenceEvaluator import SentenceEvaluator
19 from sentence_transformers.readers import InputExample
---> 20 from sentence_transformers.SentenceTransformer import SentenceTransformer
21 from sentence_transformers.util import fullname, get_device_name, import_from_string
23 logger = logging.getLogger(__name__)
File /opt/conda/lib/python3.10/site-packages/sentence_transformers/SentenceTransformer.py:32
29 from transformers import is_torch_npu_available
30 from transformers.dynamic_module_utils import get_class_from_dynamic_module, get_relative_import_files
---> 32 from sentence_transformers.model_card import SentenceTransformerModelCardData, generate_model_card
33 from sentence_transformers.similarity_functions import SimilarityFunction
35 from . import __MODEL_HUB_ORGANIZATION__, __version__
File /opt/conda/lib/python3.10/site-packages/sentence_transformers/model_card.py:25
23 from tqdm.autonotebook import tqdm
24 from transformers import TrainerCallback
---> 25 from transformers.integrations import CodeCarbonCallback
26 from transformers.modelcard import make_markdown_table
27 from transformers.trainer_callback import TrainerControl, TrainerState
File <frozen importlib._bootstrap>:1075, in _handle_fromlist(module, fromlist, import_, recursive)
File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1754, in _LazyModule.__getattr__(self, name)
1752 value = Placeholder
1753 elif name in self._class_to_module.keys():
-> 1754 module = self._get_module(self._class_to_module[name])
1755 value = getattr(module, name)
1756 else:
File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1766, in _LazyModule._get_module(self, module_name)
1764 return importlib.import_module("." + module_name, self.__name__)
1765 except Exception as e:
-> 1766 raise RuntimeError(
1767 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1768 f" traceback):\n{e}"
1769 ) from e
RuntimeError: Failed to import transformers.integrations.integration_utils because of the following error (look up to see its traceback):
Failed to import transformers.modeling_tf_utils because of the following error (look up to see its traceback):
module 'openssl' has no attribute 'ciphers'
```
```
Failed to import transformers.integrations.integration_utils because of the following error (look up to see its traceback): Failed to import transformers.modeling_tf_utils because of the following error (look up to see its traceback): module 'openssl' has no attribute 'ciphers'
```
### Description
I can't import `HuggingFaceBgeEmbeddings` or `HuggingFaceEmbeddings` for any of the available models
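The traceback bottoms out in cryptography's OpenSSL bindings rather than in langchain itself, so a quick way to narrow this down (a hypothetical diagnostic helper, not part of the original report) is to check whether `cryptography` imports cleanly on its own:

```python
# Hypothetical diagnostic (not from the report): if
# cryptography.hazmat.primitives.ciphers fails to import by itself, the
# problem is a broken/stale cryptography install, not langchain or
# transformers.
import importlib


def can_import(name: str) -> bool:
    """Return True if `name` imports without raising any exception."""
    try:
        importlib.import_module(name)
        return True
    except Exception:
        return False


if __name__ == "__main__":
    for mod in ("cryptography", "cryptography.hazmat.primitives.ciphers"):
        print(mod, "->", "OK" if can_import(mod) else "FAILED")
```

In similar reports this exact `module 'openssl' has no attribute 'ciphers'` error has been traced to a stale or mismatched `cryptography`/`pyOpenSSL` pair, so force-reinstalling those packages is a common first step; treat that as a guess, not a confirmed fix.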
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Jun 27 20:43:36 UTC 2024
> Python Version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.3.12
> langchain: 0.3.4
> langchain_community: 0.3.3
> langsmith: 0.1.137
> langchain_cohere: 0.3.1
> langchain_experimental: 0.3.2
> langchain_groq: 0.2.0
> langchain_openai: 0.2.3
> langchain_text_splitters: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.9.5
> async-timeout: 4.0.3
> cohere: 5.11.1
> dataclasses-json: 0.6.7
> groq: 0.11.0
> httpx: 0.27.0
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.52.2
> orjson: 3.10.4
> packaging: 24.1
> pandas: 2.2.3
> pydantic: 2.9.2
> pydantic-settings: 2.6.0
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tabulate: 0.9.0
> tenacity: 9.0.0
> tiktoken: 0.7.0
> typing-extensions: 4.12.2 | stale,investigate | low | Critical |
2,612,299,178 | flutter | Deprecate support for ia32 | Dart is considering dropping support for ia32, please see https://github.com/dart-lang/sdk/issues/49969 for more context.
This issue is to ensure Flutter goes through a deprecation process before removing support for ia32.
List of items to consider
- [x] remove support for jit release mode which is tracked here https://github.com/flutter/flutter/issues/151610
- [x] make an announcement to external developers about the intent to drop support for ia32; this could be done by having Flutter tools issue a warning whenever the x86 emulator is used (Please see https://github.com/flutter/flutter/issues/158953)
- [ ] switch Flutter tools to use x86_64 by default (i.e. switch to a different ABI; apparently API 31 and above uses x86_64)
- [ ] After one beta cycle of issuing the warning, remove support for ia32 in flutter tools
//cc @bkonyi | P1,team-tool,triaged-tool | medium | Major |
2,612,304,607 | flutter | Test runner appears to run `test/cupertino/tab_scaffold_test.dart` 14K times | Example log:
https://logs.chromium.org/logs/flutter/buildbucket/cr-buildbucket/8733174319067180289/+/u/read_logs/tmpfg_tnxrl
In the middle of the run, you can see the following:
```txt
unhandled error during finalization of test:
/Volumes/Work/s/w/ir/x/w/flutter/packages/flutter/test/cupertino/tab_scaffold_test.dart
TestDeviceException(Shell subprocess crashed with segmentation fault.)
#0 FlutterTesterTestDevice.finished (package:flutter_tools/src/test/flutter_tester_device.dart:251:5)
<asynchronous suspension>
#1 FlutterTesterTestDevice.kill (package:flutter_tools/src/test/flutter_tester_device.dart:233:5)
<asynchronous suspension>
#2 FlutterPlatform._startTest.<anonymous closure> (package:flutter_tools/src/test/flutter_platform.dart:555:9)
<asynchronous suspension>
#3 FlutterPlatform._startTest (package:flutter_tools/src/test/flutter_platform.dart:617:11)
<asynchronous suspension>
```
I am guessing that, due to this crash, the test runner gets into a bad state, which causes thousands of misattributed lines.
Filing against `tool` (I don't believe this is specific to our own test infrastructure). | tool,c: tech-debt,team-tool | low | Critical |
2,612,305,177 | flutter | Make `flutter analyze --suggestions` agnostic to patch versions | Context https://github.com/flutter/flutter/pull/156502#discussion_r1803810604
Our Gradle/Java/AGP compat checking code includes max known and supported Gradle and AGP versions. We should probably make the code that handles these versions agnostic to patch versions, in my opinion. | platform-android,P1,team-android,triaged-android | medium | Minor |
2,612,324,294 | svelte | Flash of content if a `fly` in-transition started immediately after out-transition | ### Describe the bug
Most of the time, if the element is shown immediately after its out-transition ends, it momentarily flashes in its final DOM state before the in-transition kicks in.
It doesn't matter whether the element is shown in `onoutroend` or in an [$effect teardown in use](https://svelte.dev/docs/svelte/use).
Spamming `await tick()` everywhere doesn't help at all, but wrapping the code that starts the in-transition in `setTimeout(..., 0)` does. This makes me suspect task-queue starvation by the microtask queue, but I don't understand JS at that level of depth.
The bug reproduces on Brave Browser 129.1.70.126, desktop Safari 16.6.1, and mobile Safari. The same code works correctly on Svelte v4.
### Reproduction
[REPL link @5.2.3](https://svelte.dev/playground/87db0d4896204488a5e490d29e9a3437?version=5.2.3)
Same code [works fine on 4.2.19](https://svelte.dev/playground/eb2e676081f6449e9116fe801a4e5cf1?version=4.2.19)
### Logs
_No response_
### System Info
```shell
System:
OS: macOS 11.7.10
CPU: (4) x64 Intel(R) Core(TM) i7-4650U CPU @ 1.70GHz
Memory: 274.89 MB / 8.00 GB
Shell: 3.6.1 - /usr/local/bin/fish
Binaries:
Node: 22.7.0 - ~/.local/share/nvm/v22.7.0/bin/node
Yarn: 1.22.19 - /usr/local/bin/yarn
npm: 10.8.2 - ~/.local/share/nvm/v22.7.0/bin/npm
Browsers:
Brave Browser: 129.1.70.126
Chrome: 126.0.6478.63
Safari: 16.6.1
npmPackages:
svelte: ^5.0.0 => 5.0.5
```
### Severity
annoyance | bug,transition/animation | low | Critical |
2,612,328,994 | PowerToys | awake not observing the powertoys setting of keep awake indefinitely | ### Microsoft PowerToys version
0.85.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Awake
### Steps to reproduce
[PowerToysReport_2024-10-24-14-11-09.zip](https://github.com/user-attachments/files/17511188/PowerToysReport_2024-10-24-14-11-09.zip)
enable awake
select keep awake indefinitely
enable keep screen on
### ✔️ Expected Behavior
screen should not sleep
### ❌ Actual Behavior
screen sleeps
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response | low | Major |
2,612,361,234 | pytorch | `torch.distributed._state_dict_utils._broadcast_tensors` does not properly support CPU tensors. | ### 🐛 Describe the bug
When using `torch.distributed._state_dict_utils._broadcast_tensors`, tensors that need to be broadcast may live on the CPU (for example with a CPU-offloaded optimizer state dict); however, the current PyTorch 2.5 implementation does not handle this, because the NCCL collectives require the broadcast tensor to be on a CUDA device.
This came about when using PyTorch Lightning with CPU offloading of the optimizer states (via FSDP2). A minimal reproducible example using the internal calls looks like this.
```python
import os
import torch
import torch.distributed as dist
import torch.distributed._state_dict_utils
import torch.multiprocessing as mp
def run(rank, size):
"""Distributed function to be implemented later."""
torch.distributed._state_dict_utils._broadcast_tensors(
{"key": torch.zeros(1)}, local_state_dict={}, device=torch.device("cpu"), keys=["key"]
)
def init_process(rank, size, fn):
"""Initialize the distributed environment."""
os.environ["MASTER_ADDR"] = "127.0.0.1"
os.environ["MASTER_PORT"] = "29500"
dist.init_process_group(backend="nccl", rank=rank, world_size=size)
fn(rank, size)
if __name__ == "__main__":
size = 2
processes = []
mp.set_start_method("spawn")
for rank in range(size):
p = mp.Process(target=init_process, args=(rank, size, run))
p.start()
processes.append(p)
for p in processes:
p.join()
```
A possible correction to this would be as follows
```python
def _broadcast_tensors(
full_state_dict: dict[str, Any],
local_state_dict: dict[str, Any],
keys: list[str],
device: torch.device,
pg: torch.distributed.ProcessGroup | None = None,
) -> None:
tensors = []
for key in keys:
if torch.distributed.get_rank() == 0:
full_state = full_state_dict[key]
assert isinstance(full_state, torch.Tensor)
full_tensor = full_state.detach().to(device)
else:
tensor_info = full_state_dict[key]
full_tensor = torch.empty(
size=tensor_info.size,
device=device,
dtype=tensor_info.dtype,
)
tensors.append(full_tensor)
local_state = local_state_dict.get(key)
if local_state is None:
continue
if isinstance(local_state, torch.distributed._tensor.DTensor): # pyright: ignore[reportPrivateImportUsage]
local_state_dict[key] = (local_state, full_tensor)
else:
local_state_dict[key] = full_tensor
if pg is None:
pg = torch.distributed.distributed_c10d._get_default_group()
tensors = [tensor.to(pg._device_types[0]) for tensor in tensors] # cast to the process group device
if len(tensors) > 1:
torch.distributed._broadcast_coalesced(pg, tensors, 500, 0) # pyright: ignore[reportPrivateImportUsage]
else:
torch.distributed.broadcast(tensors[0], src=0, group=pg)
tensors = [tensor.to(device) for tensor in tensors] # cast back to the original device
# Because the way the code prior to the broadcast operates by reference, we need to redefine these elements prior to distribution
if isinstance(local_state_dict[keys[0]][0], torch.distributed._tensor.DTensor): # pyright: ignore[reportPrivateImportUsage]
local_state_dict[keys[0]] = (local_state_dict[keys[0]][0], tensors[0])
else:
local_state_dict[keys[0]] = tensors[0]
torch.distributed._state_dict_utils._distribute_tensors(local_state_dict, keys, device, pg)
```
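One subtle point in the patch above is the re-binding of `local_state_dict[keys[0]]` after the device round-trip: `.to()` returns a new tensor object whenever the device changes, so references captured before the cast go stale. A minimal torch-free sketch (the `FakeTensor` class below is a made-up stand-in, not a real API) illustrates why:

```python
class FakeTensor:
    """Stand-in for torch.Tensor: .to() returns a *new* object when the
    target device differs, mirroring torch's semantics."""

    def __init__(self, device: str):
        self.device = device

    def to(self, device: str) -> "FakeTensor":
        return self if device == self.device else FakeTensor(device)


local_state_dict = {}
tensors = [FakeTensor("cpu")]
local_state_dict["key"] = tensors[0]       # reference captured before the cast

tensors = [t.to("cuda") for t in tensors]  # cast to the process-group device
tensors = [t.to("cpu") for t in tensors]   # cast back after the broadcast

# The dict still points at the original object, not the round-tripped one:
assert local_state_dict["key"] is not tensors[0]

# Hence the patch re-binds the dict entry before calling _distribute_tensors:
local_state_dict["key"] = tensors[0]
assert local_state_dict["key"] is tensors[0]
```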
It is also possible to make an example using either PyTorch Lightning or `set_optimizer_state_dict`, which are the natural places this bug would crop up, but they are less minimal in highlighting what the bug is.
### Versions
```
PyTorch version: 2.5.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-122-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.90.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
Frequency boost: enabled
CPU max MHz: 2001.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] hypothesis-torch==0.7.19
[pip3] mypy==1.12.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.5.0
[pip3] torchmetrics==1.5.0
[pip3] torchvision==0.20.0
[pip3] triton==3.1.0
[conda] hypothesis-torch 0.7.19 pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-lightning 2.4.0 pypi_0 pypi
[conda] torch 2.5.0 pypi_0 pypi
[conda] torchmetrics 1.5.0 pypi_0 pypi
[conda] torchvision 0.20.0 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed | low | Critical |
2,612,395,501 | go | x/tools/gopls: identifies arbitrary dotted text in strings as links | <!--
For asking questions, see:
- [Stack Overflow](https://stackoverflow.com/questions/tagged/go+visual-studio-code)
- [GitHub Discussions (Help)](https://github.com/golang/vscode-go/discussions/categories/help)
- [`#vscode` channel in Gophers Slack](https://invite.slack.golangbridge.org/messages/vscode)
Before filing an issue, please review our troubleshooting guides
* [Troubleshooting problems with debugging](https://github.com/golang/vscode-go/wiki/debugging#troubleshooting)
* [Troubleshooting other problems](https://github.com/golang/vscode-go/wiki/troubleshooting)
Please answer these questions before submitting your issue. Thanks!
-->
### What version of Go, VS Code & VS Code Go extension are you using?
<details><summary>Version Information</summary><br>
* Run `go version` to get version of Go from _the VS Code integrated terminal_.
```zsh
% go version
go version go1.23.1 darwin/amd64
```
* Run `gopls -v version` to get version of Gopls from _the VS Code integrated terminal_.
```zsh
% $(go env GOPATH)/bin/gopls -v version
Build info
----------
golang.org/x/tools/gopls v0.16.2
golang.org/x/tools/gopls@v0.16.2 h1:K1z03MlikHfaMTtG01cUeL5FAOTJnITuNe0TWOcg8tM=
github.com/BurntSushi/toml@v1.2.1 h1:9F2/+DoOYIOksmaJFPw1tGFy1eDnIJXg+UHjuD8lTak=
github.com/google/go-cmp@v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
golang.org/x/exp/typeparams@v0.0.0-20221212164502-fae10dda9338 h1:2O2DON6y3XMJiQRAS1UWU+54aec2uopH3x7MAiqGW6Y=
golang.org/x/mod@v0.20.0 h1:utOm6MM3R3dnawAiJgn0y+xvuYRsm1RKM/4giyfDgV0=
golang.org/x/sync@v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ=
golang.org/x/telemetry@v0.0.0-20240829154258-f29ab539cc98 h1:Wm3cG5X6sZ0RSVRc/H1/sciC4AT6HAKgLCSH2lbpR/c=
golang.org/x/text@v0.16.0 h1:a94ExnEXNtEwYLGJSIUxnWoxoRz/ZcCsV63ROupILh4=
golang.org/x/tools@v0.22.1-0.20240829175637-39126e24d653 h1:6bJEg2w2kUHWlfdJaESYsmNfI1LKAZQi6zCa7LUn7eI=
golang.org/x/vuln@v1.0.4 h1:SP0mPeg2PmGCu03V+61EcQiOjmpri2XijexKdzv8Z1I=
honnef.co/go/tools@v0.4.7 h1:9MDAWxMoSnB6QoSqiVr7P5mtkT9pOc1kSxchzPCnqJs=
mvdan.cc/gofumpt@v0.6.0 h1:G3QvahNDmpD+Aek/bNOLrFR2XC6ZAdo62dZu65gmwGo=
mvdan.cc/xurls/v2@v2.5.0 h1:lyBNOm8Wo71UknhUs4QTFUNNMyxy2JEIaKKo0RWOh+8=
go: go1.23.1
```
* Run `code -v` or `code-insiders -v` to get version of VS Code or VS Code Insiders.
```zsh
% code -v
[1002/194224.823600:ERROR:codesign_util.cc(109)] SecCodeCheckValidity: Error Domain=NSOSStatusErrorDomain Code=-67062 "(null)" (-67062)
1.93.1
38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40
x64
```
* Check your installed extensions to get the version of the VS Code Go extension
`v0.42.1`
* Run Ctrl+Shift+P (Cmd+Shift+P on Mac OS) > `Go: Locate Configured Go Tools` command.
```markdown
# Tools Configuration
## Environment
GOBIN: undefined
toolsGopath:
gopath: /Users/marc/.asdf/installs/golang/1.23.1/packages
GOROOT: /Users/marc/.asdf/installs/golang/1.23.1/go
PATH: /usr/local/anaconda3/condabin:/Users/marc/.asdf/shims:/usr/local/opt/asdf/libexec/bin:/Users/marc/.cargo/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin:/opt/X11/bin:/Library/Apple/usr/bin:/usr/local/MacGPG2/bin:/Library/TeX/texbin:/Applications/Wireshark.app/Contents/MacOS:/Applications/iTerm.app/Contents/Resources/utilities
## Tools
go: /Users/marc/.asdf/shims/go: go version go1.23.1 darwin/amd64
gopls: /Users/marc/.asdf/installs/golang/1.23.1/packages/bin/gopls (version: v0.16.2 built with go: go1.23.1)
gotests: /Users/marc/.asdf/installs/golang/1.23.1/packages/bin/gotests (version: v1.6.0 built with go: go1.23.1)
gomodifytags: /Users/marc/.asdf/installs/golang/1.23.1/packages/bin/gomodifytags (version: v1.17.0 built with go: go1.23.1)
impl: /Users/marc/.asdf/installs/golang/1.23.1/packages/bin/impl (version: v1.4.0 built with go: go1.23.1)
goplay: /Users/marc/.asdf/installs/golang/1.23.1/packages/bin/goplay (version: v1.0.0 built with go: go1.23.1)
dlv: /Users/marc/.asdf/installs/golang/1.23.1/packages/bin/dlv (version: v1.23.1 built with go: go1.23.1)
staticcheck: /Users/marc/.asdf/installs/golang/1.23.1/packages/bin/staticcheck (version: v0.5.1 built with go: go1.23.1)
## Go env
Workspace Folder (bugreport): /Users/marc/Projects/go/bugreport
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/Users/marc/Library/Caches/go-build'
GOENV='/Users/marc/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/marc/.asdf/installs/golang/1.23.1/packages/pkg/mod'
GONOPROXY='gitlab.com/nielsen-media/*'
GONOSUMDB='gitlab.com/nielsen-media/*'
GOOS='darwin'
GOPATH='/Users/marc/.asdf/installs/golang/1.23.1/packages'
GOPRIVATE='gitlab.com/nielsen-media/*'
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/Users/marc/.asdf/installs/golang/1.23.1/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/Users/marc/.asdf/installs/golang/1.23.1/go/pkg/tool/darwin_amd64'
GOVCS=''
GOVERSION='go1.23.1'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/Users/marc/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='clang'
CXX='clang++'
CGO_ENABLED='1'
GOMOD='/Users/marc/Projects/go/bugreport/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch x86_64 -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/0d/8r9jqxd54rx2l7n0vqv2f0qm0000gn/T/go-build2339330694=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
</details>
### Share the Go related settings you have added/edited
Run `Preferences: Open Settings (JSON)` command to open your settings.json file.
Share all the settings with the `go.` or `["go"]` or `gopls` prefixes.
```json
{
"go.coverOnSingleTest": true,
"go.coverOnSingleTestFile": true,
"go.enableCodeLens": {},
"go.formatTool": "goimports",
"go.testFlags": [
"-count=1",
"-v",
"-args",
"-test.v"
],
"go.testTimeout": "0",
"go.toolsManagement.autoUpdate": false,
"gopls": {
"ui.semanticTokens": true,
},
}
```
### Describe the bug
Arbitrary dotted text in strings is being identified as links and is underlined when "editor.links" is enabled (set to true). When this extension, golang.go, is enabled, this happens. When this extension is disabled, it does not.
I'd expect only URLs with a scheme to be underlined as links when "editor.links" is enabled. Maybe "real" FQDNs without a scheme could be detected and underlined as links, but that would require name lookups to verify dotted text as links, which would slow things down quite a bit.
### Steps to reproduce the behavior:
- Create a simple Go program
- `mkdir bugreport; cd bugreport`
- `go mod init bugreport`
- Create a file `main.go` and paste the following content into it:
```go
package main
import (
"log"
"time"
)
type person struct {
name string
age int
}
func main() {
announcement := time.Date(2009, 11, 10, 0, 0, 0, 0, time.UTC)
years := int(time.Since(announcement).Hours() / 24 / 365)
p := person{name: "gopher", age: years}
log.Printf("p.name is %q, p.age is %d", p.name, p.age)
log.Printf("go.dev is my favorite site!")
log.Printf("https://p.name is not a real site!")
}
```
With "editor.links" enabled (set to true), and the golang.go extension **enabled**, the following dotted text in the `log.Printf` format strings is underlined as links in `main()`
- p.name
- go.dev
- https://p.name
With "editor.links" enabled (set to true), and the golang.go extension **disabled**, the following dotted text in the `log.Printf` format strings is underlined as links in `main()`
- https://p.name
With "editor.links" disabled (set to false), none of the dotted text is underlined as links.
### Screenshots or recordings
#### "editor.links" enabled && golang.go extension enabled

#### "editor.links" enabled && golang.go extension disabled

#### "editor.links" disabled && golang.go extension enabled

| gopls,Tools,upstream-tools | low | Critical |
2,612,429,299 | go | proposal: strings, bytes: Add JoinSeq | ### Proposal Details
#61901 added SplitSeq and friends to bytes, but nothing for joining. Right now you have to do slices.Collect first or use a bytes.Buffer/strings.Builder, which feels unfortunate. I propose to add `strings.JoinSeq(seq iter.Seq[string], sep string) string` and the bytes equivalent as well. | Proposal | low | Major |
2,612,435,042 | react | [React 19] Inconsistent hoisting behavior between links created with `preload()` and manually inserted ones | ## Summary
When trying to preload the `srcset` of an image with a `link` tag, I found an unexpected behavior.
### Examples
#### Preloading srcset with `preload()`
``` js
preload('https://sample.com/foo', {
as: 'image',
imageSrcSet: 'https://sample.com/foo 1x',
});
```
##### Results (As expected)
``` html
<html>
<head>
<link rel="preload" as="image" imagesrcset="https://sample.com/foo 1x" />
</head>
<body></body>
</html>
```
#### Preloading srcset with `<link >`
``` html
<link
rel="preload"
as="image"
imageSrcSet="https://sample.com/foo 1x"
/>
```
##### Expected
``` html
<html>
<head>
<link rel="preload" as="image" imagesrcset="https://sample.com/foo 1x" />
</head>
<body></body>
</html>
```
##### Current
``` html
<html>
<head></head>
<body>
<link rel="preload" as="image" imagesrcset="https://sample.com/foo 1x" />
</body>
</html>
```
#### Preloading src with `preload()`
``` js
preload('https://sample.com/foo', {
as: 'image',
});
```
##### Results (As expected)
``` html
<html>
<head>
<link rel="preload" as="image" href="https://sample.com/foo" />
</head>
<body></body>
</html>
```
#### Preloading src with `<link >`
``` html
<link
rel="preload"
as="image"
href="https://sample.com/foo"
/>
```
##### Results (As expected)
``` html
<html>
<head>
<link rel="preload" as="image" href="https://sample.com/foo" />
</head>
<body></body>
</html>
```
#### Rendering `<img />` with srcset
``` html
<img src="https://sample.com/bar" srcSet="https://sample.com/bar 1x" />
```
##### Results (As expected)
``` html
<html>
<head>
<link rel="preload" as="image" imagesrcset="https://sample.com/foo 1x" />
</head>
<body>
<img src="https://sample.com/bar" srcset="https://sample.com/bar 1x" />
</body>
</html>
```
Is this intended? Why do links created with `preload()` end up hoisted to the head, while manually inserted links without the `href` attribute do not?
| Resolution: Stale,React 19 | medium | Minor |
2,612,492,249 | next.js | Next.JS does NOT support Docker Swarm. | ### Link to the code that reproduces this issue
https://github.com/SanderCokart/sandercokart.com/tree/development
### To Reproduce
`docker service create` and `docker stack deploy` both launch the nodes and services, but going to localhost:3000 results in nothing.
### Current vs. Expected behavior
Going to localhost:3000 results in nothing.
**Expected behavior:** webpage shows up
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Pro
Available memory (MB): 65268
Available CPU cores: 16
Binaries:
Node: 20.17.0
npm: N/A
Yarn: N/A
pnpm: 9.11.0
Relevant Packages:
next: 14.2.11 // An outdated version detected (latest is 15.0.1), upgrade is highly recommended!
eslint-config-next: 14.2.11
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.2
Next.js Config:
output: standalone
```
### Which area(s) are affected? (Select all that apply)
Output (export/standalone)
### Which stage(s) are affected? (Select all that apply)
Other (Deployed)
### Additional context
After a `docker swarm init` you can run `docker service create -p 80:80 --replicas 1 --name nginx nginx` and then visit localhost:80 (make sure to disable Apache in WSL by running `sudo systemctl stop apache2.service` in a WSL terminal); you can see the nginx welcome page.
Using curl to go to localhost:3000 you get this:
```
curl: (52) Empty reply from server
``` | bug,Output (export/standalone) | low | Minor |
2,612,509,514 | next.js | Example Dockerfile does not copy .npmrc / .yarnrc files | ### Verify canary release
- [X] I verified that the issue exists in the latest Next.js canary release
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Pro
Available memory (MB): 32566
Available CPU cores: 16
Binaries:
Node: 20.12.2
npm: N/A
Yarn: N/A
pnpm: 9.10.0
Relevant Packages:
next: 14.2.10 // An outdated version detected (latest is 15.0.1), upgrade is highly recommended!
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
output: standalone
```
### Which example does this report relate to?
with-docker
### What browser are you using? (if relevant)
_No response_
### How are you deploying your application? (if relevant)
_No response_
### Describe the Bug
Currently the example `Dockerfile` does not attempt to copy `.npmrc` / `.yarnrc` files, which can lead to issues when installing packages from a private registry.
### Expected Behavior
Example `Dockerfile` should attempt to copy those configuration files just like it attempts to copy the lock files
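For instance, extending the existing `COPY` line (a sketch; the exact line differs between versions of the example, and the trailing `*` keeps the build from failing when a given file is absent, since `package.json` always matches):

```dockerfile
# Copy registry configuration (if present) alongside the lock files so that
# installs from private registries work inside the build stage.
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* .npmrc* .yarnrc* ./
```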
### To Reproduce
1. Add a package from a private registry to your NextJS project
2. Copy the example `Dockerfile` into your project
3. Attempt to build the docker image | examples | low | Critical |
2,612,510,079 | ollama | RFC: multi-template models | Some models are being released with multiple chat_templates, eg [Aya-Expanse 32B & 8B](https://huggingface.co/CohereForAI/aya-expanse-8b/blob/b9848575c8731981dfcf2e1f3bfbcb917a2e585d/tokenizer_config.json#L304) which has a default template but also templates tuned for RAG and Tool use.
There's a few ways these could be supported.
1. Mash all of the templates together and add a new option to the API request that the golang template can access.
```go
{{- if eq .TemplateSelector "rag" }}{{ system_message = "Decide which of the retrieved documents are relevant to the user" }}
{{- else if eq .TemplateSelector "tool_use" }}{{ system_message = "You can use the following tools" }}
{{- else }}{{ system_message = "You are Aya, a brilliant, sophisticated, multilingual AI-assistant" }}
{{- end }}
<BOS_TOKEN>
...
```
2. Give the API the ability to supply a template. This currently works for `/api/generate` but not `/api/chat`, and requires some surgery to make it work.
3. Create different versions of the model with the same GGUF with different TEMPLATE statements. This is easiest but requires loading the same model multiple times.
4. Modifying TEMPLATE processing to support TEMPLATE[]. Also requires metadata in the API call to choose the appropriate template.
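Option 1 can be illustrated with a small runnable sketch; the `TemplateSelector` field name and the prompt strings here are placeholders for illustration, not Ollama's actual request schema:

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// One template that branches on a selector supplied with the request.
const chatTemplate = `{{- if eq .TemplateSelector "rag" -}}
Decide which of the retrieved documents are relevant to the user.
{{- else if eq .TemplateSelector "tool_use" -}}
You can use the following tools.
{{- else -}}
You are Aya, a brilliant, sophisticated, multilingual AI-assistant.
{{- end -}}`

// render executes the template for one selector value.
func render(selector string) string {
	var sb strings.Builder
	t := template.Must(template.New("chat").Parse(chatTemplate))
	if err := t.Execute(&sb, map[string]string{"TemplateSelector": selector}); err != nil {
		panic(err)
	}
	return sb.String()
}

func main() {
	fmt.Println(render("rag"))
	fmt.Println(render("anything-else"))
}
```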
Is there any appetite for supporting multi-template models? | feature request | low | Minor |
2,612,517,201 | react | [DevTools Bug]: React profiler reporting false rendering times | ### Website or app
https://codesandbox.io/p/sandbox/suspicious-darkness-zpfcld
### Repro steps
Visit sandbox: https://codesandbox.io/p/sandbox/suspicious-darkness-zpfcld?file=%2Fsrc%2FApp.js%3A19%2C22
(This is a modified version of a sandbox from Dan Abramov's blog https://overreacted.io/before-you-memo/)
Observe that ExpensiveTree does not actually render when you type in the text input. I confirmed this with the Chrome CPU profiler (basically 100% idle):
<img width="341" alt="Screenshot 2024-10-24 at 1 36 54 PM" src="https://github.com/user-attachments/assets/e6444e4a-1543-4122-b9f4-5b22e41312b0">
Also, ExpensiveTree waits a random amount of time every time it updates. It renders the resulting delay, and you can observe that the value never changes.
However, React Profiler falsely reports that ExpensiveTree is actually rendering:
<img width="1590" alt="Screenshot 2024-10-24 at 1 38 06 PM" src="https://github.com/user-attachments/assets/8d09c18e-2a5f-473e-b65a-13d4c6b92449">
The times are all the same, which suggests that the Profiler UI is:
- falsely marking ExpensiveTree as rendered when it wasn't
- probably using the initial render time as the value
### How often does this bug happen?
Every time
### DevTools package (automated)
_No response_
### DevTools version (automated)
_No response_
### Error message (automated)
_No response_
### Error call stack (automated)
_No response_
### Error component stack (automated)
_No response_
### GitHub query string (automated)
_No response_ | Type: Bug,Status: Unconfirmed,Component: Developer Tools | medium | Critical |
2,612,518,855 | flutter | [camera] Integration tests fail when Android example app targets SDK >=31 | Upon expecting an image to be at a resolution of 240, it receives 1080 instead. This is both `camera_android` and `camera_android_camerax`.
Related to https://github.com/flutter/flutter/issues/152929 | a: tests,platform-android,p: camera,package,P2,c: flake,team-android,triaged-android | low | Minor |
2,612,527,693 | react-native | Unhandled JS Exception: Invariant Violation: TurboModuleRegistry.getEnforcing(...): 'DevMenu' | ### Description
Hi, Guys.
Im stuck in this error for a while. It only happens in release mode. I dont know if it is a react native bug or not. Can any one help me please?
The error:
Unhandled JS Exception: Invariant Violation: TurboModuleRegistry.getEnforcing(...): 'DevMenu' when app will run on any device.
### Steps to reproduce
1 - Change schema to Release
2 - Run
### React Native Version
0.74.2
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 14.2.1
CPU: (8) x64 Intel(R) Core(TM) i5-8257U CPU @ 1.40GHz
Memory: 77.69 MB / 8.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 18.17.0
path: /usr/local/bin/node
Yarn:
version: 3.6.4
path: /usr/local/bin/yarn
npm:
version: 6.14.18
path: ~/DevFontes/SarApp/node_modules/.bin/npm
Watchman:
version: 2024.08.26.00
path: /usr/local/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /usr/local/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.5
- iOS 17.5
- macOS 14.5
- tvOS 17.5
- visionOS 1.2
- watchOS 10.5
Android SDK: Not Found
IDEs:
Android Studio: 2021.2 AI-212.5712.43.2112.8512546
Xcode:
version: 15.4/15F31d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.9
path: /Users/alanmachado/.jenv/shims/javac
Ruby:
version: 2.6.10
path: /usr/bin/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.2.0
wanted: 18.2.0
react-native:
installed: 0.74.6
wanted: 0.74.6
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: false
info React Native v0.76.0 is now available (your project is running on v0.74.6).
info Changelog: https://github.com/facebook/react-native/releases/tag/v0.76.0
info Diff: https://react-native-community.github.io/upgrade-helper/?from=0.74.6
info For more info, check out "https://reactnative.dev/docs/upgrading?os=macos".
```
### Stacktrace or Logs
```text
Unhandled JS Exception: Invariant Violation: TurboModuleRegistry.getEnforcing(...): 'DevMenu' could not be found. Verify that a module by this name is registered in the native binary.Bridgeless mode: false. TurboModule interop: false. Modules loaded: {"NativeModules":["UIManager","PlatformConstants","DeviceInfo","SourceCode","BlobModule"],"TurboModules":[],"NotFound":["NativePerformanceCxx","NativePerformanceObserverCxx","RedBox","BugReporting","HeadlessJsTaskSupport","DevMenu"]}, js engine: hermes
```
### Reproducer
personal code
### Screenshots and Videos
_No response_ | Needs: Repro,Newer Patch Available,Needs: Attention | low | Critical |
2,612,534,794 | kubernetes | cloud-provider HasClusterID related functionality should have better documentation in the code | _note from @elmiko, i am transferring this issue from its original location https://github.com/kubernetes/cloud-provider/issues/71_
originally posted by @guettli
Looking at the basic_main.go
```go
if !cloud.HasClusterID() {
if config.ComponentConfig.KubeCloudShared.AllowUntaggedCloud {
klog.Warning("detected a cluster without a ClusterID. A ClusterID will be required in the future. Please tag your cluster to avoid any future issues")
} else {
klog.Fatalf("no ClusterID found. A ClusterID is required for the cloud provider to function properly. This check can be bypassed by setting the allow-untagged-cloud option")
}
}
```
[basic_main.go](https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/cloud-provider/sample/basic_main.go#L90)
But where is ClusterID actually used?
As far as I know, allow-untagged-cloud is deprecated, yet ClusterID does not appear to be used anywhere.
The current state is confusing.
Please explain how a CCM should implement a ClusterID.
/sig cloud-provider | sig/storage,help wanted,sig/cloud-provider,triage/accepted | medium | Major |
2,612,554,272 | flutter | [in_app_purchase_storekit] ☂️ StoreKit2 Feature Tracker | Now that in_app_purchase has been [migrated from StoreKit to StoreKit 2](https://github.com/flutter/flutter/issues/116383), incrementally add missing StoreKit 2 features:
### StoreKit 2 Features 😄
- Restore Purchases/Syncing Purchases
- [ ] https://github.com/flutter/flutter/issues/118960
- This may or not be covered by restorePurchases in the StoreKit2 implementation.
- https://developer.apple.com/documentation/storekit/transaction/3851204-currententitlements/
- [ ] https://github.com/flutter/flutter/issues/159631
- [ ] Subscription Management
- [ ] https://github.com/flutter/flutter/issues/147698
- [ ] https://github.com/flutter/flutter/issues/143514
- [ ] https://github.com/flutter/flutter/issues/159664
- [ ] https://github.com/flutter/flutter/issues/160826
- [ ] https://github.com/flutter/flutter/issues/161393
- [ ] StoreKitTest
- [ ] https://github.com/flutter/flutter/issues/65082
- This may be able to be closed as is, but maybe documentation should be written about how to use StoreKitTest first
- [ ] Transaction properties
- [ ] https://github.com/flutter/flutter/issues/158225
- [x] https://github.com/flutter/flutter/issues/158882
### Package Level Issues 📦
- [x] https://github.com/flutter/flutter/issues/158894
- [ ] https://github.com/flutter/flutter/issues/159871
### Bugs 🐛
- [x] https://github.com/flutter/flutter/issues/159080
- [x] https://github.com/flutter/flutter/issues/159843
- [x] https://github.com/flutter/flutter/issues/160148
- [ ] https://github.com/flutter/flutter/issues/159807
| c: new feature,platform-ios,platform-mac,P2,team-ios,triaged-ios | low | Critical |
2,612,554,306 | storybook | [Bug]: @storybook/react - @types/estree is outdated | ### Describe the bug
Getting this error when building a image snapshot plugin for vitest browser mode integration:
```sh
node_modules/rollup/dist/rollup.d.ts (4:0): "BaseNode" is not exported by "node_modules/@types/estree/index.d.ts", imported by "node_modules/rollup/dist/rollup.d.ts".
DTS Build failed
```
This happens because I need to augment the `BrowserPage` type in `@vitest/browser/context`.
`rollup` is using `@types/estree` 1.0.6, while `@storybook/react` is using `0.0.51`.
### Reproduction link
https://github.com/repobuddy/storybook-image-snapshot
### Reproduction steps
- Checkout branch `sb-estree`
- `npm i`
- `npm run build`
Should observe:
```sh
DTS ⚡️ Build success in 802ms
DTS dist/preview.d.ts 61.00 B
node_modules/rollup/dist/rollup.d.ts (4:0): "BaseNode" is not exported by "node_modules/@types/estree/index.d.ts", imported by "node_modules/rollup/dist/rollup.d.ts".
Error: error occured in dts build
```
### System
```sh
Storybook Environment Info:
System:
OS: Linux 5.15 Ubuntu 22.04.3 LTS 22.04.3 LTS (Jammy Jellyfish)
CPU: (16) x64 AMD Ryzen 7 5800X 8-Core Processor
Shell: 5.8.1 - /usr/bin/zsh
Binaries:
Node: 20.10.0 - /run/user/1000/fnm_multishells/962_1729806910856/bin/node
Yarn: 1.22.21 - /run/user/1000/fnm_multishells/962_1729806910856/bin/yarn
npm: 10.2.3 - /run/user/1000/fnm_multishells/962_1729806910856/bin/npm <----- active
pnpm: 9.12.1 - /run/user/1000/fnm_multishells/962_1729806910856/bin/pnpm
npmPackages:
@storybook/addon-essentials: ^8.3.6 => 8.3.6
@storybook/addon-interactions: ^8.3.6 => 8.3.6
@storybook/addon-links: ^8.3.6 => 8.3.6
@storybook/blocks: ^8.3.6 => 8.3.6
@storybook/experimental-addon-test: ^8.3.6 => 8.3.6
@storybook/icons: ^1.2.10 => 1.2.10
@storybook/react: ^8.3.6 => 8.3.6
@storybook/react-vite: ^8.3.6 => 8.3.6
@storybook/test: ^8.3.6 => 8.3.6
storybook: ^8.3.6 => 8.3.6
```
### Additional context
_No response_ | bug,dependencies,help wanted,sev:S3 | low | Critical |
2,612,560,268 | godot | In ParticleEmitters (at least CPU2D), the particles' gradient breaks if the parent has an animation playing. | ### Tested versions
Reproducible in v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2070 SUPER (NVIDIA; 32.0.15.6094) - Intel(R) Core(TM) i9-10900F CPU @ 2.80GHz (20 Threads)
### Issue description
When using CPUParticles2D (I have not tested other particle nodes) and applying a colour gradient, the colour works as expected, shifting further right along the gradient the longer a particle's lifespan goes on. However, if the particle node or one of its parents has an AnimationPlayer playing an animation on it, the gradient seemingly breaks: the colour of the particles instead appears to be based on when the particles spawn, which is wrong. I have attached an example below. They fix themselves instantly once the animation ends.
https://github.com/user-attachments/assets/b3bba603-b261-4795-8da8-08e3ac277778
### Steps to reproduce
1) Create a CPUParticles2D
2) Place a gradient on the particles (increase the amount of particles to make it easier to see)
3) Create an AnimationPlayer
4) Make an animation that manipulates the particles (what the animation does doesn't matter; making the animation long or looping will make it easier to see)
5) Play the animation, and see the gradient freak out
### Minimal reproduction project (MRP)
N/A, follow the above steps | bug,topic:animation,topic:particles | low | Minor |
2,612,567,186 | vscode | Multi-diff editor is not screen reader accessible | 1. Enable a screen reader
2. Open the multi=diff editor
3. Navigate the diff
4. 🐛 , only the modified line is accessible, a user cannot navigate to the original line

| accessibility,multi-diff-editor | low | Minor |
2,612,594,524 | ui | [bug]: command clicks are ignored (although enter works as expected | ### Describe the bug
Given this component:
```jsx
const SatelliteSearch = () => {
const satellites = useAtomValue($satellites);
const [selectedSatelliteIndexes, setSelectedSatelliteIndexes] = useAtom(
$selectedSatellitesIndexes
);
const [isFocused, setIsFocused] = useState(false);
return (
<div className="absolute top-2 right-2 z-50 w-96">
<Command className="rounded-lg border shadow-md">
<CommandInput
placeholder="Search for a satellite by name or NORAD ID"
onFocus={() => setIsFocused(true)}
onBlur={() => setIsFocused(false)}
/>
<CommandList>
{isFocused && (
<>
<CommandEmpty>No results found.</CommandEmpty>
{satellites.map((satellite) => (
<CommandItem
key={`${satellite.name} (${satellite.satnum})`}
value={`${satellite.name} (${satellite.satnum})`}
onSelect={() => {
// if the satellite is already selected, remove it from the list
if (selectedSatelliteIndexes.has(satellite.index!)) {
selectedSatelliteIndexes.delete(satellite.index!);
setSelectedSatelliteIndexes(
new Set(selectedSatelliteIndexes)
);
} else {
setSelectedSatelliteIndexes(
new Set([satellite.index!, ...selectedSatelliteIndexes])
);
}
}}
>
<span className="flex flex-row items-center w-full">
{satellite.name} ({satellite.satnum})
</span>
{selectedSatelliteIndexes.has(satellite.index!) ? (
<CheckIcon className="ml-2 h-4 w-4 text-green-500" />
) : null}
</CommandItem>
))}
</>
)}
</CommandList>
</Command>
</div>
);
};
```
Selecting via Enter works, but clicks do nothing (the list is just closed with no action and nothing in the logs).
### Affected component/components
Command, Command Item
### How to reproduce
Provided in the description.
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
Nothing in the logs
```
### System Info
```bash
package.json:
"cmdk": "^1.0.0",
```
These two issues are both closed without an actual fix.

- https://github.com/shadcn-ui/ui/issues/2944
- https://github.com/shadcn-ui/ui/issues/2963
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,612,605,408 | three.js | KTX2Loader: Improve transcoder target format selection | ### Description
Related:
- https://github.com/mrdoob/three.js/pull/29730
As described in the issue above, Firefox may indicate support for compressed formats that it actually emulates in drivers, decompressing or transcoding to another format. This emulation has much higher cost than just transcoding to the 'right' format in KTX2Loader, but we don't necessarily know what the right format is because Firefox doesn't say. We don't have this problem on ANGLE, but currently we are preserving a less-optimal choice of target transcoding format for ANGLE because of the Firefox issue.
### Solution
- [ ] Consider detecting the Firefox user agent and adjusting format selection as a special case, if we can do so reliably
- For Linux+Firefox+RadeonSI we should prefer BCn formats over ETC2 as the target from ETC1S, when both are "supported"
- For other platforms, is the reverse happening? Would Firefox report BCn support when only ETC2 is actually available?
- [ ] Test major browsers on multiple platforms, check target format selection, and implement unit tests for transcoder target formats if possible (WASM in tests may be tricky...)
- [ ] Consider preferring BC1/3 over BC7 for ETC1S transcoding
- See https://github.com/KhronosGroup/3D-Formats-Guidelines/issues/25
- [ ] File an issue with Firefox about the emulated support
- [ ] Fix invalid WebGPU feature detection in KTX2Loader for (non-existent) texture-compression-bptc and texture-compression-etc1
### Alternatives
n/a
### Additional context
_No response_ | Suggestion,Loaders | low | Minor |
2,612,607,195 | node | dns.lookup timeout / cancellation | ### What is the problem this feature will solve?
I'm working on some dev tools that require the user to set up a local DNS record in their `/etc/hosts` file, and I'd like to check if this lookup of `some.custom.local` is resolving correctly to localhost.
If I understand correctly, dns `Resolver` does not hit the local OS dns stack, so I must use `dns.lookup`, however
the resolver methods have options for timeout/retries and cancellation, and lookup does not.
If successful, the dns lookup returns very quickly, yet if it fails it hangs for quite a long time while it probably retries and waits for a timeout. I understand the lookup method is doing very different things to the resolver method and this may be challenging, but nonetheless, I would like to be able to set a very short timeout or cancel the lookup if it does not succeed immediately.
### What is the feature you are proposing to solve the problem?
I'd like to be able to set timeout/retry options on `dns.promises.lookup`, or to receive a promise that can be cancelled.
### What alternatives have you considered?
Current workaround is to use `Promise.race`, but the process still hangs until the dns lookup finishes.
```js
// sleep resolves with `value` after `ms` milliseconds (promisified setTimeout)
const sleep = (ms, value) =>
  new Promise((resolve) => setTimeout(resolve, ms, value));

const raceResult = await Promise.race([
  sleep(10, false),
  dns.promises.lookup(devHost),
]);
if (raceResult === false) {
  throw new Error('DNS LOOKUP FAILED');
  // process.exit(1); // unreachable after the throw
}
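
// A variant of the same workaround packaged as a helper that rejects on
// timeout (a sketch, not a Node API). Note that the underlying dns.lookup
// call keeps running either way; this only stops waiting for it.
function withTimeout(promise, ms, message = 'timed out') {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(message)), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage: await withTimeout(dns.promises.lookup(devHost), 10);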
``` | dns,feature request | low | Critical |
2,612,607,982 | pytorch | [torch.export] round is not an allowed operator type | ### 🐛 Describe the bug
Hi team, just want to report that round isn't supported in export. This can be bypassed with `math.trunc(x + 0.5)`, which is supported.
repro:
```
class M(torch.nn.Module):
def forward(self, x):
ori_size = (
round(x.shape[-2] / 1),
round(x.shape[-1] / 1),
)
x = F.interpolate(x, size=ori_size, mode="bilinear")
return x
input1 = (torch.rand(1, 3, 28, 28, device="cuda"),)
input2 = (torch.rand(1, 3, 56, 56, device="cuda"),)
inputs = [input1, input2]
model = M().cuda()
_ = model(*input1)
dynamic_shapes = {
"x": {2: torch.export.Dim.DYNAMIC, 3: torch.export.Dim.DYNAMIC},
}
ep = torch.export.export(model, input1, dynamic_shapes=dynamic_shapes, strict=False)
path = torch._inductor.aot_compile(ep.module(), input1)
aot_model = torch._export.aot_load(path, device="cuda")
for input in inputs:
torch.testing.assert_close(aot_model(*input), model(*input))
```
error:
```
torch/_export/verifier.py", line 197, in _check_valid_op
raise SpecViolationError(
torch._export.verifier.SpecViolationError: Operator '<built-in function round>' is not an allowed operator type: (<class 'torch._ops.OpOverload'>, <class 'torch._ops.HigherOrderOperator'>)
Valid builtin ops: [<built-in function getitem>, <built-in function add>, <built-in function mul>, <built-in function sub>, <built-in function truediv>, <built-in function ge>, <built-in function le>, <built-in function gt>, <built-in function lt>, <built-in function eq>, <built-in function ne>, <built-in function floordiv>, <built-in function mod>, <built-in function and_>, <built-in function or_>, <built-in function not_>, <built-in function pow>, <built-in function neg>, <built-in function abs>, <built-in function ceil>, <built-in function floor>, <built-in function trunc>]Valid torch functions: (<class 'torch.autograd.grad_mode.set_grad_enabled'>, <function sym_int at 0x7fae060e30a0>, <function sym_float at 0x7fae060e3010>, <function sym_ite at 0x7fae060e3370>, <function sym_max at 0x7fae060e3130>, <function sym_min at 0x7fae060e3250>, <function sym_not at 0x7fae06305240>, <function _sym_sqrt at 0x7fae060e3400>, <built-in function _set_grad_enabled>, <function _enter_autocast at 0x7fae0127a950>, <function _exit_autocast at 0x7fae0127ac20>)
```
### Versions
trunk
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: pt2,oncall: export | low | Critical |
2,612,611,139 | ui | [feat]: close sidebar on item click (mobile) | ### Feature description
On mobile devices, where the sidebar is displayed as a drawer, clicking on a menu item navigates to the corresponding page but leaves the drawer open. This behavior is not ideal because the drawer continues to cover the main content area, preventing users from seeing the page they've navigated to.
### Current Implementation
```
<SidebarMenuButton asChild isActive={currentPath === "/analytics"}>
<Link href="/analytics">
<ChartSpline />
<span>Analytics</span>
</Link>
</SidebarMenuButton>
```
### Expected Behavior
When a user selects a menu item in the sidebar drawer on a mobile device, the drawer should automatically close after navigation. This allows the new page content to be immediately visible to the user.
### Question/Suggestion
Should we introduce an additional prop to the Sidebar component that enables the drawer to close when a SidebarMenuButton is clicked? Alternatively, is there another workaround or best practice we can implement to achieve this behavior?
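Until such a prop exists, one workaround is to close the mobile drawer from the item's click handler using the sidebar's `useSidebar()` hook, whose documented return value includes `isMobile` and `setOpenMobile`. A sketch of the handler logic (the function name is illustrative):

```typescript
// Returns an onClick handler that closes the mobile sidebar drawer after a
// menu item is activated; on desktop (isMobile === false) it does nothing.
function closeSidebarOnNavigate(
  isMobile: boolean,
  setOpenMobile: (open: boolean) => void,
): () => void {
  return () => {
    if (isMobile) {
      setOpenMobile(false);
    }
  };
}
```

In the component this would be wired up as `const { isMobile, setOpenMobile } = useSidebar()` and `<Link href="/analytics" onClick={closeSidebarOnNavigate(isMobile, setOpenMobile)}>`.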
### Affected component/components
Sidebar
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | medium | Critical |
2,612,612,125 | pytorch | Using int(shape) in export would result in silent specialization | ### 🐛 Describe the bug
Hi team, just reporting this problem. I can bypass it if I replace int with `math.trunc`.
repro:
```
class M(torch.nn.Module):
def forward(self, x):
ori_size = (
int(x.shape[-2] / 1),
int(x.shape[-1] / 1),
)
x = F.interpolate(x, size=ori_size, mode="bilinear")
return x
input1 = (torch.rand(1, 3, 28, 28, device="cuda"),)
input2 = (torch.rand(1, 3, 56, 56, device="cuda"),)
inputs = [input1, input2]
model = M().cuda()
_ = model(*input1)
dynamic_shapes = {
"x": {2: torch.export.Dim.DYNAMIC, 3: torch.export.Dim.DYNAMIC},
}
ep = torch.export.export(model, input1, dynamic_shapes=dynamic_shapes, strict=False)
path = torch._inductor.aot_compile(ep.module(), input1)
aot_model = torch._export.aot_load(path, device="cuda")
for input in inputs:
torch.testing.assert_close(aot_model(*input), model(*input))
```
error:
```
torch/testing/_comparison.py", line 1530, in assert_close
raise error_metas[0].to_error(msg)
AssertionError: The values for attribute 'shape' do not match: torch.Size([1, 3, 28, 28]) != torch.Size([1, 3, 56, 56]).
```
### Versions
trunk
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: pt2,oncall: export | low | Critical |
2,612,621,283 | flutter | `TextInputType` should support `webSearch` which maps to iOS `UIKeyboardType.webSearch` | ### Use case
iOS supports since iOS 7.0 the keyboard type `webSearch` which is a system keyboard that can be used in text fields that both search, but also cope with URLs. From the official documentation:
```objc
UIKeyboardTypeWebSearch API_AVAILABLE(ios(7.0)), // A default keyboard type with URL-oriented addition (shows space . prominently).
```

On Android this type could be mapped to `url` as it does pretty much the same, and has both space and . keys.
On iOS the `url` keyboard is useless for the search use case as it won't show a space bar.
I tried to workaround with custom iOS code but this is very tedious and I didn't find a solution.
Also there is a third-party package [keyboard_actions](https://pub.dev/packages/keyboard_actions) which allows to add a top-bar "." button, but this unfortauntely isn't the same user experience. Browsers like Safari, Chrome and Duck Duck Go use the iOS native `UIKeyboardTypeWebSearch` in the search bars as well.
### Proposal
1. `TextInputType` should get a new enumeration `webSearch`
2. On Android (or any other platform) this remapped to `url`
3. The platform dependent code in `flutter/engine` for iOS is adapted to map `webSearch` to `UIKeyboardTypeWebSearch`
If there is a chance this is accepted by the Flutter team, I am gladly willing to do the implementation and create PRs for flutter and the engine. | a: text input,platform-ios,framework,engine,P2,team-text-input,triaged-text-input | low | Major |
2,612,636,003 | angular | Animations in <ng-content> Not Triggered by animateChild() When Parent Component Uses Query + Stagger Animation | ### Which @angular/* package(s) are the source of the bug?
animations
### Is this a regression?
No
### Description
When a parent component has a query + stagger animation that uses animateChild() to animate queried child animations, animations applied to elements projected into <ng-content> are not triggered. Meanwhile, projected animations that are wrapped in child components are queried and animated correctly.
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/angular-866nmh?file=src%2Fmain.ts
### Please provide the exception or error you saw
_No response_
### Please provide the environment you discovered this bug in (run `ng version`)
angular 17
### Anything else?
_No response_ | area: animations | low | Critical |
2,612,641,960 | godot | 4-Channel Audio Input is not supported | ### Tested versions
- Reproducible in godot 4.3 stable
### System information
Windows 11 Pro; godot 4.3-stable 4-Channel Microphone Array
### Issue description
Trying to run the audio\mic_record sample on an ASUS laptop with a 4-channel microphone array produces no audio when recording, and throws the error below multiple times:
```
E 0:00:05:0391 thread_func: WASAPI: unsupported channel count in microphone!
<C++ Source> drivers/wasapi/audio_driver_wasapi.cpp:855 @ thread_func()
```
### Steps to reproduce
Run the audio\mic_record sample on a machine with a 4-channel microphone array input device.
The driver only caters for 1- or 2-channel input.
### Minimal reproduction project (MRP)
https://github.com/godotengine/godot/blob/77dcf97d82cbfe4e4615475fa52ca03da645dbd8/drivers/wasapi/audio_driver_wasapi.cpp#L855
| bug,topic:audio | low | Critical |
2,612,696,191 | langchain | Error during FAISS save_local due to __pydantic_private__ attribute | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import GoogleGenerativeAIEmbeddings
embeddings = GoogleGenerativeAIEmbeddings(model="models/embedding-001")
print(f'Loading FAISS index from temp directory: {temp_dir}')
self.vector_store = FAISS.load_local(temp_dir, embeddings=embeddings, allow_dangerous_deserialization=True)
print("Successfully Loaded the vector store")
self.vector_store.add_texts(text_chunks)
print("Updated the vector store")
cloud_path_to_delete = f"/tmp/{uid}/{client_organization_id}/{grant_proposal_form_id}"
self.delete_directory_in_cloud(self.upload_bucket_name, cloud_path_to_delete)
print("Successfully deleted the Faiss_index")
print(f"Vector store attributes before saving:{vars(self.vector_store)}")
updated_faiss_path = f"/tmp/{uid}/{client_organization_id}/{grant_proposal_form_id}"
try:
self.vector_store.save_local(updated_faiss_path)
print("Successfully saved the updated Faiss Index")
except Exception as e:
print(f"Error during FAISS save_local:{e}")
self.upload_Vector_DB_files(self.upload_bucket_name, updated_faiss_path)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am experiencing an error when attempting to save the FAISS vector store using the `save_local()` method from the `langchain_community` library inside a Google Cloud Run function. The error is related to the `__pydantic_private__` attribute, which appears to cause a serialization issue.
**Steps to Reproduce**:
1. Create a FAISS vector store using the `FAISS.from_texts()` method with embeddings provided by `GoogleGenerativeAIEmbeddings`.
2. Attempt to save the vector store using the `save_local()` method.
3. The error is encountered during the saving process.
**Expected Behavior**:
The vector store should be saved without serialization issues.
**Actual Behavior**:
The following error is raised:
Error during FAISS save_local: `__pydantic_private__`
```
AttributeError: __pydantic_private__
at .__getattr__ ( /layers/google.python.pip/pip/lib/python3.9/site-packages/pydantic/main.py:853 )
at .__getstate__ ( /layers/google.python.pip/pip/lib/python3.9/site-packages/pydantic/main.py:942 )
at .save_local ( /layers/google.python.pip/pip/lib/python3.9/site-packages/langchain_community/vectorstores/faiss.py:1162 )
at .download_and_initialize_vectorStore_fromGC ( /workspace/VectorStoreUpdateTrigger.py:106 )
at .get_vector_store ( /workspace/VectorStoreUpdateTrigger.py:124 )
at .process_file ( /workspace/main.py:54 )
at .process_new_file ( /workspace/main.py:25 )
at .view_func ( /layers/google.python.pip/pip/lib/python3.9/site-packages/functions_framework/__init__.py:237 )
at .wrapper ( /layers/google.python.pip/pip/lib/python3.9/site-packages/functions_framework/execution_id.py:106 )
at .dispatch_request ( /layers/google.python.pip/pip/lib/python3.9/site-packages/flask/app.py:1799 )
at .full_dispatch_request ( /layers/google.python.pip/pip/lib/python3.9/site-packages/flask/app.py:1823 )
at .full_dispatch_request ( /layers/google.python.pip/pip/lib/python3.9/site-packages/flask/app.py:1825 )
at .wsgi_app ( /layers/google.python.pip/pip/lib/python3.9/site-packages/flask/app.py:2529 )
```
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Sun Jan 10 15:06:54 PST 2016
> Python Version: 3.9.20 (main, Sep 8 2024, 05:08:52)
[GCC 7.5.0]
Package Information
-------------------
> langchain_core: 0.3.12
> langchain: 0.3.4
> langchain_community: 0.3.3
> langsmith: 0.1.137
> langchain_google_genai: 2.0.1
> langchain_text_splitters: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.10
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> google-generativeai: 0.8.3
> httpx: 0.27.2
> jsonpatch: 1.33
> numpy: 1.26.4
> orjson: 3.10.10
> packaging: 24.1
> pillow: Installed. No version info available.
> pydantic: 2.9.2
> pydantic-settings: 2.6.0
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> typing-extensions: 4.12.2 | Ɑ: vector store,stale | low | Critical |
2,612,716,051 | next.js | Compatibility issue: Next.js 15 and React Three Fiber - TypeError: Cannot read properties of undefined (reading 'ReactCurrentOwner') | ### Link to the code that reproduces this issue
https://github.com/frommybrain/r3f-starter
### To Reproduce
To Reproduce:
1. Create a new Next.js 15 project using `create-next-app`
2. Install React Three Fiber and its dependencies
3. Create a simple React Three Fiber component
4. Import and use the component in a Next.js page
5. Run the application in development mode (`next dev`)
6. Open the page in a browser
7. Observe the error in the browser console
### Current vs. Expected behavior
Current behavior: When trying to render a React Three Fiber component in a Next.js 15 application, the following error occurs:
```plaintext
Unhandled Runtime Error
TypeError: Cannot read properties of undefined (reading 'ReactCurrentOwner')
Call Stack:
$$$reconciler
node_modules/react-reconciler/cjs/react-reconciler.development.js (498:1)
createRenderer
node_modules/@react-three/fiber/dist/index-99983b2d.esm.js (223:32)
eval
node_modules/@react-three/fiber/dist/index-99983b2d.esm.js (1728:3)
./node_modules/@react-three/fiber/dist/index-99983b2d.esm.js
file:///Users/kj/dev/experiments/r3f-starter-main/r3f-starter-app/.next/static/chunks/app/layout.js (83:1)
Next.js
eval
./node_modules/@react-three/drei/core/Environment.js
./node_modules/@react-three/drei/core/Environment.js
file:///Users/kj/dev/experiments/r3f-starter-main/r3f-starter-app/.next/static/chunks/app/layout.js (28:1)
Next.js
eval
./src/components/three/mainCanvas.jsx
```
Expected behavior: The React Three Fiber component should render without errors, as it does in previous versions of Next.js.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.1.0: Mon Oct 9 21:27:24 PDT 2023; root:xnu-10002.41.9~6/RELEASE_ARM64_T6000
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 18.19.0
npm: 10.2.3
Yarn: 1.22.22
pnpm: N/A
Relevant Packages:
next: 15.0.1 // Latest available version is detected (15.0.1).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: N/A
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
create-next-app, Developer Experience
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local)
### Additional context
This issue appears to be specific to Next.js 15 and its interaction with React Three Fiber. The error suggests a problem with React's internal workings, possibly due to changes in Next.js 15's handling of React or its bundling process. The issue occurs in the development environment and prevents the application from rendering properly.
It would be helpful to investigate any changes in Next.js 15 that might affect how it interacts with React's internals or how it bundles React-based libraries like React Three Fiber.
TypeError: Cannot read properties of undefined (reading 'ReactCurrentOwner') | Upstream,create-next-app,linear: next | high | Critical |
2,612,745,866 | godot | Multithreaded graphics crashes resizing window | ### Tested versions
4.2.2.stable.official
### System information
Godot v4.2.2.stable - macOS 15.0.1 - Vulkan (Forward+) - integrated Apple M1 - Apple M1 (8 Threads)
### Issue description
A project with multithreaded graphics crashes when resizing the window.
### Steps to reproduce
new project, multithreaded graphics, run project, resize window.
### Minimal reproduction project (MRP)
N/A | bug,needs testing,crash | low | Critical |
2,612,748,543 | pytorch | [ROCm] "No available kernel" when running EFFICIENT_ATTENTION sdpa | Hit this error when running https://huggingface.co/genmo/mochi-1-preview. Repro script as follows.
```python
import torch
from torch.nn.attention import sdpa_kernel
device = torch.device("cuda")
q = torch.randn(2, 8, 1, 512, dtype=torch.bfloat16, device=device)
k = torch.randn(2, 8, 257, 512, dtype=torch.bfloat16, device=device)
v = torch.randn(2, 8, 257, 512, dtype=torch.bfloat16, device=device)
attn_mask = torch.ones(2, 1, 1, 257, dtype=torch.bool, device=device)
with sdpa_kernel(torch.nn.attention.SDPBackend.EFFICIENT_ATTENTION):
torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask, dropout_p=0.0)
```
```
repro_amd_sdpa_eff.py:11: UserWarning: Memory efficient kernel not used because: (Triggered internally at ../aten/src/ATen/native/transformers/hip/sdp_utils.cpp:779.)
torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask, dropout_p=0.0)
repro_amd_sdpa_eff.py:11: UserWarning: Flash attention requires q,k,v to have the same last dimension and to be less than or equal to 256. Got Query.size(-1): 512, Key.size(-1): 512, Value.size(-1): 512 instead. (Triggered internally at ../aten/src/ATen/native/transformers/hip/sdp_utils.cpp:119.)
torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask, dropout_p=0.0)
repro_amd_sdpa_eff.py:11: UserWarning: Flash attention kernel not used because: (Triggered internally at ../aten/src/ATen/native/transformers/hip/sdp_utils.cpp:781.)
torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask, dropout_p=0.0)
repro_amd_sdpa_eff.py:11: UserWarning: Flash attention has been runtime disabled. (Triggered internally at ../aten/src/ATen/native/transformers/sdp_utils_cpp.h:546.)
torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask, dropout_p=0.0)
repro_amd_sdpa_eff.py:11: UserWarning: CuDNN attention kernel not used because: (Triggered internally at ../aten/src/ATen/native/transformers/hip/sdp_utils.cpp:783.)
torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask, dropout_p=0.0)
repro_amd_sdpa_eff.py:11: UserWarning: Torch was not compiled with cuDNN attention. (Triggered internally at ../aten/src/ATen/native/transformers/hip/sdp_utils.cpp:559.)
torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask, dropout_p=0.0)
Traceback (most recent call last):
File "repro_amd_sdpa_eff.py", line 11, in <module>
torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask, dropout_p=0.0)
RuntimeError: No available kernel. Aborting execution.
```
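Reading the warnings together explains the final RuntimeError: the mem-efficient kernel is rejected (no reason is printed on ROCm), flash attention requires head_dim ≤ 256 (it is 512 here) and is additionally runtime-disabled by the `sdpa_kernel` context, and cuDNN attention is not compiled in. As an illustration only (this is not PyTorch's actual dispatch code), the filtering amounts to:

```python
def eligible_backends(head_dim: int, flash_max_head_dim: int = 256,
                      mem_efficient_ok: bool = False,
                      cudnn_compiled: bool = False) -> list:
    """Illustrative SDPA backend filter mirroring the warnings above."""
    backends = []
    if mem_efficient_ok:                 # rejected for this input on ROCm
        backends.append("EFFICIENT_ATTENTION")
    if head_dim <= flash_max_head_dim:   # 512 > 256, so flash is out too
        backends.append("FLASH_ATTENTION")
    if cudnn_compiled:                   # "Torch was not compiled with cuDNN attention"
        backends.append("CUDNN_ATTENTION")
    return backends

# With head_dim=512 and every other gate failing, nothing remains,
# hence "No available kernel. Aborting execution."
```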
Environment:
```
torch 2.5.0+rocm6.2
pytorch-triton-rocm 3.1.0
transformers 4.46.0
flash_attn 2.6.3
```
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | module: rocm,triaged | low | Critical |
2,612,770,286 | vscode | Make the limit on non-ASCII characters used to disable their highlighting configurable | The highlighting of non-ASCII characters enabled by `"editor.unicodeHighlight.nonBasicASCII": true` is switched off whenever the number (or proportion? it's unclear) of non-ASCII characters in the document exceeds a threshold. A banner is then displayed, but the banner does not provide any opportunity to turn the highlighting on again.
#147888 rightly objected to this, but was closed because of some disagreement in which the reporter falsely asserted that the primary motivation for showing non-ASCII characters is not security.
Of course it's a security feature, and was announced and documented as such. That is partly *why* I want it *always* turned on, for all documents. I work on security-critical software and I'm well aware of the primary purpose of the setting. It was unhelpful to close that issue just because the reporter misunderstood that, when their actual request —to be able to have the highlighting always switched on— was entirely reasonable.
If the banner gave the option to keep the highlighting on, I'd still find it annoying and want to avoid displaying it at all, but it wouldn't be interfering with my work, which is what the current behaviour is doing.
The argument for why the current behaviour is okay and why #147888 was closed, seemed to assume that the *only* use of this feature should be to spot a small number of non-ASCII characters in an almost completely ASCII file.
There are at least two important use-cases that this assumption is missing:
* Code isn't just written in English. It's common to have non-English strings, identifiers, and/or comments while still wanting to be able to see which characters are ASCII and which are not. This also applies to localization files.
* This setting is *also* extremely useful for files, even those written in English, where the use of non-ASCII characters is intentional. For example, in documentation files I often use non-breaking spaces and hyphens, and “curly” quotes. Often these are mixed with markup and LaTeX code that has to use ASCII spaces, hyphens, and quotes. Being able to clearly see the difference was incredibly useful and was increasing my productivity. Until, that is, I ran into this wall that arbitrarily switches the feature off for some files, even though I explicitly told VS Code to always turn it on.
Having other use cases for seeing non-ASCII characters doesn't in any way conflict with the view of this highlighting as primarily a security feature. On the contrary, deliberately using non-ASCII characters doesn't make it any less important to detect their malicious use in the same file. If you were never intending to use non-ASCII characters at all, you'd want to prohibit them (which is probably best done with a `git` hook and/or in CI), not just make them visible.
The assertion that it's necessary to stop highlighting these characters for performance reasons seems unsupported. I've never seen any performance problem due to this feature when the number of non-ASCII chars was just below the limit. I don't object to the current behaviour being the default, although see no reason for the limit to be set as low as it is. But I don't care as long as I can configure it. | feature-request,unicode-highlight | low | Major |
2,612,784,172 | ollama | add termux compile instructions to web page | pkg ugrade -y golang clang cmake libandroid-execinfo gzip git
git clone https://github.com/ollama/ollama ollama
cd ollama
go generate ./...
go build .
cp ollama ~/../usr/bin
this used to work to 0.3.13 then .14 the err came but i believe u change well , err will be gone , cmd working again ..
greets .. | feature request,build | low | Minor |
2,612,784,352 | rust | Rust compiler hangs when pretty-printing MIR for a constant | I tried this code:
```rust
// compiler-flags: --emit mir
fn main() {
let _dummy = [(); usize::MAX];
}
```
I expected to see this happen: Compilation should succeed and generate a MIR file
Instead, this happened: Compiler hangs
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: x86_64-unknown-linux-gnu
release: 1.82.0
LLVM version: 19.1.1
```
<details><summary>
I tried the same example in the playground, and it reported that the program was killed.
</summary>
<p>
```
Compiling playground v0.0.1 (/playground)
error: could not compile `playground` (bin "playground")
Caused by:
process didn't exit successfully: `/playground/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/bin/rustc --crate-name playground --edition=2021 src/main.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C codegen-units=1 -C debuginfo=2 --emit mir=compilation --check-cfg 'cfg(docsrs)' --check-cfg 'cfg(feature, values())' -C metadata=78b40b1b4b7fab71 -C extra-filename=-78b40b1b4b7fab71 --out-dir /playground/target/debug/deps -L dependency=/playground/target/debug/deps -L native=/playground/target/debug/build/libsqlite3-sys-6cf5b51c5e7b32e1/out -L native=/playground/target/debug/build/ring-5cfcc6da0313868c/out -L native=/playground/.cargo/registry/src/index.crates.io-6f17d22bba15001f/windows_x86_64_msvc-0.52.6/lib` (signal: 9, SIGKILL: kill)
```
</p>
</details>
FYI, the same issue happens in StableMIR pretty print. I believe the compiler is hanging inside `rustc_middle::mir::pretty::pretty_print_const_value`. | T-compiler,A-MIR,C-bug,A-const-eval,I-hang | low | Critical |
2,612,875,593 | flutter | At step 5th in `Get to know Firebase for Flutter` codelabs, the screenshot is not synced with code | <img width="340" alt="스크린샷 2024-10-25 오전 10 40 47" src="https://github.com/user-attachments/assets/5f7444e1-cdf4-456e-b72e-1c1a7611a320">
```
// Copyright 2022 The Flutter Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
// step-05
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
import 'widgets.dart';
class AuthFunc extends StatelessWidget {
const AuthFunc({
super.key,
required this.loggedIn,
required this.signOut,
});
final bool loggedIn;
final void Function() signOut;
@override
Widget build(BuildContext context) {
return Row(
children: [
Padding(
padding: const EdgeInsets.only(left: 24, bottom: 8),
child: StyledButton(
onPressed: () {
!loggedIn ? context.push('/sign-in') : signOut();
},
child: !loggedIn ? const Text('RSVP') : const Text('Logout')),
),
Visibility(
visible: loggedIn,
child: Padding(
padding: const EdgeInsets.only(left: 24, bottom: 8),
child: StyledButton(
onPressed: () {
context.push('/profile');
},
child: const Text('Profile')),
))
],
);
}
}
```
I noticed a discrepancy between your code and the screenshots in the codelab. When I ran the code, the profile button was supposed to appear, but it didn't. Could you please update the screenshot for that page?
| team-codelabs,p: firebase,P1,triaged-codelabs | medium | Minor |
2,612,924,052 | ui | [bug]: Date Range Picker : time from did not change after selecting a value | ### Describe the bug
Date Range Picker : time from did not change after selecting a value
### Affected component/components
Date Range Picker
### How to reproduce
**Step 1** <img width="786" alt="image" src="https://github.com/user-attachments/assets/43472fa9-a027-4971-9b9d-2e7bfb98a126">
**Step 2**<img width="731" alt="image" src="https://github.com/user-attachments/assets/f7180af6-515d-4bd5-be4b-0e0041d7f60d">
**Step 3** <img width="717" alt="image" src="https://github.com/user-attachments/assets/d1e55664-efc2-4c22-95d5-ede7e106576a">
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
browsers
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,612,924,866 | PowerToys | missing keys on Keyboard Manager | ### Microsoft PowerToys version
0.85.1
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
1. enable Keyboard Manager
2. click "Remap a key" _**OR**_ "Remap a shortcut" (it happens on both options)
3. select, in either of the two fields ("Select" OR "Send Key/Shortcut"), ANY of the following options:
> "!"; "@"; "#"; "$"; "%"; "¨" or "&" -> (on the pt BR - ABNT)
> "!"; "@"; "#"; "$"; "%"; "^" or "&" -> (on the en US)
### ✔️ Expected Behavior
It should be possible to remap the mentioned keys too, as well as any other text character or textual expression
### ❌ Actual Behavior
When any of the mentioned keys is pressed, it is not recognized by the application (in the same way as when pressing the "Fn" key, which by default cannot be mapped)
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,612,936,561 | stable-diffusion-webui | [Bug]: OSError: Cannot find empty port in range: 7860-7860 with EC2 in Auto scaling group | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
When I deploy the source on a normal EC2 instance, this error does not occur on startup. But when I deploy it on an EC2 instance in an Auto Scaling group, the error occurs.

### Steps to reproduce the problem
1. Auto scaling group scale out 1 new ec2
2. EC2 running
3. SD start => error
4. SD restart => success
### What should have happened?
SD should start successfully instead of failing with a port error and restarting.
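For context, the message comes from gradio's server startup, which scans a port range (collapsed to the single port 7860 by `--port=7860`) with a bind test and raises once every port in the range is occupied. A stdlib sketch of that check (names are mine), useful for verifying on the instance that 7860 really is free before the service launches:

```python
import socket

def first_free_port(start: int, end: int) -> int:
    """Return the first port in [start, end] that accepts a bind;
    raise OSError when every port in the range is taken."""
    for port in range(start, end + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))
            except OSError:
                continue  # port is busy, try the next one
            return port
    raise OSError(f"Cannot find empty port in range: {start}-{end}")
```

Running this for the range 7860–7860 right after boot on the failing instance would show whether another process (or a half-shutdown previous launch) is still holding the port.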
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
I use ec2 with instance type g6e.xlarge
### Console logs
```Shell
ct 25 01:35:30 ip-20-0-2-59.ec2.internal sh[843]: Launching launch.py...
Oct 25 01:35:30 ip-20-0-2-59.ec2.internal sh[843]: ################################################################
Oct 25 01:35:30 ip-20-0-2-59.ec2.internal sh[843]: glibc version is 2.34
Oct 25 01:35:30 ip-20-0-2-59.ec2.internal sh[843]: Cannot locate TCMalloc. Do you have tcmalloc or google-perftool installed on your system? (improves CPU memory usage)
Oct 25 01:35:49 ip-20-0-1-115.ec2.internal sh[949]: Python 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0]
Oct 25 01:35:49 ip-20-0-1-115.ec2.internal sh[949]: Version: v1.6.0-1704-gc24ff95d
Oct 25 01:35:49 ip-20-0-1-115.ec2.internal sh[949]: Commit hash: c24ff95d305bf56e4afe5fdf76a5350481661c17
Oct 25 01:37:36 ip-20-0-1-115.ec2.internal sh[949]: CUDA 12.1
Oct 25 01:37:36 ip-20-0-1-115.ec2.internal sh[949]: Launching Web UI with arguments: --api --listen --cors-allow-origins '*' --port=7860
Oct 25 01:39:44 ip-20-0-1-115.ec2.internal sh[949]: no module 'xformers'. Processing without...
Oct 25 01:39:44 ip-20-0-1-115.ec2.internal sh[949]: no module 'xformers'. Processing without...
Oct 25 01:39:46 ip-20-0-1-115.ec2.internal sh[949]: No module 'xformers'. Proceeding without it.
Oct 25 01:40:08 ip-20-0-1-115.ec2.internal sh[949]: ControlNet preprocessor location: /home/ec2-user/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads
Oct 25 01:40:24 ip-20-0-1-115.ec2.internal sh[949]: 2024-10-25 01:40:24,757 - ControlNet - INFO - ControlNet v1.1.455
Oct 25 01:40:38 ip-20-0-1-115.ec2.internal sh[949]: 01:40:38 - ReActor - STATUS - Running v0.7.1-b1 on Device: CUDA
Oct 25 01:40:38 ip-20-0-1-115.ec2.internal sh[949]: Loading weights [bc2f30f4ad] from /home/ec2-user/stable-diffusion-webui/models/Stable-diffusion/beautifulRealistic_v60.safetensors
Oct 25 01:40:41 ip-20-0-1-115.ec2.internal sh[949]: 2024-10-25 01:40:41,227 - ControlNet - INFO - ControlNet UI callback registered.
Oct 25 01:40:48 ip-20-0-1-115.ec2.internal sh[949]: Traceback (most recent call last):
Oct 25 01:40:48 ip-20-0-1-115.ec2.internal sh[949]: File "/home/ec2-user/stable-diffusion-webui/launch.py", line 48, in <module>
Oct 25 01:40:49 ip-20-0-1-115.ec2.internal sh[949]: main()
Oct 25 01:40:49 ip-20-0-1-115.ec2.internal sh[949]: File "/home/ec2-user/stable-diffusion-webui/launch.py", line 44, in main
Oct 25 01:40:49 ip-20-0-1-115.ec2.internal sh[949]: start()
Oct 25 01:40:49 ip-20-0-1-115.ec2.internal sh[949]: File "/home/ec2-user/stable-diffusion-webui/modules/launch_utils.py", line 469, in start
Oct 25 01:40:49 ip-20-0-1-115.ec2.internal sh[949]: webui.webui()
Oct 25 01:40:49 ip-20-0-1-115.ec2.internal sh[949]: File "/home/ec2-user/stable-diffusion-webui/webui.py", line 79, in webui
Oct 25 01:40:49 ip-20-0-1-115.ec2.internal sh[949]: app, local_url, share_url = shared.demo.launch(
Oct 25 01:40:50 ip-20-0-1-115.ec2.internal sh[949]: ^^^^^^^^^^^^^^^^^^^
Oct 25 01:40:50 ip-20-0-1-115.ec2.internal sh[949]: File "/home/ec2-user/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/blocks.py", line 1896, in launch
Oct 25 01:40:51 ip-20-0-1-115.ec2.internal sh[949]: ) = networking.start_server(
Oct 25 01:40:52 ip-20-0-1-115.ec2.internal sh[949]: ^^^^^^^^^^^^^^^^^^^^^^^^
Oct 25 01:40:52 ip-20-0-1-115.ec2.internal sh[949]: File "/home/ec2-user/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/networking.py", line 169, in start_server
Oct 25 01:40:52 ip-20-0-1-115.ec2.internal sh[949]: raise OSError(
Oct 25 01:40:52 ip-20-0-1-115.ec2.internal sh[949]: OSError: Cannot find empty port in range: 7860-7860. You can specify a different port by setting the GRADIO_SERVER_PORT environment variable or passing the `server_port` parameter to `launch()`.
Oct 25 01:40:54 ip-20-0-1-115.ec2.internal sh[949]: Creating model from config: /home/ec2-user/stable-diffusion-webui/configs/v1-inference.yaml
Oct 25 01:41:43 ip-20-0-1-115.ec2.internal sh[949]: Applying attention optimization: Doggettx... done.
Oct 25 01:41:53 ip-20-0-1-115.ec2.internal sh[949]: Model loaded in 74.7s (load weights from disk: 15.3s, create model: 1.1s, apply weights to model: 48.5s, load textual inversion embeddings: 1.5s, calculate empty prompt: 8.1s).
Oct 25 01:42:07 ip-20-0-1-115.ec2.internal systemd[1]: start-sdw.service: Deactivated successfully.
Oct 25 01:42:07 ip-20-0-1-115.ec2.internal systemd[1]: start-sdw.service: Consumed 17.646s CPU time.
Oct 25 01:42:27 ip-20-0-1-115.ec2.internal systemd[1]: start-sdw.service: Scheduled restart job, restart counter is at 1.
Oct 25 01:43:56 ip-20-0-1-115.ec2.internal systemd[1]: Stopped Run stable diffusion webui.
Oct 25 01:43:56 ip-20-0-1-115.ec2.internal systemd[1]: start-sdw.service: Consumed 17.646s CPU time.
Oct 25 01:43:56 ip-20-0-1-115.ec2.internal systemd[1]: Started Run stable diffusion webui.
```
### Additional information
_No response_ | asking-for-help-with-local-system-issues | low | Critical |
2,612,939,304 | godot | Window can not open at second time if use a button to minimize the window with fullscreen and borderless mode | ### Tested versions
Reproducible: Godot_v4.3-stable_win64
### System information
Windows 10 (and 11), Godot_v4.3, mobile
### Issue description
I use `display/window/size/borderless=true` and `display/window/size/mode=Fullscreen` in project settings, and I add a button to minimize the window.
```
func _pressed() -> void:
DisplayServer.window_set_mode(DisplayServer.WINDOW_MODE_MINIMIZED)
```
The first time, clicking the button minimizes the window successfully, and clicking the Godot icon on the Windows taskbar shows the window again successfully.
But the second time, clicking the button still minimizes the window, however clicking the Godot icon on the Windows taskbar can no longer bring the window back.
### Steps to reproduce
Run the reproduction project directly, then click the button followed by the Godot icon on the Windows taskbar twice; on the second round the window just does not show.
### Minimal reproduction project (MRP)
[fullscreen_borderless_bug.zip](https://github.com/user-attachments/files/17515029/fullscreen_borderless_bug.zip)
| bug,topic:gui | low | Critical |
2,612,941,684 | yt-dlp | Limit filename by bytes and unicode normalize at the same time | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a feature unrelated to a specific site
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
Currently, you can limit by characters but this will fail sometimes on linux
`%(title).100s - %(uploader).30s [%(id)s].%(ext)s`
So you can limit by bytes, but this will lead to much smaller titles than what you want with cjk or special fonts
`%(title).100B - %(uploader).50B [%(id)s].%(ext)s`
So, you can force it into unicode, but then you can't limit by bytes, only character count, which will have an error on cjk characters
`%(title)+.100U - %(uploader).50B [%(id)s].%(ext)s`
So the one you should use depends on the title of the video, but the title will vary when you are downloading a playlist so, there's no best solution to this.
I provided logs that show the output of all of those examples with different videos. They aren't verbose because verbosity is not necessary here. Here are the videos if you want to try it
https://www.youtube.com/watch?v=pnWh5JlJ9ys
https://www.youtube.com/watch?v=cirDXY3CkSk
https://www.youtube.com/watch?v=8c7CrrMsBvI
https://www.youtube.com/watch?v=oR2cuFgVQBI
https://www.youtube.com/watch?v=J6_vSNtCWs4
Also, a neat little trick to create a temporary playlist you can test this on, since my use case is playlists and YouTube channels: https://youtube.com/watch_videos?video_ids=pnWh5JlJ9ys,cirDXY3CkSk,8c7CrrMsBvI,oR2cuFgVQBI,J6_vSNtCWs4
I am currently using %(title)+.176B - %(uploader).50B [%(id)s].%(ext)s for now since character counts are more unpredictable. The best I can get while normalizing unicode is %(title)+.50U - %(uploader).50B [%(id)s].%(ext)s but I can go higher if I limit the channel name further or I don't include it. At first I wanted to limit channel names further, but I looked for the longest channel I could find that made sense (Salome's) and that removed some of the name.
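What I'm effectively asking for — normalize to a Unicode form *and* cap the result by bytes at a valid UTF-8 boundary in a single modifier — is easy to express; here is a plain-Python sketch of the combined behavior (the function name and defaults are mine, not part of yt-dlp):

```python
import unicodedata

def limit_filename(text: str, max_bytes: int, form: str = "NFKC") -> str:
    """Normalize `text`, then truncate it to at most `max_bytes` of UTF-8
    without ever splitting a multi-byte character."""
    normalized = unicodedata.normalize(form, text)
    encoded = normalized.encode("utf-8")
    if len(encoded) <= max_bytes:
        return normalized
    # Cut at the byte limit, then drop any trailing partial code point.
    return encoded[:max_bytes].decode("utf-8", errors="ignore")
```

With something like this, a 176-byte cap could be honored regardless of script, keeping as many whole characters as fit instead of erroring or over-truncating.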
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp -S +size,+br,+res,+fps https://www.youtube.com/watch?v=cirDXY3CkSk -o "%(title).100s - %(uploader).50B [%(id)s].%(ext)s"
[youtube] Extracting URL: https://www.youtube.com/watch?v=cirDXY3CkSk
[youtube] cirDXY3CkSk: Downloading webpage
[youtube] cirDXY3CkSk: Downloading ios player API JSON
[youtube] cirDXY3CkSk: Downloading mweb player API JSON
[youtube] cirDXY3CkSk: Downloading m3u8 information
[info] cirDXY3CkSk: Downloading 1 format(s): 598+599
ERROR: unable to open for writing: [Errno 36] File name too long: 'ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ¦モミアゲヲシャカアゲヲ¦Momiagewo Shakaagewo - ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ [cirDXY3CkSk].f598.webm.part'
ERROR: unable to open for writing: [Errno 36] File name too long: 'ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ¦モミアゲヲシャカアゲヲ¦Momiagewo Shakaagewo - ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ [cirDXY3CkSk].f599.m4a.part'
yt-dlp -S +size,+br,+res,+fps https://www.youtube.com/watch?v=cirDXY3CkSk -o "%(title).100U - %(uploader).50B [%(id)s].%(ext)s"
[youtube] Extracting URL: https://www.youtube.com/watch?v=cirDXY3CkSk
[youtube] cirDXY3CkSk: Downloading webpage
[youtube] cirDXY3CkSk: Downloading ios player API JSON
[youtube] cirDXY3CkSk: Downloading mweb player API JSON
[youtube] cirDXY3CkSk: Downloading m3u8 information
[info] cirDXY3CkSk: Downloading 1 format(s): 598+599
ERROR: unable to open for writing: [Errno 36] File name too long: 'ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ¦モミアゲヲシャカアゲヲ¦Momiagewo Shakaagewo - ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ [cirDXY3CkSk].f598.webm.part'
ERROR: unable to open for writing: [Errno 36] File name too long: 'ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ¦モミアゲヲシャカアゲヲ¦Momiagewo Shakaagewo - ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ [cirDXY3CkSk].f599.m4a.part'
yt-dlp -S +size,+br,+res,+fps https://www.youtube.com/watch?v=cirDXY3CkSk -o "%(title).100B - %(uploader).50B [%(id)s].%(ext)s"
[youtube] Extracting URL: https://www.youtube.com/watch?v=cirDXY3CkSk
[youtube] cirDXY3CkSk: Downloading webpage
[youtube] cirDXY3CkSk: Downloading ios player API JSON
[youtube] cirDXY3CkSk: Downloading mweb player API JSON
[youtube] cirDXY3CkSk: Downloading m3u8 information
[info] cirDXY3CkSk: Downloading 1 format(s): 598+599
[download] Destination: ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ - ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ [cirDXY3CkSk].f598.webm
[download] 100% of 701.10KiB in 00:00:01 at 623.58KiB/s
[download] Destination: ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ - ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ [cirDXY3CkSk].f599.m4a
[download] 100% of 781.66KiB in 00:00:00 at 1.50MiB/s
[Merger] Merging formats into "ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ - ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ [cirDXY3CkSk].mkv"
Deleting original file ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ - ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ [cirDXY3CkSk].f598.webm (pass -k to keep)
Deleting original file ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ - ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ [cirDXY3CkSk].f599.m4a (pass -k to keep)
yt-dlp -S +size,+br,+res,+fps https://www.youtube.com/watch?v=pnWh5JlJ9ys -o "%(title).100B - %(uploader).50B [%(id)s].%(ext)s"
[youtube] Extracting URL: https://www.youtube.com/watch?v=pnWh5JlJ9ys
[youtube] pnWh5JlJ9ys: Downloading webpage
[youtube] pnWh5JlJ9ys: Downloading ios player API JSON
[youtube] pnWh5JlJ9ys: Downloading mweb player API JSON
[youtube] pnWh5JlJ9ys: Downloading m3u8 information
[info] pnWh5JlJ9ys: Downloading 1 format(s): 598+599
[download] Destination: 𝗡𝗲𝗿𝗶𝘀𝘀𝗮 𝗳𝗼𝘂𝗻𝗱 𝗮 𝗚𝗹𝗶𝘁𝗰𝗵 𝗶𝗻 𝗠𝗶 - Gomi Simpington Ch. [pnWh5JlJ9ys].f598.webm
[download] 100% of 2.50MiB in 00:00:02 at 1.14MiB/s
[download] Destination: 𝗡𝗲𝗿𝗶𝘀𝘀𝗮 𝗳𝗼𝘂𝗻𝗱 𝗮 𝗚𝗹𝗶𝘁𝗰𝗵 𝗶𝗻 𝗠𝗶 - Gomi Simpington Ch. [pnWh5JlJ9ys].f599.m4a
[download] 100% of 2.66MiB in 00:00:01 at 1.82MiB/s
[Merger] Merging formats into "𝗡𝗲𝗿𝗶𝘀𝘀𝗮 𝗳𝗼𝘂𝗻𝗱 𝗮 𝗚𝗹𝗶𝘁𝗰𝗵 𝗶𝗻 𝗠𝗶 - Gomi Simpington Ch. [pnWh5JlJ9ys].mkv"
Deleting original file 𝗡𝗲𝗿𝗶𝘀𝘀𝗮 𝗳𝗼𝘂𝗻𝗱 𝗮 𝗚𝗹𝗶𝘁𝗰𝗵 𝗶𝗻 𝗠𝗶 - Gomi Simpington Ch. [pnWh5JlJ9ys].f598.webm (pass -k to keep)
Deleting original file 𝗡𝗲𝗿𝗶𝘀𝘀𝗮 𝗳𝗼𝘂𝗻𝗱 𝗮 𝗚𝗹𝗶𝘁𝗰𝗵 𝗶𝗻 𝗠𝗶 - Gomi Simpington Ch. [pnWh5JlJ9ys].f599.m4a (pass -k to keep)
yt-dlp -S +size,+br,+res,+fps https://www.youtube.com/watch?v=pnWh5JlJ9ys -o "%(title)+.100U - %(uploader).50B [%(id)s].%(ext)s"
[youtube] Extracting URL: https://www.youtube.com/watch?v=pnWh5JlJ9ys
[youtube] pnWh5JlJ9ys: Downloading webpage
[youtube] pnWh5JlJ9ys: Downloading ios player API JSON
[youtube] pnWh5JlJ9ys: Downloading mweb player API JSON
[youtube] pnWh5JlJ9ys: Downloading m3u8 information
[info] pnWh5JlJ9ys: Downloading 1 format(s): 598+599
[download] Destination: Nerissa found a Glitch in Minecraft and Exploits it, but the Devs are watching |Hololive| - Gomi Simpington Ch. [pnWh5JlJ9ys].f598.webm
[download] 100% of 2.50MiB in 00:00:01 at 1.30MiB/s
[download] Destination: Nerissa found a Glitch in Minecraft and Exploits it, but the Devs are watching |Hololive| - Gomi Simpington Ch. [pnWh5JlJ9ys].f599.m4a
[download] 100% of 2.66MiB in 00:00:01 at 1.94MiB/s
[Merger] Merging formats into "Nerissa found a Glitch in Minecraft and Exploits it, but the Devs are watching |Hololive| - Gomi Simpington Ch. [pnWh5JlJ9ys].mkv"
Deleting original file Nerissa found a Glitch in Minecraft and Exploits it, but the Devs are watching |Hololive| - Gomi Simpington Ch. [pnWh5JlJ9ys].f598.webm (pass -k to keep)
Deleting original file Nerissa found a Glitch in Minecraft and Exploits it, but the Devs are watching |Hololive| - Gomi Simpington Ch. [pnWh5JlJ9ys].f599.m4a (pass -k to keep)
yt-dlp -S +size,+br,+res,+fps https://www.youtube.com/watch?v=cirDXY3CkSk -o "%(title)+.100U - %(uploader).50B [%(id)s].%(ext)s"
[youtube] Extracting URL: https://www.youtube.com/watch?v=cirDXY3CkSk
[youtube] cirDXY3CkSk: Downloading webpage
[youtube] cirDXY3CkSk: Downloading ios player API JSON
[youtube] cirDXY3CkSk: Downloading mweb player API JSON
[youtube] cirDXY3CkSk: Downloading m3u8 information
[info] cirDXY3CkSk: Downloading 1 format(s): 598+599
ERROR: unable to open for writing: [Errno 36] File name too long: 'ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ¦モミアゲヲシャカアゲヲ¦Momiagewo Shakaagewo - ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ [cirDXY3CkSk].f598.webm.part'
ERROR: unable to open for writing: [Errno 36] File name too long: 'ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ¦モミアゲヲシャカアゲヲ¦Momiagewo Shakaagewo - ぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬぬ [cirDXY3CkSk].f599.m4a.part'
yt-dlp -S +size,+br,+res,+fps https://www.youtube.com/watch?v=pnWh5JlJ9ys -o "%(title)+.176B - %(uploader).50B [%(id)s].%(ext)s"
[youtube] Extracting URL: https://www.youtube.com/watch?v=pnWh5JlJ9ys
[youtube] pnWh5JlJ9ys: Downloading webpage
[youtube] pnWh5JlJ9ys: Downloading ios player API JSON
[youtube] pnWh5JlJ9ys: Downloading mweb player API JSON
[youtube] pnWh5JlJ9ys: Downloading m3u8 information
[info] pnWh5JlJ9ys: Downloading 1 format(s): 598+599
[download] Destination: 𝗡𝗲𝗿𝗶𝘀𝘀𝗮 𝗳𝗼𝘂𝗻𝗱 𝗮 𝗚𝗹𝗶𝘁𝗰𝗵 𝗶𝗻 𝗠𝗶𝗻𝗲𝗰𝗿𝗮𝗳𝘁 𝗮𝗻𝗱 𝗘𝘅𝗽𝗹𝗼𝗶𝘁𝘀 𝗶 - Gomi Simpington Ch. [pnWh5JlJ9ys].f598.webm
[download] 100% of 2.50MiB in 00:00:01 at 1.27MiB/s
[download] Destination: 𝗡𝗲𝗿𝗶𝘀𝘀𝗮 𝗳𝗼𝘂𝗻𝗱 𝗮 𝗚𝗹𝗶𝘁𝗰𝗵 𝗶𝗻 𝗠𝗶𝗻𝗲𝗰𝗿𝗮𝗳𝘁 𝗮𝗻𝗱 𝗘𝘅𝗽𝗹𝗼𝗶𝘁𝘀 𝗶 - Gomi Simpington Ch. [pnWh5JlJ9ys].f599.m4a
[download] 100% of 2.66MiB in 00:00:01 at 2.07MiB/s
[Merger] Merging formats into "𝗡𝗲𝗿𝗶𝘀𝘀𝗮 𝗳𝗼𝘂𝗻𝗱 𝗮 𝗚𝗹𝗶𝘁𝗰𝗵 𝗶𝗻 𝗠𝗶𝗻𝗲𝗰𝗿𝗮𝗳𝘁 𝗮𝗻𝗱 𝗘𝘅𝗽𝗹𝗼𝗶𝘁𝘀 𝗶 - Gomi Simpington Ch. [pnWh5JlJ9ys].mkv"
Deleting original file 𝗡𝗲𝗿𝗶𝘀𝘀𝗮 𝗳𝗼𝘂𝗻𝗱 𝗮 𝗚𝗹𝗶𝘁𝗰𝗵 𝗶𝗻 𝗠𝗶𝗻𝗲𝗰𝗿𝗮𝗳𝘁 𝗮𝗻𝗱 𝗘𝘅𝗽𝗹𝗼𝗶𝘁𝘀 𝗶 - Gomi Simpington Ch. [pnWh5JlJ9ys].f599.m4a (pass -k to keep)
Deleting original file 𝗡𝗲𝗿𝗶𝘀𝘀𝗮 𝗳𝗼𝘂𝗻𝗱 𝗮 𝗚𝗹𝗶𝘁𝗰𝗵 𝗶𝗻 𝗠𝗶𝗻𝗲𝗰𝗿𝗮𝗳𝘁 𝗮𝗻𝗱 𝗘𝘅𝗽𝗹𝗼𝗶𝘁𝘀 𝗶 - Gomi Simpington Ch. [pnWh5JlJ9ys].f598.webm (pass -k to keep)
```
| enhancement,triage | low | Critical |
2,612,974,282 | stable-diffusion-webui | [Feature Request]: Support for SD3.5 | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
SD3.5:
https://huggingface.co/stabilityai/stable-diffusion-3.5-large
### Proposed workflow
can use SD3.5
### Additional information
_No response_ | enhancement | medium | Major |
2,612,982,830 | langchain | [AIMessage]tool_calls.0.args is not always returned as a valid dictionary | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
prompt_system = """
You are a transcription engineer. I will provide you with some code,
and you need to transcribe it into Python. You must use the check_code_syntax tool.
If the last tool execution resulted in a code error, you need to correct the code based on the error
message and then invoke the check_code_syntax tool again to see if it can compile successfully.
If it passes the check, meaning the return value is_success is True, you should output TASK_END + the corrected code.
"""

tools = [check_code_syntax]

def create_agent():
    prompt = ChatPromptTemplate.from_messages(
        [
            (
                "system",
                prompt_system,
            ),
            MessagesPlaceholder(variable_name='user')
        ]
    )
    prompt = prompt.partial(tool_names=", ".join([tool.name for tool in tools]))
    agent = prompt | llm.bind_tools(tools)
    return agent

rewrite_agent = create_agent()

def should_continue(state: MessagesState) -> bool:
    """Return the next node to execute."""
    last_message = state["messages"][-1]
    # If there is no function call, then we finish
    if not last_message.tool_calls:
        return END
    # Otherwise if there is, we continue
    args = last_message.tool_calls[0]['args']
    if not isinstance(args, dict):
        args = json.loads(args)
    return "tools"

def chatbot(state: State):
    return [rewrite_agent.invoke(state["messages"])]

workflow = StateGraph(MessagesState)
workflow.add_node('tools', ToolNode(tools))
workflow.add_node('rewrite_agent', chatbot)
workflow.add_edge(START, 'rewrite_agent')
workflow.add_edge('tools', 'rewrite_agent')
workflow.add_conditional_edges('rewrite_agent', should_continue, ['tools', END])
graph = workflow.compile()
```
### Error Message and Stack Trace (if applicable)
[Error message; I added some prints for debugging]
================================ Human Message =================================
code is
def add(a, b):
prnt(a+b)
add(3, 5)
_dict {'content': '', 'refusal': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': '0192c19eef3bb5786799c95962fe59d0', 'function': {'arguments': '{"code": "def add(a, b):\\n prnt(a+b)\\nadd(3, 5)", "language": "python"}', 'name': 'check_code_syntax'}, 'type': 'function'}]}
role assistant
name None
id None
raw_tool_call {'id': '0192c19eef3bb5786799c95962fe59d0', 'function': {'arguments': '{"code": "def add(a, b):\\n prnt(a+b)\\nadd(3, 5)", "language": "python"}', 'name': 'check_code_syntax'}, 'type': 'function'}
tool_calls []_
================================== Ai Message ==================================
Tool Calls:
check_code_syntax (0192c19eef3bb5786799c95962fe59d0)
Call ID: 0192c19eef3bb5786799c95962fe59d0
Args:
code: def add(a, b):
prnt(a+b)
add(3, 5)
language: python
================================= Tool Message =================================
Name: check_code_syntax
FalseTraceback (most recent call last):
File "/tmp/tmp4ljh8rbe", line 3, in <module>
add(3, 5)
File "/tmp/tmp4ljh8rbe", line 2, in add
prnt(a+b)
^^^^
NameError: name 'prnt' is not defined. Did you mean: 'print'?
_dict {'content': "It looks like there's a typo in the code. The function `prnt` should be `print`. Let me correct that and check the code again.\n", 'refusal': None, 'role': 'assistant', 'function_call': None, 'tool_calls': [{'id': '0192c19efcad36cfccc5be45121408cf', 'function': {'arguments': '"{\\"code\\": \\"def add(a, b):\\\\n print(a+b)\\\\nadd(3, 5)\\", \\"language\\": \\"python\\"}"', 'name': 'check_code_syntax'}, 'type': 'function'}]}
role assistant
name None
id None
raw_tool_call {'id': '0192c19efcad36cfccc5be45121408cf', 'function': {'arguments': '"{\\"code\\": \\"def add(a, b):\\\\n print(a+b)\\\\nadd(3, 5)\\", \\"language\\": \\"python\\"}"', 'name': 'check_code_syntax'}, 'type': 'function'}
tool_calls []_
Traceback (most recent call last):
File "/home/son1enardo/nj_rewrite/re_write_agent.py", line 167, in <module>
for chunk in graph.stream({"messages": [input_message]},
File "/home/son1enardo/miniconda3/envs/langgraph/lib/python3.11/site-packages/langgraph/pregel/__init__.py", line 1298, in stream
for _ in runner.tick(
File "/home/son1enardo/miniconda3/envs/langgraph/lib/python3.11/site-packages/langgraph/pregel/runner.py", line 56, in tick
run_with_retry(t, retry_policy)
File "/home/son1enardo/miniconda3/envs/langgraph/lib/python3.11/site-packages/langgraph/pregel/retry.py", line 29, in run_with_retry
task.proc.invoke(task.input, config)
File "/home/son1enardo/miniconda3/envs/langgraph/lib/python3.11/site-packages/langgraph/utils/runnable.py", line 409, in invoke
input = context.run(step.invoke, input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/son1enardo/miniconda3/envs/langgraph/lib/python3.11/site-packages/langgraph/utils/runnable.py", line 183, in invoke
ret = context.run(self.func, input, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/son1enardo/nj_rewrite/re_write_agent.py", line 141, in chatbot
return [rewrite_agent.invoke(state["messages"])]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/son1enardo/miniconda3/envs/langgraph/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3024, in invoke
input = context.run(step.invoke, input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/son1enardo/miniconda3/envs/langgraph/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5354, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/home/son1enardo/miniconda3/envs/langgraph/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 286, in invoke
self.generate_prompt(
File "/home/son1enardo/miniconda3/envs/langgraph/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 786, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/son1enardo/miniconda3/envs/langgraph/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 643, in generate
raise e
File "/home/son1enardo/miniconda3/envs/langgraph/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 633, in generate
self._generate_with_cache(
File "/home/son1enardo/miniconda3/envs/langgraph/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 851, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/home/son1enardo/miniconda3/envs/langgraph/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 690, in _generate
return self._create_chat_result(response, generation_info)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/son1enardo/miniconda3/envs/langgraph/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 727, in _create_chat_result
message = _convert_dict_to_message(res["message"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/son1enardo/miniconda3/envs/langgraph/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 138, in _convert_dict_to_message
return AIMessage(
^^^^^^^^^^
File "/home/son1enardo/miniconda3/envs/langgraph/lib/python3.11/site-packages/langchain_core/messages/ai.py", line 179, in __init__
super().__init__(content=content, **kwargs)
File "/home/son1enardo/miniconda3/envs/langgraph/lib/python3.11/site-packages/langchain_core/messages/base.py", line 76, in __init__
super().__init__(content=content, **kwargs)
File "/home/son1enardo/miniconda3/envs/langgraph/lib/python3.11/site-packages/langchain_core/load/serializable.py", line 111, in __init__
super().__init__(*args, **kwargs)
File "/home/son1enardo/miniconda3/envs/langgraph/lib/python3.11/site-packages/pydantic/main.py", line 212, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for AIMessage
tool_calls.0.args
Input should be a valid dictionary [type=dict_type, input_value='{"code": "def add(a, b):..., "language": "python"}', input_type=str]
For further information visit https://errors.pydantic.dev/2.9/v/dict_type
### Description
The LLM returns two AIMessage objects, but there is inconsistency in how tool_call.0.args is structured between them.
In the first AIMessage, the tool_call.0.args is returned correctly as expected, like this:
`{'arguments': '{"code": "def add(a, b):\\n prnt(a+b)\\nadd(3, 5)", "language": "python"}', 'name': 'check_code_syntax'}`
However, in the second AIMessage, the tool_call.0.args is returned as a nested, escaped string, like this:
`{'arguments': '"{\\"code\\": \\"def add(a, b):\\\\n print(a+b)\\\\nadd(3, 5)\\", \\"language\\": \\"python\\"}"'}`
This discrepancy occurs without any additional processing on the returned results, and the second message causes issues due to the additional escaping, which requires further parsing to get the correct dictionary structure.
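A defensive workaround (a hypothetical helper, not part of LangChain) is to keep JSON-decoding the arguments until a dict is reached, which handles both the normal and the double-encoded shape shown in the traces above:

```python
import json

def coerce_tool_args(raw_args):
    """Return tool-call arguments as a dict.

    Handles both shapes seen above: a plain JSON object string, and a
    JSON string that itself contains an escaped JSON object (the
    double-encoded case that breaks AIMessage validation).
    """
    value = raw_args
    while isinstance(value, str):
        value = json.loads(value)
    if not isinstance(value, dict):
        raise TypeError(f"tool args did not decode to a dict: {value!r}")
    return value
```

This only works as a pre-processing step before the message is validated; the underlying issue is that the backend sometimes serializes the arguments twice.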
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024
> Python Version: 3.11.10 (main, Oct 3 2024, 07:29:13) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.3.10
> langchain: 0.3.1
> langchain_community: 0.3.1
> langsmith: 0.1.129
> langchain_elasticsearch: 0.3.0
> langchain_experimental: 0.3.2
> langchain_ollama: 0.2.0
> langchain_openai: 0.2.2
> langchain_text_splitters: 0.3.0
> langgraph: 0.2.34
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.8
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> elasticsearch[vectorstore-mmr]: Installed. No version info available.
> httpx: 0.27.2
> jsonpatch: 1.33
> langgraph-checkpoint: 2.0.1
> numpy: 1.26.4
> ollama: 0.3.3
> openai: 1.50.2
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.5.2
> PyYAML: 6.0.2
> requests: 2.32.3
> SQLAlchemy: 2.0.35
> tenacity: 8.5.0
> tiktoken: 0.7.0
> typing-extensions: 4.12.2 | 🤖:bug,stale,investigate | low | Critical |
2,612,995,970 | pytorch | DISABLED test_file_reader_no_memory_leak (__main__.TestScript) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_file_reader_no_memory_leak&suite=TestScript&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/32039218367).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_file_reader_no_memory_leak`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_jit.py", line 12806, in test_file_reader_no_memory_leak
self.assertLess(peak_from_file, peak_from_string * 500)
File "/opt/conda/envs/py_3.11/lib/python3.11/unittest/case.py", line 1259, in assertLess
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.11/lib/python3.11/unittest/case.py", line 703, in fail
raise self.failureException(msg)
AssertionError: 269112 not less than 100000
To execute this test, run the following from the base repo dir:
python test/test_jit.py TestScript.test_file_reader_no_memory_leak
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_jit.py`
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @clee2000 | oncall: jit,module: flaky-tests,skipped,module: unknown | low | Critical |
2,612,999,047 | kubernetes | There is a conflict between the metrics of controller-runtime and component-base, the metric workqueue_depth of controller-runtime repository not take effect | ### What happened?
https://github.com/kubernetes-sigs/controller-runtime/blob/7399a3a595bf254add9d0c96c49af462e1aac193/pkg/metrics/workqueue.go#L99
https://github.com/kubernetes/component-base/blob/03d57670a9cda43def5d9c960823d6d4558e99ff/metrics/prometheus/workqueue/metrics.go#L101
Both repositories try to set the provider, but only the earliest call takes effect. When the component-base library is initialized first, the workqueue_depth metric from component-base is used and the metric in controller-runtime does not work. As a result, the default metrics exposed by controller-runtime cannot show the workqueue_depth value.
Is there any hint or recommendation for handling that?
### What did you expect to happen?
The workqueue_depth metric provided by the controller-runtime repository should take effect.
### How can we reproduce it (as minimally and precisely as possible)?
The component-base library might not be a direct dependency; however, other libraries such as k8s.io/apiextensions-apiserver may pull it in. If code in those libraries calls component-base functions, the initialization code in component-base will run and may call workqueue.SetProvider before controller-runtime does, which prevents controller-runtime from later setting its own workqueue_depth metrics.
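The first-registration-wins behavior can be sketched abstractly (Python is used purely for illustration here; the real mechanism is Go code in client-go's workqueue package, which guards provider registration so it effectively runs only once):

```python
class WorkqueueMetrics:
    """Toy model of the workqueue metrics factory: the provider can be
    set only once, so whichever library's init code runs first wins and
    every later set_provider call is silently ignored."""

    def __init__(self):
        self._provider = None

    def set_provider(self, name: str) -> None:
        if self._provider is not None:
            return  # later registration has no effect
        self._provider = name

    @property
    def provider(self):
        return self._provider

metrics = WorkqueueMetrics()
metrics.set_provider("component-base")      # e.g. triggered first via a transitive dependency's init
metrics.set_provider("controller-runtime")  # too late: ignored
```

In this situation controller-runtime's workqueue_depth collector is never wired up, matching the symptom above.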
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.17", GitCommit:"953be8927218ec8067e1af2641e540238ffd7576", GitTreeState:"clean", BuildDate:"2023-02-22T13:34:27Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.17", GitCommit:"953be8927218ec8067e1af2641e540238ffd7576", GitTreeState:"clean", BuildDate:"2023-02-22T13:27:46Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}
```
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/instrumentation,triage/accepted | low | Minor |
2,613,013,231 | opencv | OCL sepFilter2D and Scharr freeze for several minutes | ### System Information
OpenCV version: 4.10.0 (appeared earlier?)
Device: Nothing Phone 2
OS: Android 14
CPU: Qualcomm Snapdragon 8+ Gen1
iGPU: QUALCOMM Adreno(TM) (OpenCL 3.0 Adreno(TM) 730)
### Detailed description
There are two test cases at which `cv::sepFilter2D` and `cv::Scharr()` misbehave, i.e. freeze for about 240 secs (Scharr) or 60 secs (sepFilter2D) and quit with wrong results.
In both cases they invoke OpenCL version and freeze [at the same place](https://github.com/opencv/opencv/blob/e4bcd46f64ee44b10a21cbd048bb428a83f9f417/modules/imgproc/src/filter.dispatch.cpp#L849) after running `col_filter` kernel.
The test cases are:
* `OCL_Filter/Scharr3x3_cols16_rows2.Mat/26, where GetParam() = (8UC1, 0, 1x0, BORDER_REFLECT, 0.2, false, 1)`
- full ROI of size 16x30
- kernel arguments:
```
-D RADIUSY=1 -D LSIZE0=16 -D LSIZE1=10 -D CN=1 -D srcT=float -D dstT=uchar
-D convertToFloatT=noconvert -D floatT=float -D convertToDstT=convert_uchar_sat_rte
-D srcT1=float -D dstT1=uchar -D SHIFT_BITS=0
-D COEFF=DIG(0.6000000238f)DIG(2.000000000f)DIG(0.6000000238f)
```
* `OCL_ImageProc/SepFilter2D.Mat/10, where GetParam() = (CV_8U, Channels(1), BORDER_REFLECT, true, false)`
- full ROI of size 60x51
- kernel arguments:
```
-D RADIUSY=3 -D LSIZE0=16 -D LSIZE1=10 -D CN=1 -D srcT=float -D dstT=uchar
-D convertToFloatT=noconvert -D floatT=float -D convertToDstT=convert_uchar_sat_rte
-D srcT1=float -D dstT1=uchar -D SHIFT_BITS=0
-D COEFF=DIG(0.008996695280f)DIG(-0.01206895430f)DIG(0.3683320880f)DIG(-0.1414978057f)DIG(-0.1748190373f)DIG(-0.1139794663f)DIG(-0.1803059429f)
```
The root cause is still unclear to me:
* I've found only two such cases; no idea why these specific two out of several hundred
* Changing local group size or turning off barriers does not change the behavior
* All attempts to run this code with hardware sanitizer enabled result in an immediate segfault even before the first test is started
### Steps to reproduce
Build OpenCV for ARM Android, upload binaries to device and run the tests.
CMake flags for build:
```
WITH_OPENCL : true
ANDROID_ABI : "arm64-v8a"
ANDROID_SDK : "~/Android/Sdk"
ANDROID_NDK : "~/Android/Sdk/ndk/26.1.10909125"
CMAKE_TOOLCHAIN_FILE : "~/Android/Sdk/ndk/26.1.10909125/build/cmake/android.toolchain.cmake"
BUILD_ANDROID_PROJECTS : false
```
Upload binary:
```
adb push ${OpenCV_build_directory}/bin/opencv_test_imgproc /data/local/tmp
```
Run:
```
adb shell '/data/local/tmp/opencv_test_imgproc --gtest_filter=*OCL_ImageProc/SepFilter2D.Mat/10'
```
```
adb shell '/data/local/tmp/opencv_test_imgproc --gtest_filter=*OCL_Filter/Scharr3x3_cols16_rows2.Mat/26'
```
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: imgproc,platform: android,category: ocl | low | Minor |
2,613,037,603 | vscode | augmented_images = data_aug(img) |
Type: <b>Bug</b>
Auto-generated text from notebook cell performance. The duration for the renderer, VS Code Builtin Notebook Output Renderer, is slower than expected.
Execution Time: 42ms
Renderer Duration: 2ms
VS Code version: Code 1.94.2 (384ff7382de624fb94dbaf6da11977bba1ecd427, 2024-10-09T16:08:44.566Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|12th Gen Intel(R) Core(TM) i5-12450H (12 x 2496)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.73GB (7.29GB free)|
|Process Argv|C:\\Users\\debas\\Downloads\\Facial Verification with a Siamese Network - Final.ipynb --crash-reporter-id ade5cbed-8fa8-49e0-a845-9ba4f7020180|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (32)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-django|bat|1.15.0
gitlens|eam|15.6.2
vscode-html-css|ecm|2.0.10
copilot|Git|1.242.0
copilot-chat|Git|0.21.2
rainbow-csv|mec|3.12.0
vscode-docker|ms-|1.29.3
csdevkit|ms-|1.11.14
csharp|ms-|2.50.27
vscode-dotnet-runtime|ms-|2.2.1
debugpy|ms-|2024.12.0
python|ms-|2024.16.1
jupyter|ms-|2024.9.1
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.19
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
cmake-tools|ms-|1.19.52
cpptools|ms-|1.22.10
cpptools-extension-pack|ms-|1.3.0
vsliveshare|ms-|1.0.5941
java|red|1.35.1
cmake|twx|0.0.17
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.0
vscode-java-dependency|vsc|0.24.0
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.42.0
vscode-maven|vsc|0.44.0
JavaScriptSnippets|xab|1.8.0
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
945dj816:31013170
dvdeprecation:31068756
dwnewjupytercf:31046870
impr_priority:31102340
nativerepl2:31139839
refactort:31108082
pythonrstrctxt:31112756
wkspc-onlycs-t:31132770
wkspc-ranged-t:31151552
cf971741:31144450
defaultse:31146405
iacca2:31156134
notype1:31157159
5fd0e150:31155592
dwcopilot:31164048
icondisabled:31158250
```
</details>
<!-- generated by issue reporter --> | under-discussion | low | Critical |
2,613,042,071 | svelte | Svelte 5: onMount accepts async function, $effect does not | ### Describe the bug
In most cases, I'm trying to replace onMount with $effect, which seems to be the better way to go. However, $effect doesn't allow an async function:
```
<script lang="ts">
import type { Snippet } from 'svelte'
interface Props {
theme: string
children?: Snippet
}
let { theme, children }: Props = $props()
$effect(async () => {
await import(`@styles/themes/${theme}.scss`)
})
</script>
{@render children?.()}
```
Typing error:
```
Argument of type '() => Promise<void>' is not assignable to parameter of type '() => void | (() => void)'.
Type 'Promise<void>' is not assignable to type 'void | (() => void)'.ts(2345)
```
### Reproduction
See above
### Logs
_No response_
### System Info
```shell
System:
OS: macOS 15.0.1
CPU: (12) arm64 Apple M3 Pro
Memory: 76.77 MB / 18.00 GB
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.15.1 - ~/.asdf/installs/nodejs/20.15.1/bin/node
Yarn: 4.3.1 - ~/.asdf/installs/nodejs/20.15.1/bin/yarn
npm: 10.8.2 - ~/.asdf/plugins/nodejs/shims/npm
bun: 1.1.0 - ~/.bun/bin/bun
Browsers:
Chrome: 130.0.6723.70
Safari: 18.0.1
npmPackages:
svelte: ^5.1.2 => 5.1.2
```
### Severity
annoyance | types / typescript | medium | Critical |
2,613,046,069 | flutter | gclient sync of the engine cannot pull flutter/java/openjdk/linux-aarch64 | I get an error saying gclient cannot find `flutter/java/openjdk/linux-aarc64` on aarch64 Linux. Looks like it doesn't exist? https://chrome-infra-packages.appspot.com/p/flutter/java/openjdk
```
> ________ running 'cipd ensure -log-level error -root /nix/store/q0pwqcvz1il5b0b38h6r8agnifx8v2cr-flutter-engine-source-af0f0d559c8a87d912a20971bbd84afc80a54b0f-x86_64-linux-x86_64-linux -ensure-file /build/tmpzwogih2k.ensure' in '.'
> Errors:
> failed to resolve flutter/java/openjdk/linux-arm64@version:17 (line 23): no such package: flutter/java/openjdk/linux-arm64
> Error: Command 'cipd ensure -log-level error -root /nix/store/q0pwqcvz1il5b0b38h6r8agnifx8v2cr-flutter-engine-source-af0f0d559c8a87d912a20971bbd84afc80a54b0f-x86_64-linux-x86_64-linux -ensure-file /build/tmpzwogih2k.ensure' returned non-zero exit status 1
> Errors:
> failed to resolve flutter/java/openjdk/linux-arm64@version:17 (line 23): no such package: flutter/java/openjdk/linux-arm64
``` | engine,platform-linux,P3,platform-host-arm,platform-target-arm,e: local-engine-development,team-engine,triaged-engine | low | Critical |
2,613,063,015 | next.js | Error next js 15 image component | ### Link to the code that reproduces this issue
https://codesandbox.io/p/github/Farruh-JS/next15_image_issue/master
### To Reproduce
Codesandbox link: https://codesandbox.io/p/github/Farruh-JS/next15_image_issue/master
Github link: https://github.com/Farruh-JS/next15_image_issue
I don't know why, but on Codesandbox everything worked fine for me; you should run this locally.

### Current vs. Expected behavior
If you use the Next Image component, some random images will return a 500 error. However, if you render the same image with an img tag, it will work fine.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Binaries:
Node: 20.11.1
npm: 10.5.0
Relevant Packages:
next: 15.0.1
eslint-config-next: 15.0.0-canary.148
react: 19.0.0-rc-69d4b800-20241021
react-dom: 19.0.0-rc-69d4b800-20241021
typescript: 5
Next.js Config:
images: {
remotePatterns: [
{ protocol: 'https', hostname: 'ik.imagekit.io' },
{
protocol: 'https',
hostname: 'storage.googleapis.com',
port: '',
pathname: '/polebor/site/media/original/**',
},
{ protocol: 'https', hostname: 'mp.softly.uz' },
],
},
```
### Which area(s) are affected? (Select all that apply)
Image (next/image)
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local)
### Additional context
I tested my reproduction against different canary releases, including the latest 15.0.2-canary.6, and the first one that introduced the bug was 15.0.0; reverting to 14.2.14 works.
I have also tried on my Windows machine and got the same problem.
| bug,Image (next/image) | low | Critical |
2,613,133,338 | godot | GraphEdit in the editor scales GraphNodes according to editor scale but not positions, leading to broken display at scales different than the design scale | ### Tested versions
Godot 4.3 Stable
### System information
Windows 10 64, Godot 4.3 Stable, Forward+
### Issue description
The Godot editor does not respect the grid positions of visual nodes (in shaders or other graphs) if the UI scale has changed. This is crucial for public projects or games developed by multiple people on different screens, or even for a single person working on both a desktop (100% scale) and a laptop (150-200% scale).
### You made a node graph, when your system is at 100% scale, it looks clean:

### With 150% scale, nodes are mashed together:

### With 200% scale, it gets even worse:

The solution is not just to change the sizes of the nodes themselves, but also to make them respect the grid (and reorganize accordingly).
### Steps to reproduce
1. Use system scale of 100% (and/or editor scale of 100%)
2. Make a visual shader with several nodes aligned in a grid, save it
3. Change UI scale (in the system or in the Godot Editor), relaunch
4. The grid is now completely messed up
### Minimal reproduction project (MRP)
You can try my graph from here: https://github.com/ArseniyMirniy/Godot-4-Free-Color-Correction-and-Screen-Effects-Visual-Shader | bug,topic:editor,confirmed,topic:visualscript,topic:shaders | low | Critical |
2,613,142,151 | godot | When system scale changes and Godot 4 is running, it tries to adjust and usually crashes | ### Tested versions
Godot 4.3 Stable
### System information
Windows 10 and 11, Godot 4.3
### Issue description
Godot 4 always follows the system scale, and there are many issues with that: it ignores the scale defined in the editor settings (if the system scale changes, Godot changes the editor scale again), it breaks UI elements, and it tries to adjust at runtime.
If the Godot 4.3 editor is running and the user changes the scale in the Windows OS settings, the app starts consuming 100% CPU for a while and tries to rescale everything at runtime. Very often it does not succeed and crashes; sometimes the rescale works (in my test, 4 out of 10 rescales worked and 6 crashed).
### Steps to reproduce
1. Launch Godot 4.3
2. Open Windows 10 OS settings
3. Change scale and apply
4. Godot either freezes for 10-15 seconds and changes scale or crashes.
### Minimal reproduction project (MRP)
Does not require a project. | bug,platform:windows,topic:porting,needs testing,crash | low | Critical |
2,613,147,626 | go | govulncheck-action: Warning: Both go-version and go-version-file inputs are specified, only go-version will be used while only 'go-version-file: go.mod' is specified | ### govulncheck version
golang/govulncheck-action@v1.0.4
### Does this issue reproduce at the latest version of golang.org/x/vuln?
- Yes.
### Output of `go env` in your module/workspace:
```shell
-
```
### What did you do?
```
- uses: golang/govulncheck-action@v1.0.4
with:
go-version-file: go.mod
go-package: ./...
```
### What did you see happen?
Warning: Both go-version and go-version-file inputs are specified, only go-version will be used
### What did you expect to see?
No warning, since only `go-version-file: go.mod` is specified. In that case the action should omit `go-version` entirely and use only the version defined in the `go.mod` file. Currently it uses a different Go version from the one defined in `go.mod`. | NeedsInvestigation,vulncheck or vulndb | low | Minor |
2,613,151,881 | kubernetes | CRD Mutation handler with leases | ### What would you like to be added?
Kubernetes has a leases feature. If an API server is deployed with a v1 CRD, we could create another v10 version of the API server; when deploying CRDs, we could use apiVersion v10 on a new lease valid for 10 hours, so that we can test within those 10 hours and then promote the change as a v1 CRD upgrade.
### Why is this needed?
This is to have an environment as close as possible to the production cluster, where we can test against a real-time environment with automation. This is not only for CRDs; it can be applied to every resource in Kubernetes, but we can start with CRDs and cluster-scoped resources. | sig/api-machinery,kind/feature,triage/accepted | low | Minor |
2,613,181,044 | rust | ICE: `could not resolve DefId` | <!--
[31mICE[0m: Rustc ./a.rs '' 'error: internal compiler error: could not resolve DefId(0:3 ~ a[cd3a]::UnsafeCopy)', 'error: internal compiler error: could not resolve DefId(0:3 ~ a[cd3a]::UnsafeCopy)'
File: /tmp/im/a.rs
-->
auto-reduced (treereduce-rust):
````rust
trait UnsafeCopy<'a, T: Copy>
where
for<'b> <Self as UnsafeCopy<'b, T>>::Item: use<'i, 'a, Self>,
{
}
````
original:
````rust
trait UnsafeCopy<'a, T: Copy>
where
for<'b> <Self as UnsafeCopy<'b, T>>::Item: use<'i, 'a, Self>,
{
type Item;
fn bug(item: &Self::Item) -> (
//~^ ERROR `dyn` is a keyword
//~| WARN this is accepted in the current edition
// Note that we do not lint nor fix occurrences under macros
dyn
) {
let x: Monster = **item;
&x as *const _;
}
}
impl<T: Copy + std::ops::Deref> UnsafeCopy<'_, T> for T {
type Item = T;
//~^ type mismatch resolving `<T as Deref>::Target == T`
}
pub fn main() {
<&'static str>::bug(&"");
//~^ type mismatch resolving `<&str as Deref>::Target == &str`
}
````
Version information
````
rustc 1.84.0-nightly (788202a2c 2024-10-25)
binary: rustc
commit-hash: 788202a2cef5dde0743490fd51515f373d4207a6
commit-date: 2024-10-25
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.1
````
Possibly related line of code:
https://github.com/rust-lang/rust/blob/788202a2cef5dde0743490fd51515f373d4207a6/compiler/rustc_hir_analysis/src/collect/resolve_bound_vars.rs#L1429-L1441
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc `
<details><summary><strong>Program output</strong></summary>
<p>
```
error: `use<...>` precise capturing syntax not allowed in bounds
--> /tmp/icemaker_global_tempdir.Xk7Jb50ziwEv/rustc_testrunner_tmpdir_reporting.iDnmKNe3NCiP/mvce.rs:3:48
|
3 | for<'b> <Self as UnsafeCopy<'b, T>>::Item: use<'i, 'a, Self>,
| ^^^^^^^^^^^^^^^^^
error[E0261]: use of undeclared lifetime name `'i`
--> /tmp/icemaker_global_tempdir.Xk7Jb50ziwEv/rustc_testrunner_tmpdir_reporting.iDnmKNe3NCiP/mvce.rs:3:52
|
3 | for<'b> <Self as UnsafeCopy<'b, T>>::Item: use<'i, 'a, Self>,
| ^^ undeclared lifetime
|
= note: for more information on higher-ranked polymorphism, visit https://doc.rust-lang.org/nomicon/hrtb.html
help: consider making the bound lifetime-generic with a new `'i` lifetime
|
3 | for<'i, 'b> <Self as UnsafeCopy<'b, T>>::Item: use<'i, 'a, Self>,
| +++
help: consider introducing lifetime `'i` here
|
1 | trait UnsafeCopy<'i, 'a, T: Copy>
| +++
error[E0576]: cannot find associated type `Item` in trait `UnsafeCopy`
--> /tmp/icemaker_global_tempdir.Xk7Jb50ziwEv/rustc_testrunner_tmpdir_reporting.iDnmKNe3NCiP/mvce.rs:3:42
|
3 | for<'b> <Self as UnsafeCopy<'b, T>>::Item: use<'i, 'a, Self>,
| ^^^^ not found in `UnsafeCopy`
error[E0601]: `main` function not found in crate `mvce`
--> /tmp/icemaker_global_tempdir.Xk7Jb50ziwEv/rustc_testrunner_tmpdir_reporting.iDnmKNe3NCiP/mvce.rs:5:2
|
5 | }
| ^ consider adding a `main` function to `/tmp/icemaker_global_tempdir.Xk7Jb50ziwEv/rustc_testrunner_tmpdir_reporting.iDnmKNe3NCiP/mvce.rs`
error: internal compiler error: could not resolve DefId(0:3 ~ mvce[1259]::UnsafeCopy)
--> /tmp/icemaker_global_tempdir.Xk7Jb50ziwEv/rustc_testrunner_tmpdir_reporting.iDnmKNe3NCiP/mvce.rs:3:60
|
3 | for<'b> <Self as UnsafeCopy<'b, T>>::Item: use<'i, 'a, Self>,
| ^^^^
thread 'rustc' panicked at compiler/rustc_hir_analysis/src/collect/resolve_bound_vars.rs:1435:14:
Box<dyn Any>
stack backtrace:
0: 0x77b4b869d63a - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::h5e8099de960d4cc6
1: 0x77b4b8e041ca - core::fmt::write::hf87b533eacaf516a
2: 0x77b4ba0a72d1 - std::io::Write::write_fmt::hb58c604a9dfe85f1
3: 0x77b4b869d492 - std::sys::backtrace::BacktraceLock::print::h922a09e37c27473c
4: 0x77b4b869f996 - std::panicking::default_hook::{{closure}}::ha4ce1ceabb39d9dc
5: 0x77b4b869f7e0 - std::panicking::default_hook::hdbfa0836e0b339ee
6: 0x77b4b771d2ef - std[a585c2783b39f442]::panicking::update_hook::<alloc[4209cb77a293fca]::boxed::Box<rustc_driver_impl[a403acafb158cb6a]::install_ice_hook::{closure#0}>>::{closure#0}
7: 0x77b4b86a00a8 - std::panicking::rust_panic_with_hook::h87bc2ca479f0c940
8: 0x77b4b7756971 - std[a585c2783b39f442]::panicking::begin_panic::<rustc_errors[5a9eb276510fb09c]::ExplicitBug>::{closure#0}
9: 0x77b4b7749916 - std[a585c2783b39f442]::sys::backtrace::__rust_end_short_backtrace::<std[a585c2783b39f442]::panicking::begin_panic<rustc_errors[5a9eb276510fb09c]::ExplicitBug>::{closure#0}, !>
10: 0x77b4b7744f19 - std[a585c2783b39f442]::panicking::begin_panic::<rustc_errors[5a9eb276510fb09c]::ExplicitBug>
11: 0x77b4b77604e1 - <rustc_errors[5a9eb276510fb09c]::diagnostic::BugAbort as rustc_errors[5a9eb276510fb09c]::diagnostic::EmissionGuarantee>::emit_producing_guarantee
12: 0x77b4b780626f - <rustc_errors[5a9eb276510fb09c]::DiagCtxtHandle>::span_bug::<rustc_span[babd1ca03564a0eb]::span_encoding::Span, alloc[4209cb77a293fca]::string::String>
13: 0x77b4b963459b - <rustc_hir_analysis[e81663bf6ec75b26]::collect::resolve_bound_vars::BoundVarContext>::resolve_type_ref
14: 0x77b4b963571d - <rustc_hir_analysis[e81663bf6ec75b26]::collect::resolve_bound_vars::BoundVarContext as rustc_hir[5879a0835a2abccf]::intravisit::Visitor>::visit_where_predicate
15: 0x77b4b9634ddf - <rustc_hir_analysis[e81663bf6ec75b26]::collect::resolve_bound_vars::BoundVarContext as rustc_hir[5879a0835a2abccf]::intravisit::Visitor>::visit_generics
16: 0x77b4b9632080 - rustc_hir[5879a0835a2abccf]::intravisit::walk_item::<rustc_hir_analysis[e81663bf6ec75b26]::collect::resolve_bound_vars::BoundVarContext>
17: 0x77b4b9631459 - <rustc_hir_analysis[e81663bf6ec75b26]::collect::resolve_bound_vars::BoundVarContext as rustc_hir[5879a0835a2abccf]::intravisit::Visitor>::visit_item
18: 0x77b4b962d8b6 - rustc_hir_analysis[e81663bf6ec75b26]::collect::resolve_bound_vars::resolve_bound_vars
19: 0x77b4b962d2dc - rustc_query_impl[cf4848b6d652eea9]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[cf4848b6d652eea9]::query_impl::resolve_bound_vars::dynamic_query::{closure#2}::{closure#0}, rustc_middle[5b3bbff2208691d6]::query::erase::Erased<[u8; 8usize]>>
20: 0x77b4b962a066 - rustc_query_system[b2d751a8ce89225a]::query::plumbing::try_execute_query::<rustc_query_impl[cf4848b6d652eea9]::DynamicConfig<rustc_query_system[b2d751a8ce89225a]::query::caches::VecCache<rustc_hir[5879a0835a2abccf]::hir_id::OwnerId, rustc_middle[5b3bbff2208691d6]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[cf4848b6d652eea9]::plumbing::QueryCtxt, false>
21: 0x77b4b9629a0d - rustc_query_impl[cf4848b6d652eea9]::query_impl::resolve_bound_vars::get_query_non_incr::__rust_end_short_backtrace
22: 0x77b4b96297ec - rustc_query_impl[cf4848b6d652eea9]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[cf4848b6d652eea9]::query_impl::named_variable_map::dynamic_query::{closure#2}::{closure#0}, rustc_middle[5b3bbff2208691d6]::query::erase::Erased<[u8; 8usize]>>
23: 0x77b4b962a066 - rustc_query_system[b2d751a8ce89225a]::query::plumbing::try_execute_query::<rustc_query_impl[cf4848b6d652eea9]::DynamicConfig<rustc_query_system[b2d751a8ce89225a]::query::caches::VecCache<rustc_hir[5879a0835a2abccf]::hir_id::OwnerId, rustc_middle[5b3bbff2208691d6]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[cf4848b6d652eea9]::plumbing::QueryCtxt, false>
24: 0x77b4b9629acd - rustc_query_impl[cf4848b6d652eea9]::query_impl::named_variable_map::get_query_non_incr::__rust_end_short_backtrace
25: 0x77b4b97aa25e - <rustc_middle[5b3bbff2208691d6]::ty::context::TyCtxt>::named_bound_var
26: 0x77b4b929508c - rustc_hir_analysis[e81663bf6ec75b26]::collect::predicates_of::gather_explicit_predicates_of
27: 0x77b4b9aec38e - rustc_hir_analysis[e81663bf6ec75b26]::collect::predicates_of::trait_explicit_predicates_and_bounds
28: 0x77b4b9aec318 - rustc_query_impl[cf4848b6d652eea9]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[cf4848b6d652eea9]::query_impl::trait_explicit_predicates_and_bounds::dynamic_query::{closure#2}::{closure#0}, rustc_middle[5b3bbff2208691d6]::query::erase::Erased<[u8; 32usize]>>
29: 0x77b4b9aec2eb - <rustc_query_impl[cf4848b6d652eea9]::query_impl::trait_explicit_predicates_and_bounds::dynamic_query::{closure#2} as core[c9fb322ef39a2568]::ops::function::FnOnce<(rustc_middle[5b3bbff2208691d6]::ty::context::TyCtxt, rustc_span[babd1ca03564a0eb]::def_id::LocalDefId)>>::call_once
30: 0x77b4b9aebdc4 - rustc_query_system[b2d751a8ce89225a]::query::plumbing::try_execute_query::<rustc_query_impl[cf4848b6d652eea9]::DynamicConfig<rustc_query_system[b2d751a8ce89225a]::query::caches::VecCache<rustc_span[babd1ca03564a0eb]::def_id::LocalDefId, rustc_middle[5b3bbff2208691d6]::query::erase::Erased<[u8; 32usize]>>, false, false, false>, rustc_query_impl[cf4848b6d652eea9]::plumbing::QueryCtxt, false>
31: 0x77b4b9aebb2b - rustc_query_impl[cf4848b6d652eea9]::query_impl::trait_explicit_predicates_and_bounds::get_query_non_incr::__rust_end_short_backtrace
32: 0x77b4b9aeba56 - rustc_middle[5b3bbff2208691d6]::query::plumbing::query_get_at::<rustc_query_system[b2d751a8ce89225a]::query::caches::VecCache<rustc_span[babd1ca03564a0eb]::def_id::LocalDefId, rustc_middle[5b3bbff2208691d6]::query::erase::Erased<[u8; 32usize]>>>
33: 0x77b4b928e4f9 - rustc_query_impl[cf4848b6d652eea9]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[cf4848b6d652eea9]::query_impl::explicit_predicates_of::dynamic_query::{closure#2}::{closure#0}, rustc_middle[5b3bbff2208691d6]::query::erase::Erased<[u8; 32usize]>>
34: 0x77b4b928d6e6 - rustc_query_system[b2d751a8ce89225a]::query::plumbing::try_execute_query::<rustc_query_impl[cf4848b6d652eea9]::DynamicConfig<rustc_query_system[b2d751a8ce89225a]::query::caches::DefIdCache<rustc_middle[5b3bbff2208691d6]::query::erase::Erased<[u8; 32usize]>>, false, false, false>, rustc_query_impl[cf4848b6d652eea9]::plumbing::QueryCtxt, false>
35: 0x77b4b928d3a3 - rustc_query_impl[cf4848b6d652eea9]::query_impl::explicit_predicates_of::get_query_non_incr::__rust_end_short_backtrace
36: 0x77b4b928c28c - rustc_hir_analysis[e81663bf6ec75b26]::collect::predicates_of::predicates_of
37: 0x77b4b928c19f - rustc_query_impl[cf4848b6d652eea9]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[cf4848b6d652eea9]::query_impl::predicates_of::dynamic_query::{closure#2}::{closure#0}, rustc_middle[5b3bbff2208691d6]::query::erase::Erased<[u8; 32usize]>>
38: 0x77b4b928d6ce - rustc_query_system[b2d751a8ce89225a]::query::plumbing::try_execute_query::<rustc_query_impl[cf4848b6d652eea9]::DynamicConfig<rustc_query_system[b2d751a8ce89225a]::query::caches::DefIdCache<rustc_middle[5b3bbff2208691d6]::query::erase::Erased<[u8; 32usize]>>, false, false, false>, rustc_query_impl[cf4848b6d652eea9]::plumbing::QueryCtxt, false>
39: 0x77b4b928d2a3 - rustc_query_impl[cf4848b6d652eea9]::query_impl::predicates_of::get_query_non_incr::__rust_end_short_backtrace
40: 0x77b4b916fac4 - <rustc_hir_analysis[e81663bf6ec75b26]::collect::CollectItemTypesVisitor as rustc_hir[5879a0835a2abccf]::intravisit::Visitor>::visit_item
41: 0x77b4b5dcb518 - rustc_hir_analysis[e81663bf6ec75b26]::check::wfcheck::check_well_formed
42: 0x77b4b9728c09 - rustc_query_impl[cf4848b6d652eea9]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[cf4848b6d652eea9]::query_impl::check_well_formed::dynamic_query::{closure#2}::{closure#0}, rustc_middle[5b3bbff2208691d6]::query::erase::Erased<[u8; 1usize]>>
43: 0x77b4b9728359 - rustc_query_system[b2d751a8ce89225a]::query::plumbing::try_execute_query::<rustc_query_impl[cf4848b6d652eea9]::DynamicConfig<rustc_query_system[b2d751a8ce89225a]::query::caches::VecCache<rustc_span[babd1ca03564a0eb]::def_id::LocalDefId, rustc_middle[5b3bbff2208691d6]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[cf4848b6d652eea9]::plumbing::QueryCtxt, false>
44: 0x77b4b9727fd0 - rustc_query_impl[cf4848b6d652eea9]::query_impl::check_well_formed::get_query_non_incr::__rust_end_short_backtrace
45: 0x77b4b9728e86 - rustc_hir_analysis[e81663bf6ec75b26]::check::wfcheck::check_mod_type_wf
46: 0x77b4b9728cb1 - rustc_query_impl[cf4848b6d652eea9]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[cf4848b6d652eea9]::query_impl::check_mod_type_wf::dynamic_query::{closure#2}::{closure#0}, rustc_middle[5b3bbff2208691d6]::query::erase::Erased<[u8; 1usize]>>
47: 0x77b4b9d38ec3 - rustc_query_system[b2d751a8ce89225a]::query::plumbing::try_execute_query::<rustc_query_impl[cf4848b6d652eea9]::DynamicConfig<rustc_query_system[b2d751a8ce89225a]::query::caches::DefaultCache<rustc_span[babd1ca03564a0eb]::def_id::LocalModDefId, rustc_middle[5b3bbff2208691d6]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[cf4848b6d652eea9]::plumbing::QueryCtxt, false>
48: 0x77b4b9d38c71 - rustc_query_impl[cf4848b6d652eea9]::query_impl::check_mod_type_wf::get_query_non_incr::__rust_end_short_backtrace
49: 0x77b4b8f9fb51 - rustc_hir_analysis[e81663bf6ec75b26]::check_crate
50: 0x77b4b969e117 - rustc_interface[14453d2331067b08]::passes::run_required_analyses
51: 0x77b4b9b85a1e - rustc_interface[14453d2331067b08]::passes::analysis
52: 0x77b4b9b859f1 - rustc_query_impl[cf4848b6d652eea9]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[cf4848b6d652eea9]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[5b3bbff2208691d6]::query::erase::Erased<[u8; 1usize]>>
53: 0x77b4b9d471ee - rustc_query_system[b2d751a8ce89225a]::query::plumbing::try_execute_query::<rustc_query_impl[cf4848b6d652eea9]::DynamicConfig<rustc_query_system[b2d751a8ce89225a]::query::caches::SingleCache<rustc_middle[5b3bbff2208691d6]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[cf4848b6d652eea9]::plumbing::QueryCtxt, false>
54: 0x77b4b9d46ecf - rustc_query_impl[cf4848b6d652eea9]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
55: 0x77b4b9c265b1 - rustc_interface[14453d2331067b08]::interface::run_compiler::<core[c9fb322ef39a2568]::result::Result<(), rustc_span[babd1ca03564a0eb]::ErrorGuaranteed>, rustc_driver_impl[a403acafb158cb6a]::run_compiler::{closure#0}>::{closure#1}
56: 0x77b4b9c996d4 - std[a585c2783b39f442]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[14453d2331067b08]::util::run_in_thread_with_globals<rustc_interface[14453d2331067b08]::util::run_in_thread_pool_with_globals<rustc_interface[14453d2331067b08]::interface::run_compiler<core[c9fb322ef39a2568]::result::Result<(), rustc_span[babd1ca03564a0eb]::ErrorGuaranteed>, rustc_driver_impl[a403acafb158cb6a]::run_compiler::{closure#0}>::{closure#1}, core[c9fb322ef39a2568]::result::Result<(), rustc_span[babd1ca03564a0eb]::ErrorGuaranteed>>::{closure#0}, core[c9fb322ef39a2568]::result::Result<(), rustc_span[babd1ca03564a0eb]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[c9fb322ef39a2568]::result::Result<(), rustc_span[babd1ca03564a0eb]::ErrorGuaranteed>>
57: 0x77b4b9c99b0d - <<std[a585c2783b39f442]::thread::Builder>::spawn_unchecked_<rustc_interface[14453d2331067b08]::util::run_in_thread_with_globals<rustc_interface[14453d2331067b08]::util::run_in_thread_pool_with_globals<rustc_interface[14453d2331067b08]::interface::run_compiler<core[c9fb322ef39a2568]::result::Result<(), rustc_span[babd1ca03564a0eb]::ErrorGuaranteed>, rustc_driver_impl[a403acafb158cb6a]::run_compiler::{closure#0}>::{closure#1}, core[c9fb322ef39a2568]::result::Result<(), rustc_span[babd1ca03564a0eb]::ErrorGuaranteed>>::{closure#0}, core[c9fb322ef39a2568]::result::Result<(), rustc_span[babd1ca03564a0eb]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[c9fb322ef39a2568]::result::Result<(), rustc_span[babd1ca03564a0eb]::ErrorGuaranteed>>::{closure#1} as core[c9fb322ef39a2568]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
58: 0x77b4b9c9a5ab - std::sys::pal::unix::thread::Thread::new::thread_start::h197af60b272f0d26
59: 0x77b4bb40639d - <unknown>
60: 0x77b4bb48b49c - <unknown>
61: 0x0 - <unknown>
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.84.0-nightly (788202a2c 2024-10-25) running on x86_64-unknown-linux-gnu
query stack during panic:
#0 [resolve_bound_vars] resolving lifetimes for `UnsafeCopy`
#1 [named_variable_map] looking up a named region inside `UnsafeCopy`
end of query stack
error: aborting due to 5 previous errors
Some errors have detailed explanations: E0261, E0576, E0601.
For more information about an error, try `rustc --explain E0261`.
```
</p>
</details>
<!--
query stack:
#0 [resolve_bound_vars] resolving lifetimes for `UnsafeCopy`
#1 [named_variable_map] looking up a named region inside `UnsafeCopy`
-->
| I-ICE,T-compiler,C-bug,S-has-mcve,S-bug-has-test,F-precise_capturing,S-has-bisection | low | Critical |
2,613,200,149 | vscode | All <span> tags in hover have a 4px bottom margin which causes nested spans to have stacking bottom margins | Type: <b>Bug</b>
When an extension provides a hover using the `HoverProvider.provideHover` API, and the hover contains Markdown with HTML in it, every span gets a 4px bottom margin. This was added in https://github.com/microsoft/vscode/pull/107442 due to https://github.com/microsoft/vscode-pull-request-github/issues/1937
This means that nested spans will get stacking margins, and vertically stacked spans will have gaps between them which may be undesirable depending on how the author of the extension wants to style the hover.
Because of the (understandably) restrictive CSS allowed in `markdownRenderer.ts`, we cannot remove this bottom margin.
I would consider the stacking margins with nested spans to be a bug, because nested spans may be required to do certain types of styling of text and it does not seem correct for that to cause the margin to expand with each level of nesting.
Demonstration of problem:

To reproduce this:
1. build and install the extension at https://github.com/navtej-ac/vscode-hover-sample-extension ... It is based on the Hello World Minimal Sample from the `vscode-extension-samples` repo.
2. With the extension installed, activate it by running the "Hello World" command
3. Then hover on any line the text editor. On every other line we show the hover with background colors and demonstrate both the gap between spans and the background color and the stacking of margins. On the remaining lines we try to show a hover with `margin:0` added to the spans' styling, which causes the HTML sanitizer to strip the styles
Expected: spans with background colors can nest without the outer span increasing in height and spans with backgrounds do not have a vertical gap between them.
Actual: nested spans all have 4px margins which accumulate and vertically stacked spans have a gap between them.
Proposed Fix: Add `margin:0` to the list of allowed styles in `markdownRenderer.ts`.
VS Code version: Code 1.94.2 (Universal) (384ff7382de624fb94dbaf6da11977bba1ecd427, 2024-10-09T16:08:44.566Z)
OS version: Darwin arm64 23.6.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M3 (8 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|11, 9, 9|
|Memory (System)|16.00GB (0.40GB free)|
|Process Argv|--disable-extensions --crash-reporter-id 05b8a457-1897-4b62-8507-c739f0441e0b|
|Screen Reader|no|
|VM|0%|
</details>Extensions disabled<details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
vscaac:30438847
c4g48928:30535728
azure-dev_surveyone:30548225
vscrpc:30673769
2i9eh265:30646982
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
f3je6385:31013174
dvdeprecation:31068756
dwnewjupyter:31046869
impr_priority:31102340
nativerepl1:31139838
refactort:31108082
pythonrstrctxt:31112756
wkspc-onlycs-t:31132770
wkspc-ranged-t:31151552
cf971741:31144450
defaultse:31146405
iacca2:31156134
notype1:31157159
5fd0e150:31155592
dwcopilot:31164048
iconenabled:31158251
```
</details>
<!-- generated by issue reporter --> | bug,editor-hover | low | Critical |
2,613,202,829 | pytorch | cpu faster than mps for both GRUCell and LSTMCell on apple silicon M3 max: MPS Metal | ### 🐛 Describe the bug
I compared the GRUCell & LSTMCell on both the cpu and mps devices on Apple silicon:
````
# Example usage of nn.GRUCell
hidden_size = 128
batch_size = 32
input_size = 128
device = torch.device('cpu')
# Initialize cell and example data
gru_cell = nn.GRUCell(input_size,hidden_size).to(device)
h = torch.randn(batch_size, hidden_size).to(device) # Initial hidden state
inputs = torch.randn(batch_size, input_size).to(device) # Input tensor
````
**Result on CPU** (`%timeit gru_cell(inputs, h)`):
117 µs ± 3.59 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
**Result on MPS** (`%timeit gru_cell(inputs, h)`):
249 µs ± 4.36 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Similar with LSTMCell:
````
# Example usage of nn.LSTMCell
hidden_size = 128
batch_size = 32
input_size = 128
device = torch.device('cpu')
# Initialize cell and example data
lstm_cell = nn.LSTMCell(input_size, hidden_size).to(device)
h = torch.randn(batch_size, hidden_size).to(device)  # Initial hidden state
c = torch.randn(batch_size, hidden_size).to(device)  # Initial cell state
inputs = torch.randn(batch_size, input_size).to(device)  # Input tensor
````
**Result on CPU** (`%timeit lstm_cell(inputs, (h, c))`):
150 µs ± 3.43 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
**Result on MPS** (`%timeit lstm_cell(inputs, (h, c))`):
236 µs ± 3.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
I have also tested the minGRUCell case:
````
import torch
import torch.nn as nn
import torch.nn.functional as F

class minGRUCell(nn.Module):
    def __init__(self, hidden_features):
        super(minGRUCell, self).__init__()
        self.hidden_features = hidden_features
        # Linear layers for input and hidden state transformation
        self.dense_i = nn.Linear(hidden_features, hidden_features)
        self.dense_h = nn.Linear(hidden_features, hidden_features)

    def forward(self, inputs, h):
        # GRU gates
        htilde = self.dense_i(inputs)
        z = torch.sigmoid(self.dense_h(inputs))  # Update gate, i.e. a value in (0, 1)
        # Update hidden state
        new_h = torch.lerp(h, htilde, z)
        return new_h  # Return updated hidden state

# Example usage of the custom minGRUCell
hidden_size = 128
batch_size = 32
input_size = 128
device = torch.device('cpu')

# Initialize cell and example data
mingru_cell = minGRUCell(hidden_size).to(device)
h = torch.randn(batch_size, hidden_size).to(device)  # Initial hidden state
inputs = torch.randn(batch_size, input_size).to(device)  # Input tensor

# Forward pass
new_h = mingru_cell(inputs, h)
````
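As a side note on the `torch.lerp` call in this cell (an observation, not part of the original report): `lerp(h, htilde, z)` computes the convex combination `(1 - z) * h + z * htilde`, which is exactly the minGRU hidden-state update. A quick stdlib-only check of that identity on plain floats (hypothetical values, no torch required):

```python
# Pure-Python check that lerp(a, b, w) == (1 - w) * a + w * b,
# the convex-combination form of the minGRU hidden-state update.
def lerp(a, b, w):
    # Same formulation torch.lerp uses: a + w * (b - a)
    return a + w * (b - a)

h, htilde, z = 0.5, 2.0, 0.25  # hypothetical hidden state, candidate, gate
new_h = lerp(h, htilde, z)
assert abs(new_h - ((1 - z) * h + z * htilde)) < 1e-12
print(new_h)  # 0.875
```
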
**Result on CPU** (`%timeit mingru_cell(inputs, h)`):
24.4 µs ± 269 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
**Result on MPS** (`%timeit mingru_cell(inputs, h)`):
84.3 µs ± 1.23 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Any idea how to improve this ???
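One caveat when reading these numbers (an observation, not part of the original report): MPS kernel launches are asynchronous, so a fair benchmark should include a synchronization call (e.g. `torch.mps.synchronize()`) inside the timed statement, and at this small problem size (batch 32, hidden 128) per-call launch overhead tends to dominate on any GPU backend. A stdlib-only sketch of the timing-harness pattern, with a stand-in workload in place of the cell:

```python
import timeit

def bench(fn, number=1000, repeat=5):
    # Return the best per-call time in microseconds; taking the minimum
    # over repeats filters out scheduling noise, as the timeit docs suggest.
    best = min(timeit.repeat(fn, number=number, repeat=repeat))
    return best / number * 1e6

# Stand-in for gru_cell(inputs, h); on mps you would time something like
#   lambda: (gru_cell(inputs, h), torch.mps.synchronize())
workload = lambda: sum(i * i for i in range(128))

print(f"{bench(workload):.1f} µs per call")
```
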
### Versions
Collecting environment information...
PyTorch version: 2.3.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.1 (arm64)
GCC version: Could not collect
Clang version: 18.1.3
CMake version: version 3.30.3
Libc version: N/A
Python version: 3.11.8 | packaged by conda-forge | (main, Feb 16 2024, 20:49:36) [Clang 16.0.6 ] (64-bit runtime)
Python platform: macOS-15.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Max
Versions of relevant libraries:
[pip3] mypy==0.800
[pip3] mypy_extensions==0.4.4
[pip3] numpy==1.26.3
[pip3] onnx==1.16.1
[pip3] onnx-tf==1.10.0
[pip3] onnx2keras==0.0.24
[pip3] onnxruntime==1.18.0
[pip3] optree==0.12.1
[pip3] torch==2.3.1
[pip3] torch_cluster==1.6.3
[pip3] torch_geometric==2.5.0
[pip3] torch_scatter==2.1.2
[pip3] torch_sparse==0.6.18
[pip3] torch_spline_conv==1.2.2
[pip3] torchdata==0.7.1
[pip3] torchvision==0.18.1
[conda] nomkl 1.0 h5ca1d4c_0 conda-forge
[conda] numpy 1.26.3 pypi_0 pypi
[conda] optree 0.11.0 pypi_0 pypi
[conda] torch 2.3.1 pypi_0 pypi
[conda] torch-cluster 1.6.3 pypi_0 pypi
[conda] torch-geometric 2.5.0 pypi_0 pypi
[conda] torch-scatter 2.1.2 pypi_0 pypi
[conda] torch-sparse 0.6.18 pypi_0 pypi
[conda] torch-spline-conv 1.2.2 pypi_0 pypi
[conda] torchdata 0.7.1 pypi_0 pypi
[conda] torchvision 0.18.1 pypi_0 pypi
cc @msaroufim @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | module: performance,triaged,module: mps | low | Critical |
2,613,265,857 | pytorch | Runtime error: retains_grad_hooks not implemented for compiled autograd | ### 🚀 The feature, motivation and pitch
When we use compiled autograd to compile the backward graph and get the gradient of some activation, PyTorch throws the error "retains_grad_hooks not implemented for compiled autograd". Will PyTorch support compiled autograd with retains_grad hooks?
### Alternatives
_No response_
### Additional context
_No response_
cc @ezyang @chauhang @penguinwu @xmfan @yf225 @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 | triaged,oncall: pt2,module: compiled autograd | low | Critical |
2,613,273,378 | rust | abi_unsupported_vector_types lint is optimization-dependent | The lint guarding against https://github.com/rust-lang/rust/issues/116558 is intended to eventually become a hard error to fix this ABI issue. However, right now the lint is optimization-dependent: it runs during monomorphization to be able to figure out the actual argument types that will be passed at all call sites, and reject them if they need a missing target feature. If a call gets optimized away (e.g. in dead code), the lint will not fire.
I don't know any good way to prevent this, since we need the monomorphization-time information to check whether any of the arguments is a too-large SIMD vector. So this issue mostly serves to let @rust-lang/lang know that this is a thing, and as a place to track any ideas we might have about fixing this, or concerns about making this into an optimization-dependent hard error. | A-lints,T-lang,T-compiler,C-discussion,A-ABI,E-needs-design,L-abi_unsupported_vector_types | low | Critical |
2,613,301,190 | ollama | Does Ollama have plans to support other model types, such as TTS, image, and video? | We deeply appreciate the convenience, speed, and power of Ollama. To cover more application scenarios, we hope that Ollama can add support for other model categories, such as text-to-speech, text-to-image, and text-to-video. With the rapid development of AI, demand will only increase. I hope you can carefully consider it. | model request | low | Minor |