| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,467,974,466 | yt-dlp | [vimeo:user] not all videos extracted | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm asking a question and **not** reporting a bug or requesting a feature
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
yt-dlp finds 26 videos in the playlist instead of the 36 available on the webpage.

### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
F:\_distr\yt>yt-dlp --no-check-certificate --flat-playlist --path F:\VIMEO -vU "https://vimeo.com/coachingupuniversity"
[debug] Command-line config: ['--no-check-certificate', '--flat-playlist', '--path', 'F:\\VIMEO', '-vU', 'https://vimeo.com/coachingupuniversity']
[debug] Encodings: locale cp1251, fs utf-8, pref cp1251, out utf-8 (No VT), error utf-8 (No VT), screen utf-8 (No VT)
[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-7-6.1.7601-SP1 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 2024-07-04-git-03175b587c-essentials_build-www.gyan.dev (setts)
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1830 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.08.06 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.08.06 from yt-dlp/yt-dlp)
[vimeo:user] Extracting URL: https://vimeo.com/coachingupuniversity
[vimeo:user] coachingupuniversity: Downloading page 1
[download] Downloading playlist: up.university
[vimeo:user] coachingupuniversity: Downloading page 2
[vimeo:user] coachingupuniversity: Downloading page 3
[vimeo:user] Playlist up.university: Downloading 26 items of 26
[debug] The information of all playlist entries will be held in memory
[download] Downloading item 1 of 26
[download] Downloading item 2 of 26
[download] Downloading item 3 of 26
[download] Downloading item 4 of 26
[download] Downloading item 5 of 26
[download] Downloading item 6 of 26
[download] Downloading item 7 of 26
[download] Downloading item 8 of 26
[download] Downloading item 9 of 26
[download] Downloading item 10 of 26
[download] Downloading item 11 of 26
[download] Downloading item 12 of 26
[download] Downloading item 13 of 26
[download] Downloading item 14 of 26
[download] Downloading item 15 of 26
[download] Downloading item 16 of 26
[download] Downloading item 17 of 26
[download] Downloading item 18 of 26
[download] Downloading item 19 of 26
[download] Downloading item 20 of 26
[download] Downloading item 21 of 26
[download] Downloading item 22 of 26
[download] Downloading item 23 of 26
[download] Downloading item 24 of 26
[download] Downloading item 25 of 26
[download] Downloading item 26 of 26
[download] Finished downloading playlist: up.university
F:\_distr\yt>yt-dlp -U
Latest version: stable@2024.08.06 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.08.06 from yt-dlp/yt-dlp)
```
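For completeness, this is what the equivalent check looks like through the Python API (a sketch based on the command above; `extract_flat` mirrors `--flat-playlist` and `verbose` mirrors `-v`):

```python
import yt_dlp

# Flat extraction lists playlist entries without downloading anything.
opts = {"extract_flat": True, "verbose": True}
with yt_dlp.YoutubeDL(opts) as ydl:
    info = ydl.extract_info("https://vimeo.com/coachingupuniversity", download=False)

# Per the log above this reports 26 entries, while the webpage lists 36.
print(len(info["entries"]))
```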
| site-bug | low | Critical |
2,468,046,350 | pytorch | `ELU()`'s `alpha` argument with `int`, `complex` or `bool` and `inplace` argument with `int`, `complex` and `float` work against the doc | ### 🐛 Describe the bug
[The doc](https://pytorch.org/docs/stable/generated/torch.nn.ELU.html) of `ELU()` says the types of `alpha` and `inplace` argument are `float` and `bool` respectively as shown below:
- alpha ([float](https://docs.python.org/3/library/functions.html#float)) – the α value for the ELU formulation. Default: 1.0
- inplace ([bool](https://docs.python.org/3/library/functions.html#bool)) – can optionally do the operation in-place. Default: False
But the `alpha` argument with `int`, `complex`, or `bool` values, and the `inplace` argument with `int`, `complex`, or `float` values, work contrary to [the doc](https://pytorch.org/docs/stable/generated/torch.nn.ELU.html), as shown below:
```python
import torch
from torch import nn
my_tensor = torch.tensor([-1., 0., 1.])
elu = nn.ELU(alpha=1, inplace=1)
elu(input=my_tensor)
# tensor([-0.6321, 0.0000, 1.0000])
my_tensor = torch.tensor([-1., 0., 1.])
elu = nn.ELU(alpha=1.+0.j, inplace=1.+0.j)
elu(input=my_tensor)
# tensor([-0.6321, 0.0000, 1.0000])
my_tensor = torch.tensor([-1., 0., 1.])
elu = nn.ELU(alpha=True, inplace=1.)
elu(input=my_tensor)
# tensor([-0.6321, 0.0000, 1.0000])
```
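For context, here is a minimal sketch of the kind of argument validation that would make the constructor match the documented types (hypothetical; `StrictELU` is not part of `torch.nn`):

```python
import torch
from torch import nn


class StrictELU(nn.ELU):
    """Hypothetical subclass that enforces the documented argument types."""

    def __init__(self, alpha=1.0, inplace=False):
        # The doc says float, so reject int/bool/complex for alpha.
        if not isinstance(alpha, float):
            raise TypeError(f"alpha must be float, got {type(alpha).__name__}")
        # The doc says bool, so reject int/float/complex for inplace.
        if not isinstance(inplace, bool):
            raise TypeError(f"inplace must be bool, got {type(inplace).__name__}")
        super().__init__(alpha=alpha, inplace=inplace)


StrictELU(alpha=1, inplace=1)  # raises TypeError instead of silently working
```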
### Versions
```python
import torch
torch.__version__ # 2.3.1+cu121
```
cc @svekars @brycebortree @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: docs,module: nn,triaged,module: python frontend | low | Critical |
2,468,066,198 | pytorch | torch.cond example fails in PyTorch2.4 | ### 🐛 Describe the bug
I don't want to believe this is a PyTorch issue, since it's the documented example.
Try the example from https://pytorch.org/docs/stable/cond.html in PyTorch 2.4.0:
<img width="678" alt="image" src="https://github.com/user-attachments/assets/37a1e038-0f91-4764-bfbd-702b0bb2a909">
<img width="1486" alt="image" src="https://github.com/user-attachments/assets/78ec418f-cbd7-48a3-a697-aa83fef86f86">
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Enterprise
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22631-SP0
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=3600
DeviceID=CPU0
Family=179
L2CacheSize=6144
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=3600
Name=Intel(R) Xeon(R) W-2133 CPU @ 3.60GHz
ProcessorType=3
Revision=21764
Versions of relevant libraries:
[pip3] numpy==1.26.2
[pip3] onnx==1.14.0
[pip3] onnxruntime==1.16.3
[pip3] optree==0.11.0
[pip3] torch==2.4.0
[pip3] torchaudio==0.11.0+cpu
[pip3] torchvision==0.19.0
[conda] Could not collect
cc @albanD | triaged,module: python frontend | low | Critical |
2,468,073,036 | godot | Hover popup menus appear black in Wayland (XWayland) | ### Tested versions
4.3.stable
### System information
Fedora 40 - KDE Plasma 6.1.3 (Wayland) - AMD Radeon Vega 8 Graphics - 8 × AMD Ryzen 5 3550H with Radeon Vega Mobile Gfx
### Issue description
With `single_window_mode` and `prefer_wayland` disabled, popup windows that appear on hover (e.g. toolbar menus or tooltips) render black until the cursor hovers over them.
https://github.com/user-attachments/assets/a393c43c-a4a7-4675-bdf0-d8218353f90e
This only happens on Wayland; in X11, popups appear black for only a split second.
The issue was absent in Godot 4.2.2 and prior; it first appeared in 4.3.
Sorry if the issue is already reported, I couldn't find anything similar.
### Steps to reproduce
1. Open Godot Editor in Wayland
2. Click on a toolbar menu item
3. Hover on a different toolbar menu item
4. Observe the popup menu being black
### Minimal reproduction project (MRP)
Empty project | bug,platform:linuxbsd,topic:porting,topic:gui | low | Minor |
2,468,075,397 | pytorch | Performance: Autodiff stores inputs of convolution layers with non-trainable weight | ### 🐛 Describe the bug
**Description:** Assume we have a convolution layer that processes an input `x` into an output `y` using a kernel `w`, i.e. `y = conv(x, w)`. Further, assume that the input is differentiable, but the weight is not, i.e. `w.requires_grad=False` and `x.requires_grad=True`.
- **Desired behavior:** To backpropagate through this layer, we only need to store `w`, but not `x`.
- **Observed behavior:** It seems that PyTorch stores `x`, too, even when `w.requires_grad=False`.
**Summary:** Since `x` is unnecessarily kept in memory, this increases memory usage.
The same reasoning applies to other layers whose output depends linearly on both the input and the parameters, such as fully-connected layers, transpose convolutions, and batch normalization in evaluation mode.
We found the following layers to be affected by the above behavior:
- `torch.nn.Conv{1,2,3}d`
- `torch.nn.ConvTranspose{1,2,3}d`
- `torch.nn.BatchNormNd` (evaluation mode)
**Solution:** Store only the necessary tensors depending on the `requires_grad` values of layer inputs and weights. See e.g. [here](https://github.com/plutonium-239/memsave_torch/blob/fef6003dc84111de5fa8993f39e6a94194e6382d/memsave_torch/nn/functional/Conv.py#L40-L43) for an example fix that simply defines a new `torch.autograd.Function` for convolution with the correct logic.
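To illustrate the idea, here is a minimal sketch of such a convolution `Function` (our illustration, not the linked memsave implementation; it assumes a 3×3 kernel, stride 1, padding 1, no bias, matching the MWE below):

```python
import torch
import torch.nn.functional as F


class SlimConv2dFn(torch.autograd.Function):
    """Sketch: save x only if the weight needs a gradient, and w only if x does."""

    @staticmethod
    def forward(ctx, x, weight):
        ctx.save_for_backward(
            x if weight.requires_grad else None,  # x is only needed for dL/dw
            weight if x.requires_grad else None,  # w is only needed for dL/dx
        )
        return F.conv2d(x, weight, padding=1)

    @staticmethod
    def backward(ctx, grad_out):
        x, weight = ctx.saved_tensors
        grad_x = grad_w = None
        if ctx.needs_input_grad[0]:  # dL/dx is computed from w and grad_out
            grad_x = torch.nn.grad.conv2d_input(
                (grad_out.shape[0], weight.shape[1], *grad_out.shape[2:]),
                weight, grad_out, padding=1,
            )
        if ctx.needs_input_grad[1]:  # dL/dw is computed from x and grad_out
            grad_w = torch.nn.grad.conv2d_weight(
                x, (grad_out.shape[1], x.shape[1], 3, 3), grad_out, padding=1,
            )
        return grad_x, grad_w
```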
**More details:** We (@f-dangel and me) describe this finding in more detail in our [WANT@ICML'24 paper](https://openreview.net/pdf?id=KsUUzxUK7N) along with visualizations demonstrating this behaviour.
**MWE:** Let's consider a sequence of 5 size-preserving convolutions (each intermediate takes 512 MiB storage). If we set all parameters except for the first layer's weight to non-trainable, PyTorch should discard the non-trainable layers' inputs. However, the memory consumption seems to be the same as if all weights were trainable. See Figure 1 in the [paper](https://openreview.net/pdf?id=KsUUzxUK7N).
Example requirements: `pip install memory_profiler==0.61.0 torch==2.0.1`
```python
"""PyTorch's convolutions store their input when `weight.requires_grad=False`."""

from collections import OrderedDict

from memory_profiler import memory_usage
from torch import rand
from torch.nn import Conv2d, Sequential

SHAPE_X = (256, 8, 256, 256)  # shape of the input
MEM_X = 512  # requires 512 MiB storage
NUM_LAYERS = 5


def setup():
    """Create a deep linear CNN with size-preserving convolutions and an input."""
    layers = OrderedDict()
    for i in range(NUM_LAYERS):
        layers[f"conv{i}"] = Conv2d(8, 8, 3, padding=1, bias=False)
    return rand(*SHAPE_X), Sequential(layers)


# Consider three different scenarios: 1) no parameters are trainable, 2) all
# layers are trainable, 3) only the first layer is trainable


def non_trainable():
    """Forward pass through the CNN with all layers non-trainable."""
    X, net = setup()
    for i in range(NUM_LAYERS):
        getattr(net, f"conv{i}").weight.requires_grad = False
    for name, param in net.named_parameters():
        print(f"{name}, requires_grad={param.requires_grad}")
    return net(X)


def all_trainable():
    """Forward pass through the CNN with all layers trainable."""
    X, net = setup()
    for i in range(NUM_LAYERS):
        getattr(net, f"conv{i}").weight.requires_grad = True
    for name, param in net.named_parameters():
        print(f"{name}, requires_grad={param.requires_grad}")
    return net(X)


def first_trainable():
    """Forward pass through the CNN with first layer trainable."""
    X, net = setup()
    for i in range(NUM_LAYERS):
        getattr(net, f"conv{i}").weight.requires_grad = i == 1
    for name, param in net.named_parameters():
        print(f"{name}, requires_grad={param.requires_grad}")
    return net(X)


if __name__ == "__main__":
    kwargs = {"interval": 1e-4, "max_usage": True}  # memory profiler settings

    # measure memory and print
    mem_setup = memory_usage(setup, **kwargs)
    print(f"Weights+input: {mem_setup:.1f} MiB.")

    mem_non = memory_usage(non_trainable, **kwargs) - mem_setup
    print(
        f"Non-trainable: {mem_non:.1f} MiB (≈{mem_non / MEM_X:.1f} hidden activations)."
    )

    mem_all = memory_usage(all_trainable, **kwargs) - mem_setup
    print(
        f"All-trainable: {mem_all:.1f} MiB (≈{mem_all / MEM_X:.1f} hidden activations)."
    )

    mem_first = memory_usage(first_trainable, **kwargs) - mem_setup
    print(
        f"First-trainable: {mem_first:.1f} MiB (≈{mem_first / MEM_X:.1f} hidden activations)."
    )
```
Output:
```bash
Weights+input: 702.5 MiB.
conv0.weight, requires_grad=False
conv1.weight, requires_grad=False
conv2.weight, requires_grad=False
conv3.weight, requires_grad=False
conv4.weight, requires_grad=False
Non-trainable: 1535.5 MiB # (≈3.0×hidden activations).
conv0.weight, requires_grad=True
conv1.weight, requires_grad=True
conv2.weight, requires_grad=True
conv3.weight, requires_grad=True
conv4.weight, requires_grad=True
All-trainable: 3071.6 MiB # (≈6.0×hidden activations).
conv0.weight, requires_grad=False
conv1.weight, requires_grad=True
conv2.weight, requires_grad=False
conv3.weight, requires_grad=False
conv4.weight, requires_grad=False
First-trainable: 3071.6 MiB # (≈6.0×hidden activations).
```
### Versions
```
Collecting environment information...
PyTorch version: 2.0.1+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Home Single Language
GCC version: (tdm64-1) 10.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.8.6 (tags/v3.8.6:db45529, Sep 23 2020, 15:52:53) [MSC v.1927 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19041-SP0
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1660 Ti
Nvidia driver version: 546.17
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2592
DeviceID=CPU0
Family=198
L2CacheSize=1536
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=2592
Name=Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.24.4
[pip3] pytorch-lightning==1.2.8
[pip3] pytorch-memlab==0.3.0
[pip3] pytorch-metric-learning==1.2.1
[pip3] torch==2.0.1+cu118
[pip3] torch-cluster==1.6.1+pt20cu118
[pip3] torch-geometric==2.3.1
[pip3] torch-scatter==2.1.1+pt20cu118
[pip3] torch-sparse==0.6.17+pt20cu118
[pip3] torch-spline-conv==1.2.2+pt20cu118
[pip3] torchaudio==2.0.2+cu118
[pip3] torcheval==0.0.7
[pip3] torchinfo==1.8.0
[pip3] torchmetrics==0.8.2
[pip3] torchsample==0.1.3
[pip3] torchview==0.2.6
[pip3] torchvision==0.15.2+cu118
[pip3] torchviz==0.0.2
[conda] Could not collect
```
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan | module: autograd,module: memory usage,triaged,enhancement,needs design | low | Critical |
2,468,081,098 | pytorch | AOTDispatcher loses stack traces of bw nodes from compiled autograd | repro:
```
import torch

@torch.compile
def f(x):
    return torch.matmul(x, x).sin()

x = torch.randn(4, 4, requires_grad=True)
with torch._dynamo.utils.maybe_enable_compiled_autograd(True):
    out = f(x)
    out.sum().backward()
```
Running with `TORCH_LOGS="compiled_autograd_verbose,aot"` (per @xmfan), you can see that the compiled autograd graph shows info about backward nodes in the stack trace, but we get "no stacktrace found" in the AOT inference graph. The invocation is sketched below, followed by the output:
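```bash
# repro.py is a stand-in name for the snippet above
TORCH_LOGS="compiled_autograd_verbose,aot" python repro.py
```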
```
INFO: TRACED GRAPH
===== Joint graph 0 =====
/home/hirsheybar/local/b/pytorch/torch/fx/_lazy_graph_module.py class joint_helper(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: "f32[4, 4][4, 1]cpu"; tangents_1: "f32[4, 4][4, 1]cpu";
primals_1, tangents_1, = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /home/hirsheybar/local/b/pytorch/tmp5.py:6 in f, code: return torch.matmul(x, x).sin()
mm: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mm.default(primals_1, primals_1)
sin: "f32[4, 4][4, 1]cpu" = torch.ops.aten.sin.default(mm)
cos: "f32[4, 4][4, 1]cpu" = torch.ops.aten.cos.default(mm); mm = None
mul: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mul.Tensor(tangents_1, cos); tangents_1 = cos = None
permute: "f32[4, 4][1, 4]cpu" = torch.ops.aten.permute.default(primals_1, [1, 0])
mm_1: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mm.default(permute, mul); permute = None
permute_1: "f32[4, 4][1, 4]cpu" = torch.ops.aten.permute.default(primals_1, [1, 0]); primals_1 = None
mm_2: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mm.default(mul, permute_1); mul = permute_1 = None
# File: /home/hirsheybar/local/b/pytorch/tmp5.py:6 in f, code: return torch.matmul(x, x).sin()
add: "f32[4, 4][4, 1]cpu" = torch.ops.aten.add.Tensor(mm_2, mm_1); mm_2 = mm_1 = None
return pytree.tree_unflatten([sin, add], self._out_spec)
INFO: aot_config id: 0, fw_metadata=ViewAndMutationMeta(input_info=[InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=True, keep_input_mutations=True)], output_info=[OutputAliasInfo(output_type=<OutputType.non_alias: 1>, raw_type=<class 'torch._subclasses.functional_tensor.FunctionalTensor'>, base_idx=None, dynamic_dims=set(), requires_grad=True, functional_tensor=None)], num_intermediate_bases=0, keep_input_mutations=True, traced_tangents=[FakeTensor(..., size=(4, 4))], subclass_inp_meta=[0], subclass_fw_graph_out_meta=[0], subclass_tangent_meta=[0], is_train=True, traced_tangent_metas=None, num_symints_saved_for_bw=0, grad_enabled_mutation=None, deterministic=False, static_input_indices=[], tokens={}, indices_of_inputs_that_requires_grad_with_mutations_in_bw=[], bw_donated_idxs=None), inner_meta=ViewAndMutationMeta(input_info=[InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=True, keep_input_mutations=True)], output_info=[OutputAliasInfo(output_type=<OutputType.non_alias: 1>, raw_type=<class 'torch._subclasses.functional_tensor.FunctionalTensor'>, base_idx=None, dynamic_dims=set(), requires_grad=True, functional_tensor=None)], num_intermediate_bases=0, keep_input_mutations=True, traced_tangents=[FakeTensor(..., size=(4, 4))], subclass_inp_meta=[0], subclass_fw_graph_out_meta=[0], subclass_tangent_meta=[0], is_train=True, traced_tangent_metas=None, num_symints_saved_for_bw=0, grad_enabled_mutation=None, deterministic=False, static_input_indices=[], tokens={}, indices_of_inputs_that_requires_grad_with_mutations_in_bw=[], bw_donated_idxs=None)
INFO: TRACED GRAPH
===== Forward graph 0 =====
/home/hirsheybar/local/b/pytorch/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
def forward(self, primals_1: "f32[4, 4][4, 1]cpu"):
# File: /home/hirsheybar/local/b/pytorch/tmp5.py:6 in f, code: return torch.matmul(x, x).sin()
mm: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mm.default(primals_1, primals_1)
sin: "f32[4, 4][4, 1]cpu" = torch.ops.aten.sin.default(mm)
permute: "f32[4, 4][1, 4]cpu" = torch.ops.aten.permute.default(primals_1, [1, 0]); primals_1 = None
return (sin, mm, permute)
INFO: TRACED GRAPH
===== Backward graph 0 =====
<eval_with_key>.1 class GraphModule(torch.nn.Module):
def forward(self, mm: "f32[4, 4][4, 1]cpu", permute: "f32[4, 4][1, 4]cpu", tangents_1: "f32[4, 4][4, 1]cpu"):
# File: /home/hirsheybar/local/b/pytorch/tmp5.py:6 in f, code: return torch.matmul(x, x).sin()
cos: "f32[4, 4][4, 1]cpu" = torch.ops.aten.cos.default(mm); mm = None
mul: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mul.Tensor(tangents_1, cos); tangents_1 = cos = None
mm_1: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mm.default(permute, mul)
mm_2: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mm.default(mul, permute); mul = permute = None
# File: /home/hirsheybar/local/b/pytorch/tmp5.py:6 in f, code: return torch.matmul(x, x).sin()
add: "f32[4, 4][4, 1]cpu" = torch.ops.aten.add.Tensor(mm_2, mm_1); mm_2 = mm_1 = None
return (add,)
INFO: TRACED GRAPH
===== Joint graph 1 =====
/home/hirsheybar/local/b/pytorch/torch/fx/_lazy_graph_module.py class joint_helper(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: "f32[4, 4][4, 1]cpu"; tangents_1: "f32[4, 4][4, 1]cpu";
primals_1, tangents_1, = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /home/hirsheybar/local/b/pytorch/tmp5.py:6 in f, code: return torch.matmul(x, x).sin()
mm: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mm.default(primals_1, primals_1)
sin: "f32[4, 4][4, 1]cpu" = torch.ops.aten.sin.default(mm)
cos: "f32[4, 4][4, 1]cpu" = torch.ops.aten.cos.default(mm); mm = None
mul: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mul.Tensor(tangents_1, cos); tangents_1 = cos = None
permute: "f32[4, 4][1, 4]cpu" = torch.ops.aten.permute.default(primals_1, [1, 0])
mm_1: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mm.default(permute, mul); permute = None
permute_1: "f32[4, 4][1, 4]cpu" = torch.ops.aten.permute.default(primals_1, [1, 0]); primals_1 = None
mm_2: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mm.default(mul, permute_1); mul = permute_1 = None
# File: /home/hirsheybar/local/b/pytorch/tmp5.py:6 in f, code: return torch.matmul(x, x).sin()
add: "f32[4, 4][4, 1]cpu" = torch.ops.aten.add.Tensor(mm_2, mm_1); mm_2 = mm_1 = None
return pytree.tree_unflatten([sin, add], self._out_spec)
INFO: aot_config id: 1, fw_metadata=ViewAndMutationMeta(input_info=[InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=True, keep_input_mutations=True)], output_info=[OutputAliasInfo(output_type=<OutputType.non_alias: 1>, raw_type=<class 'torch._subclasses.functional_tensor.FunctionalTensor'>, base_idx=None, dynamic_dims=set(), requires_grad=True, functional_tensor=None)], num_intermediate_bases=0, keep_input_mutations=True, traced_tangents=[FakeTensor(..., size=(4, 4))], subclass_inp_meta=[0], subclass_fw_graph_out_meta=[0], subclass_tangent_meta=[0], is_train=True, traced_tangent_metas=None, num_symints_saved_for_bw=0, grad_enabled_mutation=None, deterministic=False, static_input_indices=[], tokens={}, indices_of_inputs_that_requires_grad_with_mutations_in_bw=[], bw_donated_idxs=None), inner_meta=ViewAndMutationMeta(input_info=[InputAliasInfo(is_leaf=True, mutates_data=False, mutates_metadata=False, mutations_hidden_from_autograd=True, mutations_under_no_grad_or_inference_mode=False, mutation_inductor_storage_resize=False, mutates_storage_metadata=False, requires_grad=True, keep_input_mutations=True)], output_info=[OutputAliasInfo(output_type=<OutputType.non_alias: 1>, raw_type=<class 'torch._subclasses.functional_tensor.FunctionalTensor'>, base_idx=None, dynamic_dims=set(), requires_grad=True, functional_tensor=None)], num_intermediate_bases=0, keep_input_mutations=True, traced_tangents=[FakeTensor(..., size=(4, 4))], subclass_inp_meta=[0], subclass_fw_graph_out_meta=[0], subclass_tangent_meta=[0], is_train=True, traced_tangent_metas=None, num_symints_saved_for_bw=0, grad_enabled_mutation=None, deterministic=False, static_input_indices=[], tokens={}, indices_of_inputs_that_requires_grad_with_mutations_in_bw=[], bw_donated_idxs=None)
INFO: TRACED GRAPH
===== Forward graph 1 =====
/home/hirsheybar/local/b/pytorch/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
def forward(self, primals_1: "f32[4, 4][4, 1]cpu"):
# File: /home/hirsheybar/local/b/pytorch/tmp5.py:6 in f, code: return torch.matmul(x, x).sin()
mm: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mm.default(primals_1, primals_1)
sin: "f32[4, 4][4, 1]cpu" = torch.ops.aten.sin.default(mm)
permute: "f32[4, 4][1, 4]cpu" = torch.ops.aten.permute.default(primals_1, [1, 0]); primals_1 = None
return (sin, mm, permute)
INFO: TRACED GRAPH
===== Backward graph 1 =====
<eval_with_key>.5 class GraphModule(torch.nn.Module):
def forward(self, mm: "f32[4, 4][4, 1]cpu", permute: "f32[4, 4][1, 4]cpu", tangents_1: "f32[4, 4][4, 1]cpu"):
# File: /home/hirsheybar/local/b/pytorch/tmp5.py:6 in f, code: return torch.matmul(x, x).sin()
cos: "f32[4, 4][4, 1]cpu" = torch.ops.aten.cos.default(mm); mm = None
mul: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mul.Tensor(tangents_1, cos); tangents_1 = cos = None
mm_1: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mm.default(permute, mul)
mm_2: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mm.default(mul, permute); mul = permute = None
# File: /home/hirsheybar/local/b/pytorch/tmp5.py:6 in f, code: return torch.matmul(x, x).sin()
add: "f32[4, 4][4, 1]cpu" = torch.ops.aten.add.Tensor(mm_2, mm_1); mm_2 = mm_1 = None
return (add,)
DEBUG: Cache miss due to new autograd node: torch::autograd::GraphRoot (NodeCall 0) with key size 39, previous key sizes=[]
DEBUG: TRACED GRAPH
===== Compiled autograd graph =====
<eval_with_key>.8 class CompiledAutograd(torch.nn.Module):
def forward(self, inputs, sizes, scalars, hooks):
# No stacktrace found for following nodes
getitem: "f32[]cpu" = inputs[0]
getitem_1: "f32[4, 4]cpu" = inputs[1]
getitem_2: "f32[4, 4]cpu" = inputs[2]
getitem_3: "f32[4, 4]cpu" = inputs[3]; inputs = None
# File: /home/hirsheybar/local/b/pytorch/torch/_dynamo/compiled_autograd.py:379 in set_node_origin, code: SumBackward0 (NodeCall 1)
expand: "f32[4, 4]cpu" = torch.ops.aten.expand.default(getitem, [4, 4]); getitem = None
# File: /home/hirsheybar/local/b/pytorch/torch/_dynamo/compiled_autograd.py:379 in set_node_origin, code: CompiledFunctionBackward (NodeCall 2)
clone: "f32[4, 4]cpu" = torch.ops.aten.clone.default(expand, memory_format = torch.contiguous_format); expand = None
cos: "f32[4, 4]cpu" = torch.ops.aten.cos.default(getitem_1); getitem_1 = None
mul: "f32[4, 4]cpu" = torch.ops.aten.mul.Tensor(clone, cos); clone = cos = None
mm: "f32[4, 4]cpu" = torch.ops.aten.mm.default(getitem_2, mul)
mm_1: "f32[4, 4]cpu" = torch.ops.aten.mm.default(mul, getitem_2); mul = getitem_2 = None
add: "f32[4, 4]cpu" = torch.ops.aten.add.Tensor(mm_1, mm); mm_1 = mm = None
# File: /home/hirsheybar/local/b/pytorch/torch/_dynamo/compiled_autograd.py:379 in set_node_origin, code: torch::autograd::AccumulateGrad (NodeCall 3)
accumulate_grad_ = torch.ops.inductor.accumulate_grad_.default(getitem_3, add); getitem_3 = add = accumulate_grad_ = None
_exec_final_callbacks_stub = torch__dynamo_external_utils__exec_final_callbacks_stub(); _exec_final_callbacks_stub = None
return []
INFO: TRACED GRAPH
===== Forward graph 2 =====
/home/hirsheybar/local/b/pytorch/torch/fx/_lazy_graph_module.py class <lambda>(torch.nn.Module):
def forward(self, arg0_1: "f32[][]cpu", arg1_1: "f32[4, 4][4, 1]cpu", arg2_1: "f32[4, 4][1, 4]cpu", arg3_1: "f32[4, 4][4, 1]cpu"):
# No stacktrace found for following nodes
expand: "f32[4, 4][0, 0]cpu" = torch.ops.aten.expand.default(arg0_1, [4, 4]); arg0_1 = None
clone: "f32[4, 4][4, 1]cpu" = torch.ops.aten.clone.default(expand, memory_format = torch.contiguous_format); expand = None
cos: "f32[4, 4][4, 1]cpu" = torch.ops.aten.cos.default(arg1_1); arg1_1 = None
mul: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mul.Tensor(clone, cos); clone = cos = None
mm: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mm.default(arg2_1, mul)
mm_1: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mm.default(mul, arg2_1); mul = arg2_1 = None
add: "f32[4, 4][4, 1]cpu" = torch.ops.aten.add.Tensor(mm_1, mm); mm_1 = mm = None
clone_1: "f32[4, 4][4, 1]cpu" = torch.ops.aten.clone.default(add); add = None
return (clone_1,)
INFO: TRACED GRAPH
===== Forward graph 3 =====
/home/hirsheybar/local/b/pytorch/torch/fx/_lazy_graph_module.py class <lambda>(torch.nn.Module):
def forward(self, arg0_1: "f32[][]cpu", arg1_1: "f32[4, 4][4, 1]cpu", arg2_1: "f32[4, 4][1, 4]cpu", arg3_1: "f32[4, 4][4, 1]cpu"):
# No stacktrace found for following nodes
expand: "f32[4, 4][0, 0]cpu" = torch.ops.aten.expand.default(arg0_1, [4, 4]); arg0_1 = None
clone: "f32[4, 4][4, 1]cpu" = torch.ops.aten.clone.default(expand, memory_format = torch.contiguous_format); expand = None
cos: "f32[4, 4][4, 1]cpu" = torch.ops.aten.cos.default(arg1_1); arg1_1 = None
mul: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mul.Tensor(clone, cos); clone = cos = None
mm: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mm.default(arg2_1, mul)
mm_1: "f32[4, 4][4, 1]cpu" = torch.ops.aten.mm.default(mul, arg2_1); mul = arg2_1 = None
add: "f32[4, 4][4, 1]cpu" = torch.ops.aten.add.Tensor(mm_1, mm); mm_1 = mm = None
clone_1: "f32[4, 4][4, 1]cpu" = torch.ops.aten.clone.default(add); add = None
return (clone_1,)
```
cc @ezyang @chauhang @penguinwu @zou3519 | triaged,oncall: pt2,module: aotdispatch,module: compiled autograd,module: pt2-dispatcher | low | Critical |
2,468,083,884 | puppeteer | [Feature]: Add Ignore SSL Warnings and Disable Web Security for FireFox browser | ### Feature description
Hello there,
The Chrome browser has many args flags, and some (not many) of them can also be used for Firefox.
If possible, please add these 2 flags from the Chrome browser to Firefox (to get the same functionality):
`--disable-web-security`
`--ignore-certificate-errors`
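For reference, this is how the flags are passed for Chromium launches today (a sketch; per this request, Firefox currently lacks equivalent support):

```ts
import puppeteer from 'puppeteer';

(async () => {
  // Works for Chromium today; this request asks for Firefox parity.
  const browser = await puppeteer.launch({
    args: ['--disable-web-security', '--ignore-certificate-errors'],
  });
  await browser.close();
})();
```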
Would appreciate it! | feature,bidi,P3 | low | Critical |
2,468,090,633 | godot | Running C# projects is slower since 4.3 | ### Tested versions
v4.3.stable.mono.official [77dcf97d8]
### System information
Godot v4.3.stable.mono - Windows 10.0.22621 - Vulkan (Forward+) - dedicated AMD Radeon R7 350 Series (Advanced Micro Devices, Inc.; 27.20.20913.2000) - AMD Ryzen 5 3600 6-Core Processor (12 Threads)
### Issue description
Opening the engine is significantly slower than in previous versions, and so is building projects. Along with that, the building process sometimes freezes and crashes, and the first few frames after building the project are always a freeze frame. I also noticed that during that freeze frame, process time spikes to around 10 ms; maybe everything is being loaded all at once?

### Steps to reproduce
Just build any project and turn on both FPS and processing monitoring.
### Minimal reproduction project (MRP)
N/A | bug,needs testing,topic:dotnet,regression,performance | low | Critical |
2,468,098,469 | deno | Deno panics when arguments are logged inside Proxy get handler | Deno version/platform:
```bash
Platform: macos aarch64
Version: 1.45.5
```
Output of `RUST_BACKTRACE=full deno run repro.js`:
```js
thread 'main' panicked at /private/tmp/deno-20240804-11972-reb6i7/deno_core/core/error.rs:1179:69:
called `Option::unwrap()` on a `None` value
stack backtrace:
0: 0x1018eb8cc - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::hf39803c4c3b51aba
1: 0x100b3c088 - core::fmt::write::hbf8e5b0458d484e6
2: 0x1018e55f8 - std::io::Write::write_fmt::h9d86762e315790f6
3: 0x1018eb728 - std::sys_common::backtrace::print::h69d5458f6b7dcb4e
4: 0x1018e25bc - std::panicking::default_hook::{{closure}}::h2ef3755a81b71a1e
5: 0x1018e2288 - std::panicking::default_hook::hd6154982520b4b4e
6: 0x100aa41ac - deno::setup_panic_hook::{{closure}}::h84c297acab1ea025
7: 0x1018e3190 - std::panicking::rust_panic_with_hook::h6c5a53a893b2d04c
8: 0x1018ec3b4 - std::panicking::begin_panic_handler::{{closure}}::hf68643424cf09aa5
9: 0x1018ec348 - std::sys_common::backtrace::__rust_end_short_backtrace::hf6eb2a8a834e3b5e
10: 0x1018e2960 - _rust_begin_unwind
11: 0x100b46688 - core::panicking::panic_fmt::h4444480175e63da9
12: 0x100b46950 - core::panicking::panic::h02dd2f23cf0e0420
13: 0x100b415f8 - core::option::unwrap_failed::h18a31a45446c2f45
14: 0x100c59338 - <extern "C" fn(A0,A1,A2) .> R as v8::support::CFnFrom<F>>::mapping::c_fn::h9b99a1432d854570
```
Reproduction:
```js
const proxy = new Proxy({}, {
  get(target, prop, receiver) {
    console.log('get', { target, prop, receiver });
    return 0;
  },
});

console.log(proxy);
```
Possibly related: #24980, #12926 | bug | low | Critical |
2,468,106,226 | pytorch | AOTInductor unit test issue tracker | # test_aot_inductor failures
- [x] #122048
- [x] #122050
- [x] #122978
- [ ] #122980
- [ ] #122983
- [x] #122984
- [x] #122986
- [x] #122989
- [ ] #123691
- [ ] #122990
- [ ] #122991
- [x] #122051
- [x] #121838
- [x] #123210
cc @ezyang @chauhang @penguinwu @chenyang78 | triaged,oncall: pt2,module: aotinductor | low | Critical |
2,468,109,147 | langchain | Error while storing Embeddings to Chroma db | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import Chroma
from langchain.prompts import ChatPromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_huggingface import HuggingFaceEmbeddings
from langchain_community.document_loaders import TextLoader
from chromadb import PersistentClient
import os

CHROMA_PATH = "Chroma"

# get OpenAI Embedding model
text_embedding_model_name = "sentence-transformers/all-MiniLM-L6-v2"
collection_name = "test"

embeddings_model = HuggingFaceEmbeddings(model_name=text_embedding_model_name, model_kwargs={'device': 'cpu'})
client = PersistentClient(path=CHROMA_PATH)
print(client.get_max_batch_size())

if not collection_name in [c.name for c in client.list_collections()]:
    DOC_PATHS = ["Novel.txt"]
    loaders = [TextLoader(path) for path in DOC_PATHS]
    pages = []
    for loader in loaders:
        pages.extend(loader.load())

    # split the doc into smaller chunks i.e. chunk_size=500
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=20)
    chunks = text_splitter.split_documents(pages)

    Chroma.from_documents(chunks,
                          embeddings_model,
                          collection_name=collection_name,
                          collection_metadata={"dimensionality": 384},
                          persist_directory=CHROMA_PATH,
                          client=client)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "D:\github\llm\app.py", line 32, in <module>
db_chroma = Chroma.from_documents(chunks,
File "D:\github\llm\venv\lib\site-packages\langchain_community\vectorstores\chroma.py", line 878, in from_documents
return cls.from_texts(
File "D:\github\llm\venv\lib\site-packages\langchain_community\vectorstores\chroma.py", line 842, in from_texts
chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
File "D:\github\llm\venv\lib\site-packages\langchain_community\vectorstores\chroma.py", line 313, in add_texts
raise e
File "D:\github\llm\venv\lib\site-packages\langchain_community\vectorstores\chroma.py", line 299, in add_texts
self._collection.upsert(
File "D:\github\llm\venv\lib\site-packages\chromadb\api\models\Collection.py", line 300, in upsert
self._client._upsert(
File "D:\github\llm\venv\lib\site-packages\chromadb\telemetry\opentelemetry\__init__.py", line 146, in wrapper
return f(*args, **kwargs)
File "D:\github\llm\venv\lib\site-packages\chromadb\api\segment.py", line 429, in _upsert
validate_batch(
File "D:\github\llm\venv\lib\site-packages\chromadb\api\types.py", line 541, in validate_batch
raise ValueError(
ValueError: Batch size 10337 exceeds maximum batch size 5461
### Description
Embeddings should be stored in the Chroma db in batches.
**I was able to fix this issue with a little tweak in the following file:**
venv\Lib\site-packages\langchain_community\vectorstores\chroma.py
**line numbers**: 825, 826, 827
**current**

    if hasattr(
        chroma_collection._client, "max_batch_size"
    ):  # for Chroma 0.4.10 and above

**changed lines**

    if hasattr(
        chroma_collection._client, "get_max_batch_size"
    ):  # for Chroma 0.4.10 and above
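Until that change lands, a caller-side workaround is to chunk the documents below the client's limit (a sketch reusing the names from the example above):

```python
from itertools import islice

def batched(items, n):
    """Yield successive lists of at most n items."""
    it = iter(items)
    while batch := list(islice(it, n)):
        yield batch

# Upsert in batches no larger than what the Chroma client allows.
for batch in batched(chunks, client.get_max_batch_size()):
    Chroma.from_documents(batch,
                          embeddings_model,
                          collection_name=collection_name,
                          persist_directory=CHROMA_PATH,
                          client=client)
```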
### System Info
plateform: windows
python version: 3.10.11
langchain version: 0.2.14 | Ɑ: vector store,🤖:bug,🔌: chroma | low | Critical |
2,468,111,137 | godot | blend file does import but will not render materials/textures correctly | ### Tested versions
Reproducible on macOS 14.5 (23F79), Godot stable v4.3
### System information
macOS 14.5 (23F79), Godot stable v4.3
### Issue description
When I drag a blend file into Godot, it imports correctly.
However, the materials or textures are not rendered correctly.


### Steps to reproduce
Simply drag and drop a blend file into Godot.
### Minimal reproduction project (MRP)
https://github.com/drewpotter/testgame | needs testing,topic:import,topic:3d | low | Minor |
2,468,130,144 | godot | High Quality VRAM Compressed textures crashes game on OpenGL with ANGLE on Windows | ### Tested versions
Tested on: Godot **4.3** _Release_ and Godot **4.2.2** _Release_
The project was not tested on godot 4.1 because the angle compatibility on windows was introduced on version 4.2
### System information
Godot v4.3.stable - Windows 11 - GLES3 (Compatibility) - NVIDIA GeForce RTX 2050 (NVIDIA; 32.0.15.6070) - 11th Gen Intel(R) Core(TM) i5-11400H @ 2.70GHz (12 Threads)
### Issue description
If your project uses any compressed texture with the high quality setting enabled, and you use **OpenGL** with **ANGLE** on **Windows**, the texture will not load and the game will crash.
This issue persists with both texture formats (S3TC BPTC and ETC2 ASTC).
This was tested both on recent computers (with an RTX 2050) and on second-generation Intel CPUs (2010-2011).
### Steps to reproduce
Create a new project using the Compatibility renderer, then import an image, choose the "compressed texture" import option, and enable the high quality option.
After that, run the project using ANGLE on Windows.
(I recommend exporting the project first so the editor doesn't crash.)
### Minimal reproduction project (MRP)
**THE PROJECTS ARE SET TO USE THE NORMAL OPENGL MODE**
To see the bug in action, set ANGLE mode manually. _(Reminder that this bug was tested on Windows.)_
[Godot 4.2.2 Opengl3 ANGLE PROJECT.zip](https://github.com/user-attachments/files/16626069/Godot.4.2.2.Opengl3.ANGLE.PROJECT.zip)
[Godot 4.3 Opengl Angle - Project.zip](https://github.com/user-attachments/files/16626070/Godot.4.3.Opengl.Angle.-.Project.zip) | bug,platform:windows,topic:rendering,crash | low | Critical |
2,468,198,880 | PowerToys | File Explorer add-ons Preview Unknown File Type as Text | ### Description of the new feature / enhancement
Add a switch to the File Explorer add-ons that allows ALL unknown file types to be previewed as text.
### Scenario when this would be used?
When viewing files with unknown file extensions (not associated with an app) Preview the selected unknown file as text.
I frequently find myself in a folder where there are files I would like to view and see what is in them and must open them using either notepad, vscode, or notepad++ to see what is in them.
A nice feature for me would be the ability to toggle between Text and Hex Dump on the fly.
### Supporting information
_No response_ | Idea-Enhancement,Product-File Explorer,Needs-Triage | low | Major |
2,468,203,003 | godot | GDScript compilation error with no description in built-in scripts | ### Tested versions
- Reproducible in 4.3 stable
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1660 (NVIDIA; 32.0.15.6081) - AMD Ryzen 7 2700X Eight-Core Processor (16 Threads)
### Issue description
I'm yet to figure out the context and describe the issue precisely. I've opened my project (previously on 4.2.2) in newly released 4.3 and that gave me a fuzzy compilation error:

Also, this seems to cause this error next:
```
Invalid call. Nonexistent function 'new' in base 'GDScript'
```
Also, the editor crashes on exit with no helpful information:

I tried to remove `.godot/` folder, but that didn't help.
Context about project setup (something may be irrelevant to the issue, but I don't know what might be actually useful):
1. I have a custom graph-based (`GraphNode`, `GraphEdit`) plugin for the editor. It also allows to configure custom GUI for particular data types via calling a static `register_gui()` function (which puts that gui in static Dictionary)
2. `configure_ineditor_plugins.gd` is an autoload where all those `register_gui()` are invoked
- This autoload is automatically added by another plugin called `setup` in its `_enable_plugin()` method. This plugin also provides `EditorExportPlugin`, but I don't believe this matters.
3. The class that fails to load on the screenshot is a `@tool` script (basic GUI implementation provided by plugin) from which other `@tool` scripts (custom GUIs in `editor/` folder, project-specific) inherit. This offending class inherits `GraphNode` which is experimental, but there are no visible errors in its code.
4. The error comes from `configure_ineditor_plugins.gd`, but is caused by the compilation error.
### Steps to reproduce
Don't have an MRP yet. This just happens when opening the project.
### Minimal reproduction project (MRP)
Don't have one yet, sorry | bug,topic:gdscript,needs testing,crash,regression | low | Critical |
2,468,227,861 | TypeScript | TypeScript can't infer type of default parameters | ### 🔎 Search Terms
infer default parameters
function wrapper
function factory
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ
- I tried down to version 4.0.3 (since in the earlier version `Generator` type is not generic)
### ⏯ Playground Link
https://www.typescriptlang.org/play/?#code/CYUwxgNghgTiAEAzArgOzAFwJYHtVNQBEQYsA3EAHgEF4QAPDEVYAZ3jQGtUcB3VANoBdADTwASgD4AFACh4CggC540gHQbYAc1YrqASngBeSfADizElAw4YlLj35ipI2fpUO++APQAqX-AAtjhgnCDA8HAYyDD4vt4A3LKyiEQk5CDSKOjYePC+0lDG8AAMYgBGxQDkVYYA3vLwYHisOBAgahA4WtINivBQrvDe3vAAegD8jQrl8iPjUwoAvvpJK0lAA
### 💻 Code
```ts
declare function fnDerive<A extends unknown[], R>(
  fn: (...args: A) => Generator<unknown, R>,
): unknown /** mocked return */;

fnDerive(function *(a = 0, b = '') {
  console.log({
    a,
    // ^?
    b
    // ^?
  });
});
```
### 🙁 Actual behavior
The type of `a` and `b` is `unknown`.
<img width="480" alt="image" src="https://github.com/user-attachments/assets/a509fbe0-65c9-4e3d-855d-3a5cdd9729c0">
### 🙂 Expected behavior
The type of `a` and `b` should be `number` and `string`, respectively.
### Additional information about the issue
I was working with a library [`gensync`](https://www.npmjs.com/package/gensync) when I hit this issue, and I extracted the minimum reproduction out of it.
Note that if I don't use default parameters, TypeScript can infer the `a` and `b` types correctly:
```ts
declare function fnDerive<A extends unknown[], R>(
  fn: (...args: A) => Generator<unknown, R>,
): unknown /** mocked return */;

fnDerive(function *(a?: number | undefined, b?: string | undefined) {
  console.log({
    a,
    // ^?
    b
    // ^?
  });
  a ??= 0;
  b ??= '';
  console.log({
    a,
    // ^?
    b
    // ^?
  });
});
```
<img width="653" alt="image" src="https://github.com/user-attachments/assets/be8b0785-0e9b-44b5-8559-40a03c670368">
| Help Wanted,Possible Improvement | low | Minor |
2,468,244,089 | pytorch | `CELU()`'s `alpha` argument with `int`, `complex` or `bool` and `inplace` argument with `int`, `complex` and `float` work against the doc | ### 🐛 Describe the bug
[The doc](https://pytorch.org/docs/stable/generated/torch.nn.CELU.html) of `CELU()` says the types of `alpha` and `inplace` argument are `float` and `bool` respectively as shown below:
> - alpha ([float](https://docs.python.org/3/library/functions.html#float)) – the α value for the CELU formulation. Default: 1.0
> - inplace ([bool](https://docs.python.org/3/library/functions.html#bool)) – can optionally do the operation in-place. Default: False
But the `alpha` argument with `int`, `complex`, or `bool` values, and the `inplace` argument with `int`, `complex`, or `float` values, work contrary to [the doc](https://pytorch.org/docs/stable/generated/torch.nn.CELU.html), as shown below:
```python
import torch
from torch import nn
my_tensor = torch.tensor([-1., 0., 1.])
celu = nn.CELU(alpha=1, inplace=1)
celu(input=my_tensor)
# tensor([-0.6321, 0.0000, 1.0000])
my_tensor = torch.tensor([-1., 0., 1.])
celu = nn.CELU(alpha=1.+0.j, inplace=1.+0.j)
celu(input=my_tensor)
# tensor([-0.6321, 0.0000, 1.0000])
my_tensor = torch.tensor([-1., 0., 1.])
celu = nn.CELU(alpha=True, inplace=1.)
celu(input=my_tensor)
# tensor([-0.6321, 0.0000, 1.0000])
```
### Versions
```python
import torch
torch.__version__ # 2.3.1+cu121
```
cc @svekars @brycebortree @albanD | module: docs,triaged,module: python frontend | low | Critical |
2,468,267,250 | langchain | Together LLM (Completions) generate() function's output is missing generation_info and llm_output | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
from langchain_together import Together
from llama_recipes.inference.prompt_format_utils import (
    build_default_prompt,
    create_conversation,
    LlamaGuardVersion,
)  # LlamaGuard 3 Prompt from https://github.com/meta-llama/llama-recipes/blob/main/src/llama_recipes/inference/prompt_format_utils.py
from pydantic.v1.types import SecretStr

t = Together(
    model="meta-llama/Meta-Llama-Guard-3-8B",
    together_api_key=SecretStr("<=== API Key goes here ===>"),
    max_tokens=35,
    logprobs=1,
    temperature=0
)

# Expected to return a LLMResult object with Generations that have logprobs, and llm_output with usage
res = t.generate([build_default_prompt("User", create_conversation(["<Sample user prompt>"]), LlamaGuardVersion["LLAMA_GUARD_3"])])
print(res.json())  # {"generations": [[{"text": "safe", "generation_info": null, "type": "Generation"}]], "llm_output": null, "run": [{"run_id": "5b93a422-c74a-41e9-af5e-a7958884a9a9"}]}
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm trying to use the `langchain_together` library's `Together` class to call the [Together.ai LLM completions endpoint](https://docs.together.ai/reference/completions), expecting to get an LLMResult with `logprobs` inside `generation_info` & `usage` in `llm_output`.
Instead, the following incomplete LLMResult is given as output:
```json
{
  "generations": [
    [
      {
        "text": "safe",
        "generation_info": null,
        "type": "Generation"
      }
    ]
  ],
  "llm_output": null,
  "run": [
    {
      "run_id": "5b93a422-c74a-41e9-af5e-a7958884a9a9"
    }
  ]
}
```
where `generation_info` is None, and `llm_output` is None.
This should be fixed by updating the `langchain_together.Together()` class so that it also has the necessary functions to return LLMResults with `generation_info` & `llm_output` defined when the response includes fields that are to be put in them. The expected output is:
```json
{
  "generations": [
    [
      {
        "text": "safe",
        "generation_info": {
          "finish_reason": "eos",
          "logprobs": {
            "tokens": [
              "safe",
              "<|eot_id|>"
            ],
            "token_logprobs": [
              -4.6014786e-05,
              -0.008911133
            ],
            "token_ids": [
              19193,
              128009
            ]
          }
        },
        "type": "Generation"
      }
    ]
  ],
  "llm_output": {
    "token_usage": {
      "total_tokens": 219,
      "completion_tokens": 2,
      "prompt_tokens": 217
    },
    "model_name": "meta-llama/Meta-Llama-Guard-3-8B"
  },
  "run": [
    {
      "run_id": "5b93a422-c74a-41e9-af5e-a7958884a9a9"
    }
  ]
}
```
---
This could technically be avoided by using the `langchain_openai` library's `langchain_openai.OpenAI`, but the generate method of this class is **no longer compatible with the old OpenAI Completions-style API that Together.ai uses**. Mainly, the underlying `OpenAIBase._generate` method calls the underlying OpenAI completions client with a `list[str]` of prompts, which **Together.ai doesn't support**.
Just in case someone finds this issue looking for a fix, I have a workaround for the workaround. The problem with the `langchain_openai` workaround can be bodged by overriding the `openai.client.completions.create` method after initializing the LLM class, using the `together` python library's equivalent method, and removing incompatible arguments to create, which the API doesn't support. The following is a quick example for doing this:
```python
import together
from langchain_openai import OpenAI
from pydantic.v1.types import SecretStr
from llama_recipes.inference.prompt_format_utils import (
    build_default_prompt,
    create_conversation,
    LlamaGuardVersion,
)  # LlamaGuard 3 Prompt from https://github.com/meta-llama/llama-recipes/blob/main/src/llama_recipes/inference/prompt_format_utils.py

together_client = together.Together(api_key=SETTINGS.together_api_key)

llm = OpenAI(
    model="meta-llama/Meta-Llama-Guard-3-8B",
    api_key=SecretStr("<=== API Key goes here ===>"),
    base_url="https://api.together.xyz/v1",  # This may be redundant as we override the create class method anyways
    max_tokens=200,
    logprobs=1,
    temperature=0
)


def overridden_create(prompt: list[str], **kwargs):
    # Overridden openai.client.completions.create method to use the Together client, as Together doesn't support certain inputs (e.g. seed) and lists of prompts
    together_allowed_keys = ["model", "prompt", "max_tokens", "stream", "stop", "temperature", "top_p", "top_k", "repetition_penalty", "logprobs", "echo", "n", "safety_model"]
    kwargs = {k: v for k, v in kwargs.items() if k in together_allowed_keys}
    return together_client.completions.create(prompt=prompt[0], **kwargs)


llm.client.create = overridden_create

llm_result = llm.generate([build_default_prompt("User", create_conversation(["<Sample user prompt>"]), LlamaGuardVersion["LLAMA_GUARD_3"])])
print(llm_result.json())  # {"generations": [[{"text": "safe", "generation_info": {"finish_reason": "eos", "logprobs": {"tokens": ["safe", "<|eot_id|>"], "token_logprobs": [-4.6014786e-05, -0.008911133], "token_ids": [19193, 128009]}}, "type": "Generation"}]], "llm_output": {"token_usage": {"total_tokens": 219, "completion_tokens": 2, "prompt_tokens": 217}, "model_name": "meta-llama/Meta-Llama-Guard-3-8B"}, "run": [{"run_id": "f015adc7-7558-4251-9fe6-9d11a646c173"}]}

generation = llm_result.generations[0][0]
logprobs = generation.generation_info["logprobs"]  # Wow, it works!
token_usage = llm_result.llm_output["token_usage"]  # Wow, we also get usage!
```
### System Info
langchain==0.2.14
langchain-core==0.2.32
langchain-openai==0.1.21
langchain-text-splitters==0.2.2
langchain-together==0.1.5
mac (Macbook Pro M1 16GB, 2021), macOS Sonoma 14.5 (23F79)
Python 3.9.19 | 🤖:bug | low | Critical |
2,468,274,332 | godot | Wayland feels sluggish with Vulkan compared to OpenGL | ### Tested versions
4.3
### System information
Godot v4.3.stable - Nobara Linux 40 (GNOME Edition) - Wayland - Vulkan (Mobile) - dedicated NVIDIA GeForce RTX 3050 Ti Laptop GPU - 11th Gen Intel(R) Core(TM) i7-11370H @ 3.30GHz (8 Threads)
### Issue description
While Wayland feels fantastic in general for me on Linux, in Godot something has felt "off" ever since I've begun testing it. It's not _bad_, but it's not as smooth as the rest of my Gnome desktop experience.
Now I have some stats to back that up:
**Prefer Wayland Off**
```
eobet@surface:~/Downloads$ ./Godot_v4.3-stable_linux.x86_64 --print-fps flag: godot --print-fps ./shadertest2/project.godot
Godot Engine v4.3.stable.official.77dcf97d8 - https://godotengine.org
Vulkan 1.3.280 - Forward Mobile - Using Device #1: NVIDIA - NVIDIA GeForce RTX 3050 Ti Laptop GPU
Requested V-Sync mode: Enabled - FPS will likely be capped to the monitor refresh rate.
Editor FPS: 145 (6.89 mspf)
Editor FPS: 145 (6.89 mspf)
Editor FPS: 140 (7.14 mspf)
Editor FPS: 145 (6.89 mspf)
Editor FPS: 144 (6.94 mspf)
Editor FPS: 143 (6.99 mspf)
Editor FPS: 127 (7.87 mspf)
Editor FPS: 115 (8.69 mspf)
Editor FPS: 113 (8.84 mspf)
Editor FPS: 142 (7.04 mspf)
Editor FPS: 135 (7.40 mspf)
Editor FPS: 101 (9.90 mspf)
Editor FPS: 104 (9.61 mspf)
Editor FPS: 125 (8.00 mspf)
Editor FPS: 145 (6.89 mspf)
Editor FPS: 145 (6.89 mspf)
Editor FPS: 143 (6.99 mspf)
Editor FPS: 145 (6.89 mspf)
Editor FPS: 140 (7.14 mspf)
Editor FPS: 118 (8.47 mspf)
Editor FPS: 115 (8.69 mspf)
Editor FPS: 134 (7.46 mspf)
Editor FPS: 145 (6.89 mspf)
Editor FPS: 138 (7.24 mspf)
Editor FPS: 142 (7.04 mspf)
Editor FPS: 128 (7.81 mspf)
Editor FPS: 97 (10.30 mspf)
Editor FPS: 105 (9.52 mspf)
Editor FPS: 109 (9.17 mspf)
Editor FPS: 139 (7.19 mspf)
Editor FPS: 145 (6.89 mspf)
Editor FPS: 56 (17.85 mspf)
Editor FPS: 126 (7.93 mspf)
Editor FPS: 145 (6.89 mspf)
Editor FPS: 143 (6.99 mspf)
Editor FPS: 145 (6.89 mspf)
Editor FPS: 145 (6.89 mspf)
Editor FPS: 145 (6.89 mspf)
Editor FPS: 143 (6.99 mspf)
```
**Prefer Wayland On**
```
eobet@surface:~/Downloads$ ./Godot_v4.3-stable_linux.x86_64 --print-fps flag: godot --print-fps ./shadertest2/project.godot
Godot Engine v4.3.stable.official.77dcf97d8 - https://godotengine.org
WARNING: Can't obtain the XDG decoration manager. Libdecor will be used for drawing CSDs, if available.
at: init (platform/linuxbsd/wayland/wayland_thread.cpp:3713)
Vulkan 1.3.280 - Forward Mobile - Using Device #1: NVIDIA - NVIDIA GeForce RTX 3050 Ti Laptop GPU
Requested V-Sync mode: Enabled - FPS will likely be capped to the monitor refresh rate.
Editor FPS: 46 (21.73 mspf)
Editor FPS: 110 (9.09 mspf)
Editor FPS: 108 (9.25 mspf)
Editor FPS: 35 (28.57 mspf)
Editor FPS: 51 (19.60 mspf)
Editor FPS: 50 (20.00 mspf)
Editor FPS: 62 (16.12 mspf)
Editor FPS: 101 (9.90 mspf)
Editor FPS: 69 (14.49 mspf)
Editor FPS: 39 (25.64 mspf)
Editor FPS: 34 (29.41 mspf)
Editor FPS: 52 (19.23 mspf)
Editor FPS: 94 (10.63 mspf)
Editor FPS: 117 (8.54 mspf)
Editor FPS: 63 (15.87 mspf)
Editor FPS: 10 (100.00 mspf)
Editor FPS: 10 (100.00 mspf)
Editor FPS: 15 (66.66 mspf)
Editor FPS: 101 (9.90 mspf)
Editor FPS: 80 (12.50 mspf)
Editor FPS: 99 (10.10 mspf)
Editor FPS: 110 (9.09 mspf)
Editor FPS: 96 (10.41 mspf)
Editor FPS: 111 (9.00 mspf)
Editor FPS: 83 (12.04 mspf)
Editor FPS: 106 (9.43 mspf)
Editor FPS: 62 (16.12 mspf)
Editor FPS: 83 (12.04 mspf)
Editor FPS: 64 (15.62 mspf)
Editor FPS: 70 (14.28 mspf)
Editor FPS: 69 (14.49 mspf)
Editor FPS: 69 (14.49 mspf)
Editor FPS: 83 (12.04 mspf)
Editor FPS: 102 (9.80 mspf)
Editor FPS: 77 (12.98 mspf)
Editor FPS: 80 (12.50 mspf)
Editor FPS: 75 (13.33 mspf)
Editor FPS: 94 (10.63 mspf)
Editor FPS: 96 (10.41 mspf)
Editor FPS: 100 (10.00 mspf)
Editor FPS: 85 (11.76 mspf)
Editor FPS: 121 (8.26 mspf)
Editor FPS: 97 (10.30 mspf)
Editor FPS: 110 (9.09 mspf)
Editor FPS: 104 (9.61 mspf)
Editor FPS: 90 (11.11 mspf)
Editor FPS: 88 (11.36 mspf)
Editor FPS: 83 (12.04 mspf)
Editor FPS: 102 (9.80 mspf)
Editor FPS: 112 (8.92 mspf)
Editor FPS: 103 (9.70 mspf)
Editor FPS: 98 (10.20 mspf)
Editor FPS: 94 (10.63 mspf)
Editor FPS: 111 (9.00 mspf)
Editor FPS: 98 (10.20 mspf)
Editor FPS: 101 (9.90 mspf)
```
Also, I thought I had a 120hz screen, so it's weird to see it go higher... it's a Microsoft Surface Studio laptop so it's one of those which switch between an Intel GPU and an Nvidia GPU... (I wouldn't mind keeping the Editor at 60 to save on resources, btw).
### Steps to reproduce
Run Godot with the flags from [here](https://github.com/godotengine/godot/issues/88346#issuecomment-2289728374), and test with "prefer wayland" on and off (with restarts in between).
### Minimal reproduction project (MRP)
N/A | bug,platform:linuxbsd,topic:porting,performance | low | Major |
2,468,302,621 | tauri | [bug] Using `libloading` to load a `Value` object in Tauri fails with (exit code: 0xc0000005, STATUS_ACCESS_VIOLATION) | ### Describe the bug
This pic shows using `libloading` without the tauri crate in Cargo.toml:

This pic shows using `libloading` with the tauri crate:

I cannot pass a `Value` object through `libloading`.
### Reproduction
As shown in the two pics above, an error occurs when using `libloading` to send or receive a `Value` object to or from a Rust-style dynamic library; a sketch of the call shape is given below.
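A hypothetical reconstruction of the failing call (only screenshots were provided, so `plugin.dll` and `handle` are made-up names):

```rust
use libloading::{Library, Symbol};
use serde_json::{json, Value};

fn main() {
    unsafe {
        // Hypothetical plugin library and symbol name.
        let lib = Library::new("plugin.dll").expect("load dylib");
        // Passing serde_json::Value across a Rust-ABI boundary like this is
        // only sound when both sides are built with the same compiler/crates.
        let handle: Symbol<fn(Value) -> Value> = lib.get(b"handle").expect("find symbol");
        let before = json!({"name": "John Doe", "age": 30});
        println!("before: {before}");
        let after = handle(before);
        println!("after: {after}");
    }
}
```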
### Expected behavior
The console outputs of pic 1 and pic 2 should be the same.
### Full `tauri info` output
```text
[✔] Environment
    - OS: Windows 10.0.22631 X64
    ✔ WebView2: 127.0.2651.98
    ✔ MSVC: Visual Studio Community 2022
    ✔ rustc: 1.80.1 (3f5fd8dd4 2024-08-06)
    ✔ cargo: 1.80.1 (376290515 2024-07-16)
    ✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
    ✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
    - node: 20.11.1
    - pnpm: 9.1.0
    - yarn: 1.22.22
    - npm: 10.2.4

[-] Packages
    - tauri [RUST]: 1.7.1
    - tauri-build [RUST]: 1.5.3
    - wry [RUST]: 0.24.10
    - tao [RUST]: 0.16.9
    - @tauri-apps/api [NPM]: 1.6.0
    - @tauri-apps/cli [NPM]: 1.6.0

[-] App
    - build-type: bundle
    - CSP: unset
    - distDir: ../dist
    - devPath: http://localhost:1420/
    - framework: Vue.js
    - bundler: Vite
```
### Stack trace
```text
[f72d1337] Created instance!
before: Object {"name": String("John Doe"), "age": Number(30), "phones": Array [String("555-555-5555"), String("555-555-5556")]}
error: process didn't exit successfully: `D:\code\plugin-system-example-master\target\debug\dyplugin.exe` (exit code: 0xc0000005, STATUS_ACCESS_VIOLATION)
```
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,468,333,157 | pytorch | compiled autograd + dynamic shapes fails with constraint violation | example code:
```python
import torch

@torch.compile(backend="aot_eager_decomp_partition")
def f(x):
    return x.sin().sin()

with torch._dynamo.utils.maybe_enable_compiled_autograd(True):
    x = torch.randn(2, 3, requires_grad=True)
    torch._dynamo.mark_dynamic(x, 0)
    out = f(x)
    out.sum().backward()

    x = torch.randn(4, 3, requires_grad=True)
    torch._dynamo.mark_dynamic(x, 0)
    breakpoint()
    out = f(x)
    out.sum().backward()
```
This fails with:
```
torch.fx.experimental.symbolic_shapes.ConstraintViolationError: Constraints violated (L['inputs'][1].size()[0])! For more information, run with TORCH_LOGS="+dynamic".
- Not all values of RelaxedUnspecConstraint(L['inputs'][1].size()[0]) are valid because L['inputs'][1].size()[0] was inferred to be a constant (2).
```
The proximal cause is that:
(1) compiled autograd is passing in a FakeTensor tangent in the backward that has only static shapes, when its shape is supposed to be dynamic: https://github.com/pytorch/pytorch/blob/main/torch/_functorch/_aot_autograd/runtime_wrappers.py#L1709
(2) when we trace the backward graph from AOTDispatcher, we do some compute like `aten.mul(activation, tangent)`. The activation has a (`s0, 3`) size, while the tangent has static shape `(2, 3)`, so we infer that `s0 == 2` and incorrectly specialize the shape.
I'm not entirely sure how compiled autograd figures out that it should be fakeifying tensors with dynamic or static shape, but maybe we need to properly plumb this information?
cc @ezyang @chauhang @penguinwu | triaged,oncall: pt2,module: dynamic shapes,module: compiled autograd | low | Critical |
2,468,344,687 | excalidraw | Google won't work | It says it's whitelisted; can someone fix this now? Thank you for reading. | Embeddable | low | Minor |
2,468,347,545 | godot | Unable to load Array of custom classes directly from load/preload to statically typed variable | ### Tested versions
- Reproducible in v4.3.stable.official [77dcf97d8]
- Not present in v4.2.stable.official [46dc27791]
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 (NVIDIA; 31.0.15.3742) - AMD Ryzen 5 3600 6-Core Processor (12 Threads)
### Issue description
In 4.2 and earlier, one could load arrays of custom object types directly from a loaded resource file in one line. E.g. ```var myvar: Array[SomeResource] = preload("somefile.tres").someVariable```. Now, it gives the following error message: "Cannot assign a value of type Array[res://SomeResource.gd] to variable "myvar" with specified type Array[SomeResource]."
No error is brought up and code runs as usual if ```myvar``` is not statically typed. The error is also avoided if split up into two lines, e.g.
```gdscript
var myvar = preload("somefile.tres")
var yourvar: Array[SomeResource] = myvar.someVariable
```
### Steps to reproduce
In the minimal reproduction project, the ```node.gd``` file will display the error. It can be reproduced by
1. Creating a custom class
2. Creating a resource with an array of that class as a variable
3. Trying to load that data into a statically typed variable.
### Minimal reproduction project (MRP)
[Bug-Reporting.zip](https://github.com/user-attachments/files/16627056/Bug-Reporting.zip)
_Bugsquad edit:_ Fix formatting. | bug,topic:gdscript,confirmed,regression | low | Critical |
2,468,420,812 | vscode | conpty process blocked update on Windows | I attempted to update Insiders today using the usual "apply update and restart" action and ran into the following error. It looks like some console-related processes blocked update. No VS Code windows were open.

The command line of the process doing the blocking was:
```
"c:\Users\conno\AppData\Local\Programs\Microsoft VS Code Insiders\resources\app\node_modules.asar.unpacked\node-pty\build\Release\conpty\OpenConsole.exe" --headless --inheritcursor --width 80 --height 30 --signal 0x638 --server 0x69c
```
I am using the new `windowsUseConptyDll` setting, if that matters. | bug,install-update,windows,confirmed,terminal-conpty,papercut :drop_of_blood:,terminal-process | low | Critical |
2,468,421,495 | yt-dlp | Support ImageMagick for thumbnail post processing | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a feature unrelated to a specific site
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
Kinda niche issue: thumbnails generated using `--convert-thumbnail jpg` do not play nicely with Rockbox and display in black and white.
I'm wondering if it's possible to add ImageMagick support for handling the thumbnail post-processing before embedding; simply running `convert` on the thumbnail from jpg to jpg makes it work (see the sketch below).
ImageMagick could also be used to handle cropping and other things currently done with ffmpeg, like in this thread: https://github.com/yt-dlp/yt-dlp/issues/429
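For reference, a rough sketch of the manual jpg-to-jpg workaround mentioned above (my own illustration, not part of yt-dlp; the filename is a placeholder, and `convert` is the ImageMagick 6 binary name — v7 uses `magick`):

```python
import subprocess

# Re-encode the JPEG in place with ImageMagick; this is the jpg-to-jpg
# conversion that reportedly makes the embedded thumbnail display correctly
# on Rockbox. "thumbnail.jpg" is a hypothetical filename.
subprocess.run(["convert", "thumbnail.jpg", "thumbnail.jpg"], check=True)
```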
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```
yt-dlp https://www.youtube.com/watch?v=YHDfpBwxeVU --convert-thumbnail jpg --embed-thumbnail -f bestaudio -x --audio-format mp3 --audio-quality 320k --write-thumbnail
[youtube] Extracting URL: https://www.youtube.com/watch?v=YHDfpBwxeVU
[youtube] YHDfpBwxeVU: Downloading webpage
[youtube] YHDfpBwxeVU: Downloading ios player API JSON
[youtube] YHDfpBwxeVU: Downloading android player API JSON
[youtube] YHDfpBwxeVU: Downloading player 410a4f15
WARNING: [youtube] YHDfpBwxeVU: nsig extraction failed: You may experience throttling for some formats
n = kqh9ZIBLVf5XJK1kn ; player = https://www.youtube.com/s/player/410a4f15/player_ias.vflset/en_US/base.js
WARNING: [youtube] YHDfpBwxeVU: nsig extraction failed: You may experience throttling for some formats
n = RC0lxQxGFQVnFZIw6 ; player = https://www.youtube.com/s/player/410a4f15/player_ias.vflset/en_US/base.js
[youtube] YHDfpBwxeVU: Downloading m3u8 information
[info] YHDfpBwxeVU: Downloading 1 format(s): 140
Deleting existing file 10 Things I hate about you - Nightcore Version [YHDfpBwxeVU].webp
[info] Downloading video thumbnail 41 ...
[info] Writing video thumbnail 41 to: 10 Things I hate about you - Nightcore Version [YHDfpBwxeVU].webp
[ThumbnailsConvertor] Converting thumbnail "10 Things I hate about you - Nightcore Version [YHDfpBwxeVU].webp" to jpg
Deleting original file 10 Things I hate about you - Nightcore Version [YHDfpBwxeVU].webp (pass -k to keep)
[download] 10 Things I hate about you - Nightcore Version [YHDfpBwxeVU].mp3 has already been downloaded
[ExtractAudio] Not converting audio 10 Things I hate about you - Nightcore Version [YHDfpBwxeVU].mp3; file is already in target format mp3
[EmbedThumbnail] ffmpeg: Adding thumbnail to "10 Things I hate about you - Nightcore Version [YHDfpBwxeVU].mp3"
``` | enhancement,triage,core:post-processor | low | Critical |
2,468,424,117 | go | runtime: confusing panic on parallel calls to yield function | ### Go version
go1.23.0-windows-amd64
### Output of `go env` in your module/workspace:
```shell
set GO111MODULE=on
set GOARCH=amd64
set GOBIN=
set GOCACHE=C:\Users\hauntedness\AppData\Local\go-build
set GOENV=C:\Users\hauntedness\AppData\Roaming\go\env
set GOEXE=.exe
set GOEXPERIMENT=rangefunc
set GOFLAGS=
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOINSECURE=
set GOMODCACHE=C:\Users\hauntedness\go\pkg\mod
set GONOPROXY=
set GONOSUMDB=
set GOOS=windows
set GOPATH=C:\Users\hauntedness\go
set GOPRIVATE=
set GOPROXY=https://goproxy.cn,direct
set GOROOT=C:\Program Files\Go
set GOSUMDB=sum.golang.org
set GOTMPDIR=
set GOTOOLCHAIN=auto
set GOTOOLDIR=C:\Program Files\Go\pkg\tool\windows_amd64
set GOVCS=
set GOVERSION=go1.23.0
set GODEBUG=
set GOTELEMETRY=on
set GOTELEMETRYDIR=C:\Users\hauntedness\AppData\Roaming\go\telemetry
set GCCGO=gccgo
set GOAMD64=v1
set AR=ar
set CC=gcc
set CXX=g++
set CGO_ENABLED=1
set GOMOD=D:\temp\dot\go.mod
set GOWORK=
set CGO_CFLAGS=-O2 -g
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-O2 -g
set CGO_FFLAGS=-O2 -g
set CGO_LDFLAGS=-O2 -g
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m64 -mthreads -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=C:\Users\HAUNTE~1\AppData\Local\Temp\go-build2719328347=/tmp/go-build -gno-record-gcc-switches
```
### What did you do?
```go
package main

import (
    "fmt"
    "sync"
)

func main() {
    for i := range WaitAll(10) {
        fmt.Println(i)
    }
}

func WaitAll(total int) func(yield func(i int) bool) {
    return func(yield func(i int) bool) {
        wg := &sync.WaitGroup{}
        wg.Add(total)
        for i := range total {
            go func(j int) {
                defer wg.Done()
                if !yield(j) {
                    panic("Shouldn't be broken.")
                }
            }(i)
        }
        wg.Wait()
    }
}
```
See https://go.dev/play/p/_X0iHMy6lH1
### What did you see happen?
==================
panic: runtime error: range function continued iteration after loop body panic

goroutine 21 [running]:
main.main-range1(0x2)
    D:/temp/dot/play/iter.go:9 +0xf3
main.main.WaitAll.func1.1(0x2)
    D:/temp/dot/play/iter.go:21 +0x8c
created by main.main.WaitAll.func1 in goroutine 1
    D:/temp/dot/play/iter.go:19 +0x8c
exit status 2
### What did you expect to see?
No panic | NeedsInvestigation,compiler/runtime | medium | Critical |
2,468,447,284 | flutter | Update Material 3 `DatePicker` default size to use updated tokens | ### Use case
Date Picker currently uses hard-coded default size values instead of using the values generated by the Material Design tokens.
https://github.com/flutter/flutter/blob/ea87865364c60d482fb53f064f07c6ef172b214a/packages/flutter/lib/src/material/date_picker.dart#L43-L51
https://github.com/flutter/flutter/blob/ea87865364c60d482fb53f064f07c6ef172b214a/dev/tools/gen_defaults/data/date_picker_modal.json#L6-L11
These tokens are out of sync with the updated tokens database in the Flutter repository:
https://github.com/flutter/flutter/pull/120149/files#diff-000e91c61f8c573a6e6e5875f0f72d46ded315376423746c0ab977b50d266150
### Proposal
Update Date Picker to use the updated Material Design tokens
Related to https://github.com/flutter/flutter/issues/153397 | framework,f: material design,f: date/time picker,c: proposal,P2,team-design,triaged-design | low | Minor |
2,468,468,453 | next.js | Endless issues of Error: Cannot find module '....\.next\server\vendor-chunks\lib\worker.js' | ### Link to the code that reproduces this issue
https://github.com/jdoe802/pino-pretty-mongo-minimal
### To Reproduce
I have been working on this for a few weeks now. I was getting errors like the following when trying to implement a pino transport:
```
⨯ uncaughtException: Error: Cannot find module 'C:\Users\<redact>\.next\server\vendor-chunks\lib\worker.js'
    at Module._resolveFilename (node:internal/modules/cjs/loader:1145:15)
    at Module._load (node:internal/modules/cjs/loader:986:27)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:174:12)
    at MessagePort.<anonymous> (node:internal/main/worker_thread:186:26)
    at [nodejs.internal.kHybridDispatch] (node:internal/event_target:820:20)
    at MessagePort.<anonymous> (node:internal/per_context/messageport:23:28) {
  code: 'MODULE_NOT_FOUND',
  requireStack: []
}
Error: the worker thread exited
    at Worker.onWorkerExit (webpack-internal:///(ssr)/./node_modules/thread-stream/index.js:201:32)
    at Worker.emit (node:events:519:28)
    at [kOnExit] (node:internal/worker:315:10)
    at Worker.<computed>.onexit (node:internal/worker:229:20)
    at Worker.callbackTrampoline (node:internal/async_hooks:130:17)
⨯ node_modules\thread-stream\index.js (201:0) @ Worker.onWorkerExit
⨯ uncaughtException: Error: the worker thread exited
    at Worker.onWorkerExit (webpack-internal:///(ssr)/./node_modules/thread-stream/index.js:201:32)
    at Worker.emit (node:events:519:28)
    at [kOnExit] (node:internal/worker:315:10)
    at Worker.<computed>.onexit (node:internal/worker:229:20)
    at Worker.callbackTrampoline (node:internal/async_hooks:130:17)
```
This originally seemed to be fixed with the following workaround added to the next.config file (the worker.js, indexes, and wait files were all copied from the /node_modules/thread-stream/ folder):
```
function pinoWebpackAbsolutePath(relativePath) {
    console.log("relativepath:" + relativePath + " dirname:" + __dirname);
    console.log(path.resolve(__dirname, relativePath));
    return path.resolve(__dirname, relativePath);
}

globalThis.__bundlerPathsOverrides = {
    'thread-stream-worker': pinoWebpackAbsolutePath('./worker.js'),
    'indexes': pinoWebpackAbsolutePath('./indexes.js'),
    'wait': pinoWebpackAbsolutePath('./wait.js'),
};
```
However, after further inspection, some logs were output correctly with the pino transport while others caused this message:
```
Error: Cannot find module 'C:\Users\<redact>\.next\server\vendor-chunks\lib\worker.js'
    at Module._resolveFilename (node:internal/modules/cjs/loader:1145:15)
    at Module._load (node:internal/modules/cjs/loader:986:27)
    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:174:12)
    at MessagePort.<anonymous> (node:internal/main/worker_thread:186:26)
    at [nodejs.internal.kHybridDispatch] (node:internal/event_target:820:20)
    at MessagePort.<anonymous> (node:internal/per_context/messageport:23:28)
Emitted 'error' event on Worker instance at:
    at [kOnErrorMessage] (node:internal/worker:326:10)
    at [kOnMessage] (node:internal/worker:337:37)
    at MessagePort.<anonymous> (node:internal/worker:232:57)
    at [nodejs.internal.kHybridDispatch] (node:internal/event_target:820:20)
    at MessagePort.<anonymous> (node:internal/per_context/messageport:23:28)
    at MessagePort.callbackTrampoline (node:internal/async_hooks:130:17)
    at [kOnExit] (node:internal/worker:304:5)
    at Worker.<computed>.onexit (node:internal/worker:229:20)
    at Worker.callbackTrampoline (node:internal/async_hooks:130:17) {
  code: 'MODULE_NOT_FOUND',
  requireStack: []
}
```
The logging transport that is triggering all these issues:
```
import pino, {Logger} from "pino";

export const masterLogger = pino({
    level: `${process.env.NEXT_PUBLIC_PINO_LOG_LEVEL ?? "debug"}`,
    redact: ['email', 'profileName', 'password', 'address'],
    //timestamp: () => `",timestamp":"${new Date(Date.now()).toISOString()}"`,
    transport: {
        target: 'pino-mongodb',
        options: {
            uri: process.env.MONGODB_URI,
            database: 'dev',
            collection: 'log-collection',
        },
    },
});

masterLogger.info('hello');
```
I've tried multiple solutions (commented out in the repo). Overall, it is logging in some places but not in others in my full code repo. **It logs to console in all files when I don't add the transport streams.** After adding the transport streams and multiple workarounds, I am getting this. If Next.js could just update its files so that worker.js is properly found in the first place, many issues would be avoided.
### Current vs. Expected behavior
I expected the logging to work without throwing worker.js related issues
### Provide environment information
```bash
Operating System:
  Platform: win32
  Arch: x64
  Version: Windows 11 Enterprise
Binaries:
  Node: 20.16.0
  npm: N/A
  Yarn: N/A
  pnpm: N/A
Relevant Packages:
  next: 14.1.3
  eslint-config-next: 14.1.3
  react: 18.2.0
  react-dom: 18.2.0
  typescript: 5.4.2
Next.js Config:
  output: N/A
```
### Which area(s) are affected? (Select all that apply)
Output (export/standalone), Webpack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
This issue is happening locally . | bug,Output (export/standalone),Webpack | low | Critical |
2,468,486,310 | pytorch | No way for low-overhead total norm in native PyTorch with large number of tensors | **Context**
Gradient norm clipping is a popular technique for stabilizing training, which requires computing the total norm with respect to the model's gradients. This involves a norm reduction across all of the gradients down to a single scalar.
PyTorch's `clip_grad_norm_` offers both single-tensor (`foreach=False`) and multi-tensor (`foreach=True`) implementations. However, even the multi-tensor `foreach` implementation incurs high CPU overhead when computing the total norm over a large number of tensors.
https://github.com/pytorch/pytorch/blob/3434a54fba537f67d152b50142b211d4aa059e67/torch/nn/utils/clip_grad.py#L81-L96
The foreach implementation involves three steps (a code sketch follows the list):
1. `torch._foreach_norm(tensors)` to return a list of 0D scalars, representing the norm of each gradient
2. `torch.stack` to cat the 0D scalars into a 1D tensor
3. `torch.linalg.vector_norm()` to compute the norm of the norms, representing the final total norm
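Put together, a minimal sketch of these three steps (not the actual `clip_grad_norm_` source; assumes the default 2-norm):

```python
import torch

def total_norm_foreach(grads: list[torch.Tensor]) -> torch.Tensor:
    # Step 1: one fused reduction kernel, but N 0D output tensors get allocated.
    per_grad_norms = torch._foreach_norm(grads)
    # Step 2: N unsqueezes plus a cat to build a 1D tensor of the N norms.
    stacked = torch.stack(per_grad_norms)
    # Step 3: reduce the norm-of-norms down to a single scalar.
    return torch.linalg.vector_norm(stacked, ord=2.0)
```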
**Issue**
The foreach implementation incurs much unnecessary CPU overhead, making the clipping heavily CPU bound when operating on a large number of tensors. For example, for ~2000 tensors, the total norm calculation takes >18 ms on CPU, while only 1.3 ms on GPU (for a particular real workload -- larger tensors would make this slower).

Assuming `N` tensors, some inefficiencies in the existing implementation arise from:
- `_foreach_norm` must call `N` `aten::empty({})` to construct the `N` 0D scalar outputs ([code](https://github.com/pytorch/pytorch/blob/bedf96d7ffe74b34bcfe52c7ae1ae05f40d6c8ee/aten/src/ATen/native/cuda/ForeachReduceOp.cu#L444)).
- The `N` 0D scalars need to be `stack`ed for the final norm reduction. `stack` requires 1D tensors, so each of the `N` 0D scalars gets unsqueezed again in `stack`.
Together, this leads to `2N` extra dispatcher calls to handle the `N` intermediate scalars. Ideally, we could avoid materializing these `N` intermediates, especially as `torch.Tensor`s; one option is a fused kernel.
Today, `torch.compile` cannot address this issue in a satisfying way. Default `torch.compile` cannot achieve horizontal fusion, leading to slower performance than eager mode. `torch.compile(mode="reduce-overhead")` does reduce overhead more than eager but results in 2x memory usage, likely due to copying the gradients into CUDA graph addresses. Note that we likely cannot mark the inputs as static because gradients are computed anew every iteration at possibly different addresses, and for my use case, we cannot compile the gradient allocation with `torch.compile` (due to FSDP).
Here is an example script for getting profiler traces of various implementations: P1529496302
The ask is to provide _some_ native way to make this total norm calculation not CPU-overhead bound, e.g. via a fused op.
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @crcrpar @mcarilli @janeyx99 @ezyang @chauhang @penguinwu | module: nn,triaged,module: mta,oncall: pt2 | low | Major |
2,468,498,112 | rust | suboptimal llvm-ir due to missing noalias annotations after mir inlining | <!--
Thank you for filing a regression report! 🐛 A regression is something that changed between versions of Rust but was not supposed to.
Please provide a short summary of the regression, along with any information you feel is relevant to replicate it.
-->
suboptimal llvm-ir due to missing noalias annotations after mir inlining
### Code
I tried this code:
```rust
#[inline(never)]
#[no_mangle]
pub unsafe fn noalias_ptr(x: *mut i32, y: *mut i32) -> i32 {
    #[inline]
    fn noalias_ref(x: &mut i32, y: &mut i32) -> i32 {
        *x = 16;
        *y = 12;
        *x
    }

    let x = &mut *x;
    let y = &mut *y;
    noalias_ref(x, y)
}
```
I expected to see this happen: generated asm on x86-64
```asm
noalias_ptr:
        mov dword ptr [rdi], 16
        mov dword ptr [rsi], 12
        mov eax, 16
        ret
```
Instead, this happened: generated asm on x86-64
```asm
noalias_ptr:
        mov dword ptr [rdi], 16
        mov dword ptr [rsi], 12
        mov eax, dword ptr [rdi]
        ret
```
### Version it worked on
<!--
Provide the most recent version this worked on, for example:
It most recently worked on: Rust 1.47
-->
It most recently worked on: rust 1.64
### Version with regression
<!--
Provide the version you are using that has the regression.
-->
`rustc --version --verbose`:
```
rustc 1.80.0 (051478957 2024-07-21)
binary: rustc
commit-hash: 051478957371ee0084a7c0913941d2a8c4757bb9
commit-date: 2024-07-21
host: x86_64-unknown-linux-gnu
release: 1.80.0
LLVM version: 18.1.7
Compiler returned: 0
```
### Fun fact
changing `noalias_ref(x, y)` to `(noalias_ref as fn(_, _) -> _)(x, y)` "fixes" the issue in this case, since it seems to block the mir inliner
@rustbot modify labels: +regression-from-stable-to-stable-regression-untriaged | P-medium,C-bug,A-mir-opt-inlining,regression-untriaged | low | Minor |
2,468,505,624 | godot | Exported values stuck at default even when changed in inspector | ### Tested versions
- Tested in 4.3-rc3 and 4.3-stable
### System information
Windows 11; Godot 4.3-stable; Forward +
### Issue description
I have code that has several exported variables:
```gdscript
@export_group("Assists")
## The coyote time duration in seconds.
## The player will be able to jump for this extra amount of time after
## stopped touching the ground. It reduces the risk of the player falling off the platform
## when intended to jump.
@export_range(0.0, 1.0) var coyote_duration: float = 0.1
## Display the coyote timer debug label.
@export var debug_coyote: bool = false
```
The problem didn't occur initially. Now, when I change the exported value in the editor (inspector), it doesn't update during gameplay. Interestingly, it does update the value in the scene file.

During gameplay I added a quick debug print:
```gdscript
func start_coyote(delta):
print("Starting coyote with duration of %.3fs" % coyote_duration)
print("Delta is %.3fs" % delta)
var time_left = coyote_duration - delta
print("Time left is %.3fs" % time_left)
coyote_timer.start(time_left)
is_coyote = true
coyote_started.emit()
```
Even though the value is set to `0.28` in the scene, during runtime the value shows as the default `0.1`:
```
Starting coyote with duration of 0.100s
Delta is 0.017s
Time left is 0.083s
```
The same goes for the checkbox and other values. They seem to stop reacting to changes in the editor.
### Steps to reproduce
Not sure how to reproduce as this worked flawlessly for a while and then out of nowhere it stopped working without making any related code changes.
### Minimal reproduction project (MRP)
The whole project is available at GitHub (including assets): https://github.com/Xkonti/PlatformerToolkitGodot | bug,topic:editor,needs testing | low | Critical |
2,468,559,050 | flutter | Check .github/labeler.yml in packages metadata validator | There's a check in the flutter/packages CI now that makes sure we don't forget to add new packages to things like the README table and the dependabot config; we should also be checking that `.github/labeler.yml` is updated to include the new package in that check. | team,package,team-ecosystem,P2,triaged-ecosystem | low | Minor |
2,468,574,919 | vscode | Editor GPU: Persist texture atlas to disk | To save warming up the atlas every start up for every window, we could persist to disk and invalidate it when things like theme/font size/etc. change. | plan-item,editor-gpu | low | Minor |
2,468,597,019 | godot | Unfocused windows can't be dragged. | ### Tested versions
- Issue Version: v4.3.stable.official [77dcf97d8]
- Working Version: v4.2.2.stable.official [15073afe3]
### System information
Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2070 SUPER (NVIDIA; 32.0.15.6070) - AMD Ryzen 7 5700X 8-Core Processor (16 Threads)
### Issue description
Provided is a video of the Project Manager, but the same issue happens with the Editor as well.
When the window is unfocused and you go to drag it via the title bar, the window will not drag.
https://github.com/user-attachments/assets/d4baae6a-70d0-40b2-a4cc-74be1098b0d3
### Steps to reproduce
- Open Godot.
- Un-focus the window.
- Attempt to drag the window via the title bar.
### Minimal reproduction project (MRP)
N/A | bug,platform:windows,topic:porting,confirmed,regression,topic:gui | low | Minor |
2,468,603,966 | pytorch | [torchbind x compile] Can't register a torchbind operator that mutates a tensor | cc @ezyang @chauhang @penguinwu @bdhirsh | triaged,module: torchbind,oncall: pt2,module: pt2-dispatcher,vllm-compile | low | Minor |
2,468,619,051 | rust | Inefficient Match Statement Optimization for Unit-Only Enums with Fixed Offsets | **Description:**
The `match` statement in the provided Rust code generates a jump table with `-O3` optimizations, despite the fact that the discriminant values for the enum variants have a fixed offset from their target characters. This results in suboptimal code generation when a more efficient approach is possible.
**Reproduction:**
```rust
use std::fmt::{Display, Formatter};

#[derive(Copy, Clone)]
enum Letter {
    A,
    B,
    C,
}

impl Display for Letter {
    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
        let letter = match self {
            Letter::A => 'a',
            Letter::B => 'b',
            Letter::C => 'c',
        };
        write!(f, "{}", letter)
    }
}
```
**Expected Behavior:**
Since each discriminant value of the `Letter` enum is exactly 97 units away from the corresponding target characters in the `match` statement, the compiler should ideally optimize this code to use a direct calculation rather than generating a jump table.
**Example of Hand-Optimized Code:**
```rust
use std::fmt::{Display, Formatter};

#[derive(Copy, Clone)]
enum Letter {
    A,
    B,
    C,
}

impl Display for Letter {
    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
        let letter = ((*self as u8) + 97) as char;
        write!(f, "{}", letter)
    }
}
```
**Compiler Explorer Links:**
- **Original Code with Suboptimal Optimization:** [View on Compiler Explorer](https://godbolt.org/z/fKTMETrfs)
- **Hand-Optimized Code:** [View on Compiler Explorer](https://godbolt.org/z/oP4s643hT)
**Additional Notes:**
Although manual optimization achieves the desired performance, automatic optimization by the compiler is preferred to ensure both efficiency and maintainability of the code. | A-LLVM,I-slow,A-codegen,T-compiler,A-patterns,llvm-fixed-upstream,C-optimization | low | Major |
2,468,623,711 | ollama | Dynamic Functions Load | Hi All,
As this is my first post here, I'd like to say thank you for the great work everyone is doing! I love `ollama` and use it constantly.
It is great that function support has been added, and in relation to that I was wondering if you could extend it by adding an option to load preconfigured functions from a JSON file somewhere, e.g. `~/.config/ollama_func.json` or `~/.ollama/func.json`, with something like:
```
{
  "user_functions": [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {
              "type": "string",
              "description": "The name of the city"
            }
          },
          "required": [
            "city"
          ]
        }
      }
    }
  ]
}
```
and, on server (re)start, automatically add these to the tool list, or create the file if it doesn't exist.
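To illustrate the intent, here is a rough sketch of what every client currently has to do by hand (assuming the official `ollama` Python client; `func.json` is the hypothetical config file from above):

```python
import json

import ollama  # official ollama-python client, assumed installed

# Today each client must load and pass the tool definitions itself;
# this request would have the server pick them up automatically on start.
with open("func.json") as f:  # hypothetical path mirroring the example above
    tools = json.load(f)["user_functions"]

response = ollama.chat(
    model="llama3.1",
    messages=[{"role": "user", "content": "What is the weather in Toronto?"}],
    tools=tools,
)
print(response["message"])
```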
Cheers,
Ivo | feature request | low | Minor |
2,468,658,055 | rust | Significantly worse codegen for SIMD shuffles after Rust 1.73.0 | See https://godbolt.org/z/jKczKPMMe.
The code is the following:
```rust
use std::arch::x86_64::*;

#[no_mangle]
pub unsafe fn custom_shuffle(x: __m128i) -> __m128i {
    let mut tmp1 = _mm_unpacklo_epi8(x, _mm_setzero_si128());
    tmp1 = _mm_shuffle_epi32::<78>(tmp1);
    tmp1 = _mm_shufflelo_epi16::<27>(tmp1);
    tmp1 = _mm_shufflehi_epi16::<27>(tmp1);

    let mut tmp2 = _mm_unpackhi_epi8(x, _mm_setzero_si128());
    tmp2 = _mm_shuffle_epi32::<78>(tmp2);
    tmp2 = _mm_shufflelo_epi16::<27>(tmp2);
    tmp2 = _mm_shufflehi_epi16::<27>(tmp2);

    _mm_packus_epi16(tmp2, tmp1)
}
```
The regression happened between Rust 1.72.0 and Rust 1.73.0.
| A-LLVM,I-slow,P-medium,regression-untriaged,llvm-fixed-upstream,C-optimization | low | Minor |
2,468,737,604 | godot | Gamepad face buttons are not recognized correctly | ### Tested versions
- Reproducible in: 4.3.stable
- Not reproducible in: 4.2.2.stable
### System information
Godot v4.3.stable - macOS 14.5.0 - Vulkan (Forward+) - integrated Apple M1 - Apple M1 (8 Threads)
### Issue description
After updating from 4.2.2 to 4.3, the face buttons on my gamepad started being recognized incorrectly by Godot.
- The Top Action is recognized as the Left Action.
- The Left Action is recognized as the Top Action.
- The Bottom Action is recognized as the Right Action.
- The Right Action is recognized as the Bottom Action.
The gamepad I use is an 8bitdo Pro 2 (https://www.8bitdo.com/pro2/), connected via wired connection to my computer.
I tried going to a 4.2.2 project, and it's still recognized properly there.
### Steps to reproduce
1. In a 4.3 project, go to the Input Map.
2. Try to set an input with a gamepad's face buttons.
Note that since I only have access to one gamepad, I'm not sure if this can be reproduced with other gamepads.
### Minimal reproduction project (MRP)
N/A | bug,platform:macos,topic:input,regression | low | Minor |
2,468,738,864 | stable-diffusion-webui | [Bug]: Gallery opens next image in Firefox (115.14.0esr) | ### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [X] The issue has been reported before but has not been fixed yet
### What happened?
When generating images in txt2img using Firefox, clicking on an image to enlarge it opens the next one instead, and the gallery switches to the next image too.
### Steps to reproduce the problem
1. Generate a batch of images.
2. Click on any image in gallery.
3. The next image in the gallery opens.
### What should have happened?
The image that was selected when clicked on should have opened.
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
[sysinfo-2024-08-15-18-45.json](https://github.com/user-attachments/files/16629052/sysinfo-2024-08-15-18-45.json)
### Console logs
```Shell
venv "C:\Users\XXXXXXX\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments: --xformers --medvram-sdxl
ControlNet preprocessor location: C:\Users\XXXXXXX\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2024-08-15 20:48:24,058 - ControlNet - INFO - ControlNet v1.1.455
Loading weights [fe7578cb5e] from C:\Users\XXXXXXX\stable-diffusion-webui\models\Stable-diffusion\1.5\Real\realisticVisionV60B1_v60B1VAE.safetensors
Creating model from config: C:\Users\mir\stable-diffusion-webui\configs\v1-inference.yaml
C:\Users\XXXXXXX\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:1150: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
2024-08-15 20:48:24,388 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 16.6s (prepare environment: 6.2s, import torch: 2.7s, import gradio: 1.5s, setup paths: 1.5s, initialize shared: 1.3s, other imports: 0.9s, load scripts: 1.6s, create ui: 0.5s, gradio launch: 0.2s).
Applying attention optimization: xformers... done.
Model loaded in 2.4s (create model: 0.5s, apply weights to model: 1.0s, calculate empty prompt: 0.8s).
100%|...| 20/20 [00:03<00:00, 5.34it/s]
100%|...| 20/20 [00:02<00:00, 8.23it/s]
```
### Additional information
I used a fresh profile of Firefox 115.14.0esr.
It works on Microsoft Edge.
There was an old bug report of this two years ago (#5941) that says it was fixed, but it is bugged now. | bug-report,help-wanted | low | Critical |
2,468,742,405 | flutter | document how to SSH into Chromium VMs | We are experiencing a failure that only happens in CI. Understanding (and fixing) this failure is critical to our testing efforts.
Can someone with knowledge explain how to access our machines? | team-infra,P1,infra: security,triaged-infra,:hourglass_flowing_sand: | medium | Critical |
2,468,774,364 | godot | GDShader documentation comments for per-`instance uniform` does not show in inspector | ### Tested versions
- Reproducible in v4.3.stable.mono.official [77dcf97d8]
### System information
Godot v4.3.stable.mono - Ubuntu 24.04 LTS 24.04 - X11 - Vulkan (Forward+) - integrated Intel(R) HD Graphics 5500 (BDW GT2) - Intel(R) Core(TM) i5-5300U CPU @ 2.30GHz (4 Threads)
### Issue description
Support for showing GDShader documentation comments in the inspector was added by #90161, addressing godotengine/godot-proposals#7846.
However, the PR only handles regular `uniform` docs, not `instance uniform`.
I was told by @Calinou to open this:
> I suggest opening an issue on the main Godot repository for this. I consider this a bug, since I'd expect per-instance uniforms to be documentable the same way.
> Global uniforms don't appear in the inspector (but appear in the project settings instead), so documenting them would need a separate proposal.
### Steps to reproduce
```glsl
/** Should we also somehow show documentation for the whole shader file here? */
shader_type spatial;
/** [i]Already[/i] shows documentation for this per-[Material] [code]uniform[/code].
* Supports [b]BBCode[/b] and stuff.
*/
uniform vec4 material_color : source_color = vec4(1.0, 0.5, 0.0, 1.0);
/** [i]Should[/i] show documentation for this per-[code]instance[/code] uniform.
* Supporting [b]BBCode[/b] and stuff.
*/
instance uniform vec4 instance_color : source_color = vec4(0.0, 2.5, 0.0, 1.0);
void fragment() {
    ALBEDO = material_color.rgb;
    EMISSION = instance_color.rgb;
}
```
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,topic:shaders | low | Critical |
2,468,776,503 | godot | Assigning imported .gltf / .glb Root Type to a Custom Class which extends from another Custom Class fails. | ### Tested versions
- Reproduced in 4.2.stable, 4.3.rc1, 4.3.rc2, 4.3.rc3, and 4.3.stable.
### System information
Windows 10.0.22621 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4080 (NVIDIA; 32.0.15.6081) - 13th Gen Intel(R) Core(TM) i7-13700KF (16 Threads)
### Issue description
If a class extends from another custom class, setting an imported .gltf / .glb to use said class as a root node fails.
The first class has the following code (simplified for this example):
```gdscript
extends Node3D
class_name OuterClass
```
The second class is the following:
```gdscript
extends OuterClass
class_name InnerClass
# When attempting to load this class on Import, it generates the error "Cannot get class "OuterClass""
```
When attempting to assign the Root Type of an imported scene (in this example a .glb) to InnerClass, the editor reports the error that "Cannot get class 'OuterClass'."
However, when changing the second class to use a file path instead of a class name after extends, the code works fine.
```gdscript
extends "res://outerClass.gd"
class_name innerClass
# This one works.
```
The class assignment seems to work fine when creating a new node in the editor and only produces this error when assigning the node type to an imported file.
I would expect that assigning this class to a .gltf / .glb should not be dependent on whether a class name or file path is used for the extension.
Note: I only tested this with .gltf and .glb files.
### Steps to reproduce
To view the error, you can open the minimal reproduction project and try to reimport the file "Cube.gltf". Doing so will report an error in the Output panel. This error can be fixed by changing the class of the cube to "innerClassWorking".
### Minimal reproduction project (MRP)
[ClassAssignmentTest.zip](https://github.com/user-attachments/files/16629326/ClassAssignmentTest.zip)
| bug,topic:import | low | Critical |
2,468,802,423 | react-native | Flatten margin styles are not properly overriding when applied from specific to general properties | ### Description
Flattening the styles `{ marginTop: 15 }` and `{ marginVertical: 0 }` results in a component with `margin: 0`; however, a "ghost" hidden component retains the `marginTop: 15`. A complete description can be found here: https://github.com/HathorNetwork/hathor-wallet-mobile/issues/532.
### Steps to reproduce
Use the flatten style syntax with a `flag` in a component, like: `style={[{ marginTop: 15 }, flag && { marginVertical: 0 }]}`
### React Native Version
0.75.1 (Verified it happens on latest version)
### Affected Platforms
Runtime - Android, Runtime - iOS
### Output of `npx react-native info`
```text
System:
  OS: macOS 14.5
  CPU:
  Memory:
  Shell:
    version: "5.9"
    path:
Binaries:
  Node:
    version: 20.16.0
    path:
  Yarn:
    version: 3.7.0
    path:
  npm:
    version: 10.8.1
    path:
  Watchman: Not Found
Managers:
  CocoaPods: Not Found
SDKs:
  iOS SDK:
    Platforms:
      - DriverKit 23.5
      - iOS 17.5
      - macOS 14.5
      - tvOS 17.5
      - visionOS 1.2
      - watchOS 10.5
  Android SDK:
    API Levels:
      - "30"
      - "31"
      - "32"
      - "33"
      - "33"
    Build Tools:
      - 30.0.2
      - 30.0.3
      - 31.0.0
      - 33.0.0
      - 33.0.2
      - 34.0.0
    System Images:
      - android-30 | Google APIs ARM 64 v8a
      - android-31 | Google APIs ARM 64 v8a
      - android-32 | Google APIs ARM 64 v8a
      - android-33 | Google APIs ARM 64 v8a
      - android-33 | Google Play ARM 64 v8a
    Android NDK: Not Found
IDEs:
  Android Studio: 2022.1 AI-221.6008.13.2211.9619390
  Xcode:
    version: 15.4/15F31d
    path:
Languages:
  Java:
    version: 11.0.19
    path:
  Ruby:
    version: 2.6.10
    path:
npmPackages:
  "@react-native-community/cli": Not Found
  react:
    installed: 18.2.0
    wanted: 18.2.0
  react-native:
    installed: 0.72.5
    wanted: 0.72.5
  react-native-macos: Not Found
npmGlobalPackages:
  "*react-native*": Not Found
Android:
  hermesEnabled: Not found
  newArchEnabled: Not found
iOS:
  hermesEnabled: Not found
  newArchEnabled: Not found
```
### Stacktrace or Logs
```text
Not needed.
```
### Reproducer
https://snack.expo.dev/@alexruzenhack/margin-rules-application
https://github.com/dream-sports-labs/reproducer-react-native
### Screenshots and Videos
The images are in this issue: https://github.com/HathorNetwork/hathor-wallet-mobile/issues/532 | Issue: Author Provided Repro | low | Major |
2,468,815,859 | godot | PS5 controller doesn't work in 4.3-stable | ### Tested versions
- Reproducible in: 4.3-stable
- Not reproducible in: 4.2-stable
### System information
Godot v4.3.stable - macOS 14.3.1 - GLES3 (Compatibility) - Apple M1 Max - Apple M1 Max (10 Threads)
### Issue description
PS5 controller no longer registers any inputs after updating from Godot 4.2-stable to 4.3-stable. No errors in console. Installed 4.2 again and confirmed the PS5 controller works again in that version.
### Steps to reproduce
Run "2D Platformer Demo" in Godot 4.3-stable w/ a PS5 controller connected and try playing w/ PS5 controller.
### Minimal reproduction project (MRP)
[2d_platformer.zip](https://github.com/user-attachments/files/16629678/2d_platformer.zip)
| bug,platform:macos,confirmed,topic:input,regression | low | Critical |
2,468,851,015 | ollama | Full(er) JSON Schema support for tool calling | Currently, `parameters` in the tool definition is a very limited subset of JSON Schema. This makes it incompatible with OpenAI (https://github.com/ollama/ollama/issues/6155), and in general it makes it really hard to use, because you cannot pass a JSON Schema as `parameters` but have to manually map it to the structure the tool definition expects. That is good enough if you are writing a tool definition by hand, but hard if you have an automatic process that generates the JSON Schema (for example, I use the https://github.com/invopop/jsonschema Go package to generate a JSON Schema from a Go struct automatically, which works great with other API providers like OpenAI).
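As a Python analogue of that workflow (my own illustration, using pydantic instead of the Go package), note how even a trivial model produces schema keywords that the current fixed structure cannot express:

```python
from pydantic import BaseModel, Field

# Auto-generated schemas routinely contain keywords such as anyOf, $defs,
# or nested objects that the current fixed `parameters` structure rejects.
class WeatherQuery(BaseModel):
    city: str = Field(description="The name of the city")
    units: str | None = None  # optional field -> anyOf [string, null] in the schema

print(WeatherQuery.model_json_schema())
```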
So I would suggest that the API be relaxed to accept an embedded JSON Schema rather than only the fixed structure it currently allows. | feature request,api | low | Major |
2,468,853,266 | PowerToys | Feature request: Right click on folder to add to path | ### Description of the new feature / enhancement
I often install software with winget that doesn't add the program to the file system path. I use PowerToys to add these entries to my path. It would be nice if I could right-click on a folder in Explorer and add it to the path as well.
### Scenario when this would be used?
Installing software which doesn't modify the path so that it can be called from PowerShell.
### Supporting information
_No response_ | Idea-New PowerToy,Needs-Triage | low | Minor |
2,468,869,713 | rust | Nit: E0599 help suggests removing arguments instead of argument list | ### Code
```Rust
#[allow(dead_code)]
fn main() {
    struct X {
        y: bool,
    }

    let y = true;
    let _ = X{ y }.y();
}
```
### Current output
```Shell
   Compiling playground v0.0.1 (/playground)
error[E0599]: no method named `y` found for struct `X` in the current scope
 --> src/main.rs:7:20
  |
3 |     struct X {
  |     -------- method `y` not found for this struct
...
7 |     let _ = X{ y }.y();
  |                    ^-- help: remove the arguments
  |                    |
  |                    field, not a method

For more information about this error, try `rustc --explain E0599`.
error: could not compile `playground` (bin "playground") due to 1 previous error
```
### Desired output
```Shell
   Compiling playground v0.0.1 (/playground)
error[E0599]: no method named `y` found for struct `X` in the current scope
 --> src/main.rs:7:20
  |
3 |     struct X {
  |     -------- method `y` not found for this struct
...
7 |     let _ = X{ y }.y();
  |                    ^-- help: remove the argument list
  |                    |
  |                    field, not a method

For more information about this error, try `rustc --explain E0599`.
error: could not compile `playground` (bin "playground") due to 1 previous error
```
### Rationale and extra context
The arguments are the individual elements of the argument list, so the current help wording is technically incorrect.
### Other cases
_No response_
### Rust Version
```Shell
1.80.1
```
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |
2,468,879,302 | pytorch | Dev environment does not support windows | ### 🐛 Describe the bug
Hi,
Just a quick question: why does the project not seem to support development on Windows?
- I had many difficulties installing the tools required for CUDA development, forcing me to skip using a GPU during development.
- The CI failed on a few of my pull requests due to linting, but linting on Windows is near impossible:
  - There is no public `lintrunner` wheel on PyPI for Windows amd64.
  - The CLANGFORMAT check fails because no Windows configuration is provided in `tools/linter/adapters/s3_init_config.json`.

I don't know if this could be of any help, but `clang-format` and `clang-tidy` at least are available as Python wheels on PyPI.
There is surely a reason for this; could you communicate it?
I tried my best to fill the gaps, but my understanding of the PyTorch repo is not enough. For `tools/linter/adapters/s3_init_config.json` I don't really know what I am supposed to do. In the meantime, I'll move to Linux or a dev container.
Many thanks!
### Versions
main
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex | module: build,module: windows,triaged | low | Critical |
2,468,939,691 | pytorch | Binary Cross Entropy crashes kernel on apple silicon with singleton dimensions | ### 🐛 Describe the bug
`torch.nn.functional.binary_cross_entropy` crashes when the values have singleton dimensions. This error cannot be caught.
Sample code:
```python
import torch
import torch.nn.functional
device = "mps"
pred = torch.ones([2, 1, 3], device=device, requires_grad=True) * 0.5
target = torch.ones([2, 1, 3], device=device)
loss = torch.nn.functional.binary_cross_entropy(pred, target, weight=None, reduction="none")
loss.sum().backward()
```
This just sets up running binary_cross_entropy on predictions of 0.5 and targets of 1. I wasn't able to capture the result of this exact example, but here's a stderr dump from a similar situation in my project:
```
error: input types 'tensor<64x637x1xf32>' and 'tensor<64x637xf32>' are not broadcast compatible
LLVM ERROR: Failed to infer result type(s).
```
I am guessing the Apple silicon implementation incorrectly squashes those singleton dimensions (e.g. from 64x637x1 to 64x637).
In the given example, if you change device to cpu or change the shape to [2, 3] then it succeeds. If you change shape to [2, 3, 1] or [1, 2, 3] then it still crashes.
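A possible workaround sketch (my assumption, based on the observation above that plain `[2, 3]` shapes succeed): squeeze the singleton dimension before the call.

```python
import torch
import torch.nn.functional as F

device = "mps"
pred = torch.ones([2, 1, 3], device=device, requires_grad=True) * 0.5
target = torch.ones([2, 1, 3], device=device)

# Squeezing dim 1 turns the [2, 1, 3] inputs into [2, 3], which is reported to work.
loss = F.binary_cross_entropy(pred.squeeze(1), target.squeeze(1), reduction="none")
loss.sum().backward()
```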
### Versions
```
Collecting environment information...
PyTorch version: 2.3.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.3.9.4)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.9 (v3.11.9:de54cf5be3, Apr 2 2024, 07:12:50) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime)
Python platform: macOS-14.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Pro
Versions of relevant libraries:
[pip3] mypy==1.10.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.0
[pip3] torch==2.3.0
[conda] Could not collect
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | module: crash,triaged,module: mps | low | Critical |
2,468,979,965 | pytorch | RuntimeError: free_upper_bound + pytorch_used_bytes[device] | ### 🐛 Describe the bug
I wasn't sure if this belonged here, but since the error message says to report a bug to PyTorch, I will.
Traceback
```
Traceback (most recent call last):
  File "C:\AI Stuff\ComfyUI\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI Stuff\ComfyUI\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI Stuff\ComfyUI\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI Stuff\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Bringing-Old-Photos-Back-to-Life\nodes.py", line 300, in run
    return RestoreOldPhotos.restore(image, bopbtl_models, scratch_mask)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI Stuff\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Bringing-Old-Photos-Back-to-Life\nodes.py", line 291, in restore
    restored_image = model.inference(transformed_image, transformed_mask)[0]
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI Stuff\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Bringing-Old-Photos-Back-to-Life\Global\models\mapping_model.py", line 344, in inference
    label_feat_map=self.mapping_net.inference_forward(label_feat.detach(),inst_data)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI Stuff\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Bringing-Old-Photos-Back-to-Life\Global\models\NonLocal_feature_mapping_model.py", line 195, in inference_forward
    x2 = self.NL_scale_1.inference_forward(x1,mask)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI Stuff\ComfyUI\ComfyUI\custom_nodes\ComfyUI-Bringing-Old-Photos-Back-to-Life\Global\models\networks.py", line 773, in inference_forward
    concat_1=self.F_Combine(concat_1)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI Stuff\ComfyUI\ComfyUI\NEWvenv\Lib\site-packages\torch\nn\modules\module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI Stuff\ComfyUI\ComfyUI\NEWvenv\Lib\site-packages\torch\nn\modules\module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI Stuff\ComfyUI\ComfyUI\NEWvenv\Lib\site-packages\torch\nn\modules\conv.py", line 458, in forward
    return self._conv_forward(input, self.weight, self.bias)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\AI Stuff\ComfyUI\ComfyUI\NEWvenv\Lib\site-packages\torch\nn\modules\conv.py", line 454, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: free_upper_bound + pytorch_used_bytes[device] <= device_total INTERNAL ASSERT FAILED at "C:\\actions-runner\\_work\\pytorch\\pytorch\\builder\\windows\\pytorch\\c10\\cuda\\CUDAMallocAsyncAllocator.cpp":542, please report a bug to PyTorch.
```
### Versions
Versions
```
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Home
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: N/A
Python version: 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22635-SP0
Is CUDA available: True
CUDA runtime version: 12.6.20
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 560.81
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=3600
DeviceID=CPU0
Family=107
L2CacheSize=3072
L2CacheSpeed=
Manufacturer=AuthenticAMD
MaxClockSpeed=3600
Name=AMD Ryzen 5 5500
ProcessorType=3
Revision=20480
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.2
[pip3] onnxruntime==1.18.1
[pip3] onnxruntime-gpu==1.18.1
[pip3] open_clip_torch==2.26.1
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.4.0+cu121
[pip3] torchaudio==2.4.0+cu121
[pip3] torchmetrics==1.4.1
[pip3] torchsde==0.2.6
[pip3] torchvision==0.19.0+cu121
```
cc @ptrblck @msaroufim | module: cuda,triaged,module: CUDACachingAllocator | low | Critical |
2,468,986,915 | rust | `std::thread::sleep` does not document its interaction with signals | ### Location
`std::thread::sleep`
### Summary
POSIX permits the C function `sleep` to be implemented using the `SIGALRM` signal ([`sleep(3)`](https://www.man7.org/linux/man-pages/man3/sleep.3.html)), meaning it is non-portable to mix use of that signal with sleep. The `nanosleep` function, which `std::thread::sleep` uses, is required by POSIX not to interfere with signals ([`nanosleep(2)`](https://www.man7.org/linux/man-pages/man2/nanosleep.2.html)), so it should in fact be OK currently to mix `std::thread::sleep` with `SIGALRM`. However, this is not actually documented. Given that the Rust function is called "sleep", it is easy to be concerned that there might be a problem.
Could `std::thread::sleep` make a documented commitment not to interfere with `SIGALRM`? | T-libs-api,A-docs,T-libs | low | Minor |
2,469,006,166 | next.js | Conditional building pages with export: output is not supported | ### Link to the code that reproduces this issue
[Sandbox Link](https://codesandbox.io/p/devbox/nostalgic-benz-kvvqk2?workspaceId=e31ff887-e28d-4cc0-9110-18046c9484ab)
### To Reproduce
1. Run `npm run build`
2. You get the error `Page "${page}" is missing "generateStaticParams()" so it cannot be used with "output: export" config.`
### Current vs. Expected behavior
When building a static site with output: "export", I expect the empty dynamic routes to be ignored.
The current behavior is to throw an error on build, forcing me to rename the file from page.tsx to something else, thus making the build ignore this dynamic route's page generation.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Sun Aug 6 20:05:33 UTC 2023
Available memory (MB): 4102
Available CPU cores: 2
Binaries:
Node: 20.9.0
npm: 9.8.1
Yarn: 1.22.19
pnpm: 8.10.2
Relevant Packages:
next: 15.0.0-canary.115 // Latest available version is detected (15.0.0-canary.115).
eslint-config-next: N/A
react: 19.0.0-rc-187dd6a7-20240806
react-dom: 19.0.0-rc-187dd6a7-20240806
typescript: 5.3.3
Next.js Config:
output: export
```
### Which area(s) are affected? (Select all that apply)
Output (export/standalone)
### Which stage(s) are affected? (Select all that apply)
next build (local)
### Additional context
Structuring multiple sites on a single Next.js code base gets tough with the current behavior.
Also, some sites are regenerated daily without a specific set of routes provided in the code (the dynamic routes are loaded from a CMS), so days when a dynamic route is empty are highly likely.
This issue could be fixed by changing the `hasGenerateStaticParams` condition to allow empty arrays (it currently checks the length).
Breadcrumbs: `next > src > build > index.ts > Line 2140 `
**I'm unfamiliar with the implications of this condition on other export methods, take it only as a suggestion**
I would be happy to get a workaround without editing the code for each build. | bug,Output (export/standalone) | low | Critical |
2,469,033,323 | TypeScript | Allow type annotations in .js files in preparation for Type Annotations proposal | ### 🔍 Search Terms
allow js file extension
### ✅ Viability Checklist
- [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [X] This wouldn't change the runtime behavior of existing JavaScript code
- [X] This could be implemented without emitting different JS based on the types of the expressions
- [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [X] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [X] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
Add an option to allow removing the error "[some feature] can only be used in TypeScript files"
Previous discussion: #10939
The compiler option would allow consuming `.js` files as if they were `.ts` files, but with some differences (see below). All type-checking, type-stripping, and code emitting features would remain as-is.
### 📃 Motivating Example
The [Type Annotations proposal](https://github.com/tc39/proposal-type-annotations) is the definite future of JavaScript.
This feature would:
* Enable incremental adoption of the Type Annotations proposal by users
* Enable incremental support of Type Annotations in the TypeScript compiler
* Possibly help push that proposal forward a little more quickly by increased usage
* Spread word that this proposal exists by gaining compiler support in tsc and its release notes
It's true that the exact syntax isn't definite yet, and it will probably be at least somewhat different than current TypeScript syntax.
However, that's exactly why adding initial support for plain TypeScript would not interfere with its eventually sealed syntax.
In other words, the initial support would simply parse it as "typed JavaScript"; the TypedJavaScript parser would, for the time being, be exactly the same as the TypeScript parser, allowing gradual differentiation later.
It would also be a good opportunity to allow TypedJavaScript mode to disallow namespaces, enums, constructor fields, etc.
### 💻 Use Cases
1. What do you want to use this for?
   Writing future-ready JavaScript.
2. What shortcomings exist with current approaches?
   It doesn't take into account the pending type annotation proposal.
3. What workarounds are you using in the meantime?
   I can *almost* get this working with some VS Code hacks, but then `tsc` complains as soon as I run it.
| Suggestion,Awaiting More Feedback | low | Critical |
2,469,039,776 | yt-dlp | Make --concat-playlist work with playlists that have mixed subtitles | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a feature unrelated to a specific site
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
When using for example this command
`yt-dlp https://www.youtube.com/playlist?list=PL2BytBFJP1lWUNUYEwD9i5oQ4LicAntSu --embed-sub --sub-lang en --concat-playlist always `
it fails with "ERROR: The files have different streams/codecs and cannot be concatenated. Either select different formats or --recode-video them to a common format"
The problem is that the Intro and Credits in this playlist don't contain a subtitle stream.
It would be nice if yt-dlp identified this problem before concatenating and automatically added empty dummy subtitle streams to the input files, so that --concat-playlist can succeed.
Fixing this manually afterwards, even with ffmpeg, gives me a lot of headaches.
Using ffmpeg's concat directly also fails for the same reason.
So far I haven't found a way to bring those files together with embedded subtitles; a rough manual workaround is sketched below.
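A minimal sketch of the dummy-subtitle workaround (the file names and the Matroska/SRT choice are my assumptions for illustration): give every input that lacks a subtitle stream a near-empty subtitle track before concatenating, so all inputs have matching stream layouts.

```python
# Sketch: add a near-empty SRT track to the files without subtitles
# (assumption: mkv containers; adjust -c:s for other containers).
import subprocess

with open("dummy.srt", "w", encoding="utf-8") as f:
    f.write("1\n00:00:00,000 --> 00:00:00,001\n\u00a0\n")  # one blank cue

for name in ("intro", "credits"):  # hypothetical files lacking subtitles
    subprocess.run([
        "ffmpeg", "-i", f"{name}.mkv", "-i", "dummy.srt",
        "-map", "0", "-map", "1",     # keep original streams, add the dummy
        "-c", "copy", "-c:s", "srt",  # copy audio/video, mux the SRT
        f"{name}.subbed.mkv",
    ], check=True)
```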
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[EmbedSubtitle] There aren't any subtitles to embed
ERROR: The files have different streams/codecs and cannot be concatenated. Either select different formats or --recode-video them to a common format
```
| enhancement,triage,core:post-processor | low | Critical |
2,469,046,801 | godot | Windows ARM64 version of Godot 4.3 duplicates UI when selecting Renderer on new project | ### Tested versions
Reproducible in any Windows ARM64 version of version 4.3
### System information
Windows 11 ARM64, Snapdragon X Elite with 15.6" OLED with 2880 x 1620
### Issue description
UI duplicates and shifts UI hit points in the project selection UI.
It's very hard to select anything else when it's in this state.
Resizing the window will force a UI refresh and it will fix the issue.
### Steps to reproduce
1. Start Godot
2. Click the "+Create" button
3. Select "Mobile" Renderer
This is what it looks like.

### Minimal reproduction project (MRP)
Doesn't apply. | bug,platform:windows,topic:rendering,topic:porting | low | Major |
2,469,055,602 | ollama | cuda error out of memory | ### What is the issue?
Hello Team,
Below is the attached server log. I am trying to run llama3.1 70B on a
5700X, 23 GB RAM, and a P100 16 GB.
The model loads successfully, but within seconds of sending the prompt, I receive the error:
"_Error: error reading llm response: read tcp 127.0.0.1:49245->127.0.0.1:49210: wsarecv: An existing connection was forcibly closed by the remote host._"
I have set OLLAMA_MAX_VRAM in the environment variables, but it does not appear in the server logs below.
The regular-sized llama3.1 works fine; anything larger results in the same error.
```
2024/08/16 07:56:25 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://127.0.0.1:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:C:\\Users\\Dummy\\.ollama\\models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR:C:\\Users\\Dummy\\AppData\\Local\\Programs\\Ollama\\ollama_runners OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-08-16T07:56:25.534+10:00 level=INFO source=images.go:782 msg="total blobs: 35"
time=2024-08-16T07:56:25.537+10:00 level=INFO source=images.go:790 msg="total unused blobs removed: 0"
time=2024-08-16T07:56:25.539+10:00 level=INFO source=routes.go:1172 msg="Listening on 127.0.0.1:11434 (version 0.3.6)"
time=2024-08-16T07:56:25.540+10:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [rocm_v6.1 cpu cpu_avx cpu_avx2 cuda_v11.3]"
time=2024-08-16T07:56:25.540+10:00 level=INFO source=gpu.go:204 msg="looking for compatible GPUs"
time=2024-08-16T07:56:25.692+10:00 level=INFO source=gpu.go:288 msg="detected OS VRAM overhead" id=GPU-1c56ec58-85cd-2097-8b24-bca0994cb6a5 library=cuda compute=6.0 driver=12.4 name="Tesla P100-PCIE-16GB" overhead="254.6 MiB"
time=2024-08-16T07:56:25.693+10:00 level=INFO source=types.go:105 msg="inference compute" id=GPU-1c56ec58-85cd-2097-8b24-bca0994cb6a5 library=cuda compute=6.0 driver=12.4 name="Tesla P100-PCIE-16GB" total="15.9 GiB" available="15.6 GiB"
[GIN] 2024/08/16 - 07:56:25 | 200 | 0s | 127.0.0.1 | HEAD "/"
[GIN] 2024/08/16 - 07:56:25 | 200 | 18.1911ms | 127.0.0.1 | POST "/api/show"
time=2024-08-16T07:56:26.010+10:00 level=INFO source=memory.go:309 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=29 layers.split="" memory.available="[15.6 GiB]" memory.required.full="39.3 GiB" memory.required.partial="15.2 GiB" memory.required.kv="640.0 MiB" memory.required.allocations="[15.2 GiB]" memory.weights.total="36.5 GiB" memory.weights.repeating="35.7 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="324.0 MiB" memory.graph.partial="1.1 GiB"
time=2024-08-16T07:56:26.022+10:00 level=INFO source=server.go:393 msg="starting llama server" cmd="C:\\Users\\Dummy\\AppData\\Local\\Programs\\Ollama\\ollama_runners\\cuda_v11.3\\ollama_llama_server.exe --model C:\\Users\\Dummy\\.ollama\\models\\blobs\\sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574 --ctx-size 2048 --batch-size 512 --embedding --log-disable --n-gpu-layers 29 --no-mmap --parallel 1 --port 49305"
time=2024-08-16T07:56:26.026+10:00 level=INFO source=sched.go:445 msg="loaded runners" count=1
time=2024-08-16T07:56:26.026+10:00 level=INFO source=server.go:593 msg="waiting for llama runner to start responding"
time=2024-08-16T07:56:26.026+10:00 level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server error"
INFO [wmain] build info | build=3535 commit="1e6f6554" tid="20688" timestamp=1723758986
INFO [wmain] system info | n_threads=8 n_threads_batch=-1 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="20688" timestamp=1723758986 total_threads=16
INFO [wmain] HTTP server listening | hostname="127.0.0.1" n_threads_http="15" port="49305" tid="20688" timestamp=1723758986
llama_model_loader: loaded meta data with 29 key-value pairs and 724 tensors from C:\Users\Dummy\.ollama\models\blobs\sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Meta Llama 3.1 70B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Meta-Llama-3.1
llama_model_loader: - kv 5: general.size_label str = 70B
llama_model_loader: - kv 6: general.license str = llama3.1
llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv 9: llama.block_count u32 = 80
llama_model_loader: - kv 10: llama.context_length u32 = 131072
llama_model_loader: - kv 11: llama.embedding_length u32 = 8192
llama_model_loader: - kv 12: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 13: llama.attention.head_count u32 = 64
llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 17: general.file_type u32 = 2
llama_model_loader: - kv 18: llama.vocab_size u32 = 128256
llama_model_loader: - kv 19: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 20: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 21: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 22: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 24: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 25: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 27: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 28: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_0: 561 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-08-16T07:56:26.287+10:00 level=INFO source=server.go:627 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 37.22 GiB (4.53 BPW)
llm_load_print_meta: general.name = Meta Llama 3.1 70B Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: Tesla P100-PCIE-16GB, compute capability 6.0, VMM: no
llm_load_tensors: ggml ctx size = 0.68 MiB
llm_load_tensors: offloading 29 repeating layers to GPU
llm_load_tensors: offloaded 29/81 layers to GPU
llm_load_tensors: CUDA_Host buffer size = 24797.81 MiB
llm_load_tensors: CUDA0 buffer size = 13312.82 MiB
```
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.6 | bug,nvidia,memory | low | Critical |
2,469,058,824 | pytorch | Very large memory increase when combining bfloat16 autocast with torch.compile | ### 🐛 Describe the bug
Use of autocast causes a slight, unexpected but tolerable increase in memory; the combination of autocast with torch.compile, however, causes a very large increase (8x with a 2048x2048 F.linear, and it continues to grow as dimensions increase). Strangely, it can be attenuated by using a broken-down version of F.linear. I tested with bfloat16, but the results are the same with float16 autocast.
```
import torch
import torch.nn as nn

device = 'cuda'
torch.set_float32_matmul_precision('high')  # has no effect

tensor = torch.rand((50, 64, 2048), device=device)
add_req_grad = torch.tensor([1.0], requires_grad=True).to(device)
fc = nn.Linear(2048, 2048).to(device)
tensor += add_req_grad
outputs = []
for i in range(tensor.size(1)):
    input = tensor[:, i, :]
    out = fc(input)
    outputs.append(out)
print(f"compile: False, autocast: False, modify input: True, mem allocated: {torch.cuda.memory_allocated()/1e9} GB")
print()
torch.cuda.empty_cache()

tensor = torch.rand((50, 64, 2048), device=device)
add_req_grad = torch.tensor([1.0], requires_grad=True).to(device)
fc = nn.Linear(2048, 2048).to(device)
fc = torch.compile(fc)
tensor += add_req_grad
outputs = []
for i in range(tensor.size(1)):
    input = tensor[:, i, :]
    out = fc(input)
    outputs.append(out)
print(f"compile: True, autocast: False, modify input: True, mem allocated: {torch.cuda.memory_allocated()/1e9} GB")
print()
torch.cuda.empty_cache()

tensor = torch.rand((50, 64, 2048), device=device)
add_req_grad = torch.tensor([1.0], requires_grad=True).to(device)
fc = nn.Linear(2048, 2048).to(device)
fc = torch.compile(fc)
with torch.autocast(device_type=device, dtype=torch.bfloat16):
    tensor += add_req_grad
    outputs = []
    for i in range(tensor.size(1)):
        input = tensor[:, i, :]
        out = fc(input)
        outputs.append(out)
print(f"compile: True, autocast: True, modify input: True, mem allocated: {torch.cuda.memory_allocated()/1e9} GB")
print()
torch.cuda.empty_cache()

tensor = torch.rand((50, 64, 2048), device=device)
add_req_grad = torch.tensor([1.0], requires_grad=True).to(device)
fc = nn.Linear(2048, 2048).to(device)
with torch.autocast(device_type=device, dtype=torch.bfloat16):
    tensor += add_req_grad
    outputs = []
    for i in range(tensor.size(1)):
        input = tensor[:, i, :]
        out = fc(input)
        outputs.append(out)
print(f"compile: False, autocast: True, modify input: True, mem allocated: {torch.cuda.memory_allocated()/1e9} GB")
print()
torch.cuda.empty_cache()

tensor = torch.rand((50, 64, 2048), device=device)
fc = nn.Linear(2048, 2048).to(device)
fc = torch.compile(fc)
with torch.autocast(device_type=device, dtype=torch.bfloat16):
    outputs = []
    for i in range(tensor.size(1)):
        input = tensor[:, i, :]
        out = fc(input)
        outputs.append(out)
print(f"compile: True, autocast: True, modify input: False, mem allocated: {torch.cuda.memory_allocated()/1e9} GB")
print()
torch.cuda.empty_cache()

def CustomFLinear(base_activations, weights, bias=None):
    weights = weights.unsqueeze(0)
    intermediate_values = base_activations.unsqueeze(1) * weights
    output_activations = intermediate_values.sum(dim=2)
    if bias is not None:
        output_activations = output_activations + bias
    return output_activations

class CustomLinear(nn.Module):
    def __init__(self, in_features, out_features, bias=True, device='cuda', dtype=None):
        self.factory_kwargs = {'device': device, 'dtype': dtype}
        super(CustomLinear, self).__init__()
        self.in_features = in_features
        self.out_features = out_features
        self.weight = nn.Parameter(torch.empty((out_features, in_features), **self.factory_kwargs))
        if bias:
            self.bias = torch.empty(out_features, **self.factory_kwargs)
        else:
            self.register_parameter('bias', None)

    def forward(self, input):
        out = CustomFLinear(input, self.weight, self.bias)
        return out

tensor = torch.rand((50, 64, 2048), device=device)
add_req_grad = torch.tensor([1.0], requires_grad=True).to(device)
fc = CustomLinear(2048, 2048).to(device)
fc = torch.compile(fc)
with torch.autocast(device_type=device, dtype=torch.bfloat16):
    tensor += add_req_grad
    outputs = []
    for i in range(tensor.size(1)):
        input = tensor[:, i, :]
        out = fc(input)
        outputs.append(out)
print('** Custom/Manual linear used **')
print(f"compile: True, autocast: True, modify input: True, mem allocated: {torch.cuda.memory_allocated()/1e9} GB")
print()
torch.cuda.empty_cache()
```
Output is:
```
compile: False, autocast: False, modify input: True, mem allocated: 0.078782976 GB
compile: True, autocast: False, modify input: True, mem allocated: 0.078782976 GB
compile: True, autocast: True, modify input: True, mem allocated: 0.632439296 GB
compile: False, autocast: True, modify input: True, mem allocated: 0.1207424 GB
compile: True, autocast: True, modify input: False, mem allocated: 0.112353792 GB
** Custom/Manual linear used **
compile: True, autocast: True, modify input: True, mem allocated: 0.1291392 GB
```
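As an aside, a small measurement helper like the one below (my addition, not part of the original repro) makes the comparison less sensitive to allocator caching by resetting and reading the peak statistics around each run:

```python
import torch

def peak_mem_gb(fn, *args):
    """Run fn(*args) once and report peak CUDA memory in GB."""
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    fn(*args)
    torch.cuda.synchronize()  # make sure all kernels have finished
    return torch.cuda.max_memory_allocated() / 1e9
```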
### Error logs
_No response_
### Minified repro
_No response_
### Versions
2.3.1
cc @mcarilli @ptrblck @leslie-fang-intel @jgong5 @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | good first issue,module: memory usage,triaged,module: amp (automated mixed precision),oncall: pt2,module: inductor | low | Critical |
2,469,066,221 | vscode | Allow multi-select for REPL / interactive window |
Currently there is multi-select enabled for VS Code notebooks:
<img width="285" alt="Screenshot 2024-08-15 at 3 09 06 PM" src="https://github.com/user-attachments/assets/98cbaa56-5b8a-4330-9105-50e222019b8d">
It would be great to have this in the REPL / interactive window as well; it would help boost developer productivity. Right now, it is not supported:
<img width="214" alt="Screenshot 2024-08-15 at 3 09 24 PM" src="https://github.com/user-attachments/assets/a60bef1e-eafc-4e82-bef1-66cc47b1d31c">
| feature-request,interactive-window | low | Minor |
2,469,103,995 | godot | Recognized Godot version when capturing in capture software (Steelseries GG) is wrong | ### Tested versions
- Reproducible in 4.3-stable_mono_win64
### System information
Windows 11 - 4.3-stable_mono_win64 - Vulkan (Forward+)
### Issue description
When I record/capture the Godot window in Steelseries GG, it still says I'm capturing Godot_v4.2.2-stable_mono_win64 even though I am using 4.3-stable_mono_win64.
I'm not sure if this is a general issue with how the Godot version is reported to and recognized by such software.
### Steps to reproduce
Open Godot and check Steelseries GG software "Game detection".
### Minimal reproduction project (MRP)
You can use any project.
Images:



| platform:windows,discussion,topic:thirdparty,needs testing | low | Minor |
2,469,107,042 | godot | Generated Visual Studio project can only build "Editor x64" configuration | ### Tested versions
- Reproducible in 4.3
- Not reproducible in 4.2
### System information
Windows 10, Visual Studio 2022 Community
### Issue description
The solution and project files generated using the `scons vsproj=yes` are only able to build the "editor" configuration on x64 architecture. Any other configuration ("template_debug", "template_release") and x86 are unable to build the program using IDE.
### Steps to reproduce
1. Open terminal of your choice in the root directory of the Godot 4.3 source code
2. Execute `scons vsproj=yes` and wait
3. Open the godot.sln generated file using Visual Studio
4. Notice that you are able to build the project using default settings
5. Change the configuration to "template_debug" and try to build it
6. Notice the following warning in the log: "warning MSB8005: The property 'NMakeBuildCommandLine' is not defined. Skipping..."
7. Notice that only godot.windows.editor.x86_64.generated.props was generated (I'm assuming the files for the other valid platforms/configurations should also be generated, as they contain the required NMake commands)
### Minimal reproduction project (MRP)
N/A | bug,platform:windows,topic:buildsystem | low | Critical |
2,469,109,161 | flutter | Make Stroke Text a First Class Feature of `Text`/`TextStyle` | ### Use case
I have text displayed over graphics (.svgs, textures, other widgets) in many places in my app. To ensure that the text remains readable despite the multi-colored background, I make it white with a black stroke:

(here the bars fill up and as they fill the text goes from having a black background to having a bright colorful background)
As far as I can tell, this is the best/only way to do that in Flutter right now:
```dart
final String text = 'Woooooh look at me, I\'m stroked text!';
return Stack(
  children: [
    Text(
      text,
      style: TextStyle(
        foreground: Paint()
          ..style = PaintingStyle.stroke
          ..strokeWidth = 5 // Width needs to be double the intended weight...
          ..color = Colors.black,
      ),
    ),
    Text(
      text,
      style: TextStyle(
        color: Colors.white,
        // You need to strip out the shadows, if present, as they'll show up against the stroke.
        shadows: [],
      ),
    ),
  ],
);
```
This isn't great because it's:
1. Verbose/sloppy
2. Not very discoverable (each developer has to discover and implement this hack themselves)
3. Provides limited control for stroke alignment, etc
4. Messes up semantic info unless you think to explicitly exclude the duplicate widget from semantics
Similar issues:
https://github.com/flutter/flutter/issues/24108 (same request, but closed due to uncertainty about implementation; stale enough that I think this is worth revisiting)
https://github.com/flutter/flutter/issues/137064 (the exact ask in that issue is unclear, as several unrelated issues were raised, but it now looks focused on handling fonts with multiple paths per glyph)
### Proposal
Add explicit fields to `TextStyle` for specifying stroke.
I'm not super opinionated on the exact solution, but here is one way this could be implemented:
```dart
return Text(
  'Woooooh look at me, I\'m stroked text!',
  style: TextStyle(
    color: Colors.white,
    // A single `BorderSide` (not `Border`, as there isn't a coherent definition of multiple sides for text).
    border: BorderSide(
      color: Colors.black,
      width: 2.5,
      style: BorderStyle.solid,
      strokeAlign: BorderSide.strokeAlignOutside,
    ),
  ),
);
```
The advantages are that it reuses core primitives from other parts of Flutter and gives the developer a lot of control over the stroke behavior. The disadvantage is it'd require supporting all the different border side features out of the gate which may be a significant technical lift.
Alternatively, you could go with a special-case approach that exposes only the features the team wants to support as attributes on `TextStyle` itself (color and width?) | c: new feature,framework,c: proposal,P3,team-framework,triaged-framework | low | Major |
2,469,144,840 | rust | Type checking bug. Trait appears to use inferred type from bounds rather than specified associated type |
I tried this code:
https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=9b8e4a4b196fdb3320c694d180649731
I expected to see this happen: `borrow` should be known to return `()`, as it is specified to return an associated type that is declared just three lines earlier.
Instead, this happened: a compiler error citing an equivalent type, saying it isn't known to be `()`.
### Meta
`rustc --version --verbose`:
```
rustc 1.81.0-nightly (6e2780775 2024-07-05)
binary: rustc
commit-hash: 6e2780775f5cea9328d37f4b8d0ee79db0056267
commit-date: 2024-07-05
host: x86_64-unknown-linux-gnu
release: 1.81.0-nightly
LLVM version: 18.1.7
```
| C-bug,T-types | low | Critical |
2,469,153,088 | godot | TileMap Terrains do not show all of the selected options that were selected for that particular terrain | ### Tested versions
-Reproducible - 4.2.2.stable
-Reproducible - 4.3. stable
### System information
Windows 10 - Godot 4.3
### Issue description
After setting up TileSet tiles with terrains, TileMap -> Terrains does not show all of the tiles that were assigned to that particular terrain. Do they still register? Yes. The issue is minor but annoying; I just want the whole list of tiles that contain that terrain.
Two main issues:
1. The list is missing some tiles.
- A solution: show all the tiles, repeated or not, that contain that terrain type in all the Terrain lists.
2. The list shows tiles that do not contain the terrain in question.
- A solution: show only the tiles whose center is the terrain in question, not the corners or sides.
### Steps to reproduce
Create a TileMapLayer node and set a TileSet. In the TileSet, add an element and add a couple of terrains. Under the TileSet, add a tile map and assign the terrains for each terrain type. When switching over to TileMap under Terrains, the list shows some but not all of the tiles that contain the selected terrain. The engine still counts the tile and still spawns it.
### Minimal reproduction project (MRP)
[TileMapLayer-Issue.zip](https://github.com/user-attachments/files/16631205/TileMapLayer-Issue.zip)
| bug,topic:editor,topic:2d | low | Minor |
2,469,168,348 | transformers | Q-GaLore Support | ### Feature request
Add support for https://github.com/VITA-Group/Q-GaLore (https://arxiv.org/abs/2407.08296)
### Motivation
Q-GaLore allows more memory-efficient training by combining GaLore-style low-rank gradient projection with quantization.
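For context, here is a sketch of how this could surface through the existing GaLore integration in `TrainingArguments`; the `"q_galore_adamw"` value is hypothetical (it is what this request asks for), while `"galore_adamw"` and `optim_target_modules` already exist:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    optim="galore_adamw",          # existing GaLore optimizer (needs galore-torch installed)
    # optim="q_galore_adamw",      # hypothetical value this feature request asks for
    optim_target_modules=["attn", "mlp"],  # modules whose gradients get projected
)
```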
### Your contribution
N/A | Feature request,optimization | low | Minor |
2,469,178,744 | rust | Is anyone running our gdb-based Android debuginfo tests? | This is mostly directed at @chriswailes, @maurer, and @mgeisler because you are the target maintainers for our Android targets.
I've been looking through our debuginfo tests, and I've found that a few of them were disabled years ago and never revisited. Some of those tests are disabled only on Android. Based on the history linked from https://github.com/rust-lang/rust/issues/10381, the last attempt to re-enable some of the mass-ignored Android debuginfo tests was in 2015. So I'm not expecting any of you to really know the history here.
But currently (and I do not know for how long) we do not run _any_ debuginfo tests on Android: https://github.com/rust-lang-ci/rust/actions/runs/10409799292/job/28830241725#step:25:23109
It seems like this is because `compiletest` looks for a `gdb`: https://github.com/rust-lang/rust/blob/0f442e265c165c0a78633bef98de18517815150c/src/tools/compiletest/src/lib.rs#L1038-L1085 but since `compiletest` doesn't find one, we skip the entire suite.
There is a fair amount of code in compiletest for handling debuginfo tests on Android. Since none of any of the code associated with running debuginfo tests on Android is currently being exercised, I suspect all of that code is just a time bomb for whoever eventually decides that we need those tests to be enabled.
In https://github.com/rust-lang/rust/pull/128913 I am removing a number of `ignore-test` annotations from the test suite, and replacing them with more specific ignores where required. Because the entire suite is ignored for Android, I cannot do this.
So I have a few questions:
* Is anyone running the debuginfo test suite on Android?
* Would anyone object if I blindly remove all the other `ignore-android` annotations from the test suite?
* Is anyone planning on working to re-enable the Android debuginfo tests? I've poked around, and it uses a bunch of Android SDK/emulator arcana that I do not understand. For example, I tried installing `gdb-multiarch` in the image, which results in the test suite managing to execute `arm-linux-androideabi-gdb` (which I cannot find in the container) but if I add `set architecture` to the gdb commands, the output is just i686/x86_64.
* In my searching around for explanations about how to debug on Android, I found some mentions that one should be using lldb instead. Is that right? | A-testsuite,A-debuginfo,O-android,T-compiler,C-discussion | low | Critical |
2,469,194,911 | godot | BaseMaterial3D Shadow to Opacity does not work as expected with transparency modes other than Alpha | - *Related to https://github.com/godotengine/godot/issues/91496.*
### Tested versions
- Reproducible in: 4.3.rc 4359c28fe
### System information
Godot v4.3.rc (4359c28fe) - Fedora Linux 40 (KDE Plasma) - X11 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4090 (nvidia; 555.58.02) - 13th Gen Intel(R) Core(TM) i9-13900K (32 Threads)
### Issue description
BaseMaterial3D Shadow to Opacity does not work as expected with transparency modes other than Alpha.
I would expect the shadow opacity to affect the alpha scissor threshold (and ideally not force alpha transparency if alpha scissor transparency is chosen, so you can avoid transparency sorting issues inherent to alpha transparency).
The same goes for alpha hash, although I don't expect this one to be needed as often in this scenario.
I've only tested this in Forward+ so far, so I don't know if this applies to Mobile or Compatibility.
### Transparency = Alpha (or Disabled, as alpha is forced here)

### Transparency = Alpha Scissor
*I would expect this mode to look like Alpha, but with a hard edge.*

### Transparency = Alpha Hash
*I would expect this mode to look like Alpha, but with dithered edges.*

### Transparency = Depth Prepass
*I would expect this mode to look like Alpha, but with fully opaque areas being drawn as opaque to reduce the visibility of transparency sorting issues.*

With an alpha albedo texture applied:
### Transparency = Alpha

### Transparency = Alpha Scissor (threshold = 0.2)

### Transparency = Alpha Hash

### Transparency = Depth Prepass

### Steps to reproduce
- Add a DirectionalLight3D node with shadows enabled.
- Add a MeshInstance3D with a BoxMesh (to use as a shadow caster).
- Add a second MeshInstance3D with a PlaneMesh (to use as a shadow receiver). Add a StandardMaterial3D to it, enable **Shadow to Opacity.** In this mode, opacity is determined by how much shadow is received (fully shaded areas are opaque).
- Change the plane's material transparency mode to **Alpha Scissor**, **Alpha Hash** or **Opaque Prepass**, and notice the transparency no longer working as expected.
### Minimal reproduction project (MRP)
[Non-Overlapping Transprancy 2.zip](https://github.com/user-attachments/files/16627725/Non-Overlapping.Transprancy.2.zip) | bug,topic:rendering,topic:3d | low | Minor |
2,469,231,221 | PowerToys | Win+H remapping forces the user to hold the new shortcut | ### Microsoft PowerToys version
0.83.0
### Installation method
Microsoft Store
### Running as admin
None
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
Remap the Speech Recognition (Win+H) shortcut to anything else
Open any program that has text typing (ex. notepad)
Use the new shortcut by pressing it (not holding it)
It should turn on and off in an instant

### ✔️ Expected Behavior
After pressing the shortcut, Speech Recognition should wait for the user to make some input for a few seconds
### ❌ Actual Behavior
Speech Recognition instantly turns off right after the user stops holding the new shortcut
https://github.com/user-attachments/assets/837d7cb2-4624-415c-819c-4e95372ec85d
### Other Software
_No response_ | Issue-Bug,Product-Keyboard Shortcut Manager,Needs-Triage | low | Minor |
2,469,240,076 | electron | [Feature Request]: Support all InputEvent types with webContents.sendInputEvent() | ### Preflight Checklist
- [X] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [X] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [X] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Problem Description
Currently `webContents.sendInputEvent()` only supports 3 `InputEvent` types, `MouseInputEvent`, `MouseWheelInputEvent` and `KeyboardInputEvent`, aka only 3 out of the 40 available (not counting `undefined`), and simply throws an exception if any of the 37 others are used: `Error: Invalid event object`
This severely limits the actual extent at which you can use `webContents.sendInputEvent()` to dispatch trusted input events in a `webContents`
### Proposed Solution
Support more `InputEvent` types in `webContents.sendInputEvent()` or provide another way to dispatch arbitrary trusted events to a `webContents` (I'm not aware of any way to do this without using very hacky solutions)
### Alternatives Considered
As stated, I'm not aware of any actual way to dispatch trusted events outside of `webContents.sendInputEvent()` (not considering very hacky solutions), and even if such a thing exists, I do still think `webContents.sendInputEvent()` should support most if not all `InputEvent` types.
It would be possible to simply dispatch untrusted events, but these, again, are untrusted.
### Additional Information
_No response_ | enhancement :sparkles: | low | Critical |
2,469,282,420 | ant-design | If the Table component uses many Tooltips, the table renders much more slowly than a normal table, and memory usage tends to accumulate and is not released in time | ### Reproduction link
[Stackblitz reproduction](https://stackblitz.com/edit/react-5megdp-oc85f6?file=demo.tsx)
### Steps to reproduce
Click the "show data" button to display 50 rows of data with about 25 columns. Manually toggle whether cells render a Tooltip; you can see that rendering is noticeably faster without the Tooltip.
### What is expected?
When there are many columns (around 25), rendering with Tooltips should be somewhat faster, and memory usage should be reduced.
### What is actually happening?
With Tooltips, rendering becomes slower and slower, and memory usage keeps increasing.
| Environment | Info |
| --- | --- |
| antd | 5.20.0 |
| React | 18.3.1 |
| System | macos |
| Browser | chrome最新版 |
---
In our project, when switching back and forth between this page and other pages, the task manager shows that memory usage keeps increasing and responsiveness gets slower and slower. Without Tooltips this problem does not occur.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | help wanted,Inactive,⚡️ Performance,unconfirmed | low | Major |
2,469,300,319 | pytorch | Axioms sometimes fail to apply to discharge GuardOnDataDependent error | ### 🐛 Describe the bug
Repro:
```
import torch
from torch._dynamo.comptime import comptime

@torch._dynamo.config.patch(do_not_emit_runtime_asserts=True, capture_scalar_outputs=True)
@torch.compile(dynamic=True, fullgraph=True, backend="eager")
def cf_printlocals(x):
    u5, u3 = x[2:].tolist()
    u6, *u10 = x.tolist()
    u4 = x[1].item()
    u9, u8, *u11 = x[:-1].tolist()
    torch._check(u3 != 1)
    torch._check(u5 != u6 + 2 * u4)
    torch._check_is_size(u6)
    torch._check_is_size(u4)
    torch._check_is_size(u5)
    torch._check((u6 + 2*u4) % u5 == 0)
    torch._check(u3 == (u6 + 2 * u4) // u5)
    comptime.print({
        "u5": u5,
        "u3": u3,
        "u6": u6,
        "u10": u10,
        "u4": u4,
        "u9": u9,
        "u8": u8,
        "u11": u11,
    })
    u2 = torch.randn(u5, u3)
    u0 = torch.zeros(u6)
    torch._check_is_size(u4)
    u1 = torch.zeros(u4 * 2)
    stk = torch.cat([u0, u1], dim=0)
    return torch.stack([stk, stk]).view(2, *u2.size())

cf_printlocals(torch.tensor([20, 2, 3, 8]))
```
We end up with a cursed log:
```
I0815 19:18:03.032000 2574157 torch/fx/experimental/symbolic_shapes.py:5221] [0/0] runtime_assert Eq(u1, ((u2 + 2*u6)//u0)) [guard added] at nn.py:17 in cf_printlocals (_dynamo/utils.py:2092 in r
un_node), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="Eq(u1, ((u2 + 2*u6)//u0))"
{'u5': u0, 'u3': u1, 'u6': u2, 'u10': (u3, u4, u5), 'u4': u6, 'u9': u7, 'u8': u8, 'u11': (u9)}
I0815 19:18:03.038000 2574157 torch/fx/experimental/symbolic_shapes.py:5221] [0/0] runtime_assert u1 >= 0 [guard added] at nn.py:28 in cf_printlocals (_prims_common/__init__.py:584 in validate_di
m_length), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="u1 >= 0"
I0815 19:18:03.163000 2574157 torch/fx/experimental/symbolic_shapes.py:5221] [0/0] runtime_assert Eq(2*u2 + 4*u6, 2*u0*u1) [guard added] at nn.py:33 in cf_printlocals (_prims_common/__init__.py:9
02 in infer_size), for more info run with TORCHDYNAMO_EXTENDED_DEBUG_GUARD_ADDED="Eq(2*u2 + 4*u6, 2*u0*u1)"
W0815 19:18:03.190000 2574157 torch/fx/experimental/symbolic_shapes.py:5239] [0/0] failed during evaluate_expr(Eq(u1, ((u2 + 2*u6)//u0)), hint=None, expect_rational=True, size_oblivious=True, for
cing_spec=False
```
In other words, we successfully emit a runtime assert for the expression, but then we fail to evaluate that same expression anyway. Spooky.
### Versions
main
cc @chauhang @penguinwu | triaged,oncall: pt2,module: dynamic shapes | low | Critical |
2,469,316,277 | rust | Missed optimization in if with constant branches | I assume the following two functions behave identically, and therefore should result in the same codegen. https://godbolt.org/z/vj9ornqbG
```rust
#[no_mangle]
pub fn cast(x: bool) -> f32 {
x as u8 as f32
}
#[no_mangle]
pub fn ifelse(x: bool) -> f32 {
if x { 1. } else { 0. }
}
```
However, `ifelse` fails to be optimised to a cast.
```asm
cast:
cvtsi2ss xmm0, edi
ret
.LCPI1_0:
.long 0x3f800000
ifelse:
test edi, edi
jne .LBB1_1
xorps xmm0, xmm0
ret
.LBB1_1:
movss xmm0, dword ptr [rip + .LCPI1_0]
ret
``` | A-LLVM,I-slow,C-optimization | low | Minor |
2,469,332,014 | pytorch | PythonMod vs Mod spookiness | ### 🐛 Describe the bug
```
import torch
import torch._dynamo

torch._dynamo.config.capture_scalar_outputs = True

@torch.compile(backend="eager", fullgraph=True)
def f(x):
    u0, u1 = x.tolist()
    torch._check(u0 % u1 == 0)
    torch._check_is_size(u0)
    torch._check_is_size(u1)
    if u0 % u1 == 0:
        return torch.tensor(True)
    else:
        return torch.tensor(False)

f(torch.tensor([3, 5]))
```
This fails with:
```
torch._dynamo.exc.UserError: Consider annotating your code using torch._check*(). Could not guard on data-dependent expression Eq(Mod(u0, u1), 0) (unhinted: Eq(Mod(u0, u1), 0)). (Size-like symbols: u0, u1)
Potential framework code culprit (scroll up for full backtrace):
File "/data/users/ezyang/c/pytorch/torch/_dynamo/variables/tensor.py", line 1103, in evaluate_expr
return guard_scalar(self.sym_num)
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0,u1"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
User Stack (most recent call last):
(snipped, see stack below for prefix)
File "/data/users/ezyang/c/pytorch/wz.py", line 12, in f
if u0 % u1 == 0:
```
But why?! We asserted on it? Well...
```
I0815 19:54:10.984000 2924862 torch/fx/experimental/symbolic_shapes.py:5198] [0/0] runtime_assert Eq(PythonMod(u0, u1), 0) [guard added] at wz.
```
At the time we guarded, we did not know u0/u1 were non-negative, so we generated an assert with PythonMod explicitly. But then, when we later learned they were non-negative, we forgot to further simplify PythonMod into Mod. Oops!
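If that analysis is right, a hedged workaround sketch (my assumption, not verified against the tracker) is to establish non-negativity before asserting on the modulus, so the assert gets recorded with Mod rather than PythonMod:

```python
import torch
import torch._dynamo

torch._dynamo.config.capture_scalar_outputs = True

@torch.compile(backend="eager", fullgraph=True)
def f(x):
    u0, u1 = x.tolist()
    # Assumption: establishing non-negativity *first* lets the ShapeEnv
    # record the modulus assert as Eq(Mod(u0, u1), 0) instead of PythonMod.
    torch._check_is_size(u0)
    torch._check_is_size(u1)
    torch._check(u0 % u1 == 0)
    if u0 % u1 == 0:
        return torch.tensor(True)
    else:
        return torch.tensor(False)

f(torch.tensor([10, 5]))  # inputs chosen so the check actually holds at runtime
```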
### Versions
main
cc @chauhang @penguinwu | triaged,oncall: pt2,module: dynamic shapes | low | Critical |
2,469,333,785 | pytorch | type hints mismatch method comment in torch.distributed.fsdp._exec_order_utils.py | ### 📚 The doc issue
# version
torch release 2.4
# affected file
torch.distributed.fsdp._exec_order_utils.py
# methods
```python
def get_handle_to_backward_prefetch(
    self,
    current_handle: FlatParamHandle,
) -> Optional[FlatParamHandle]:
    """
    Returns a :class:`list` of the handles keys of the handles to backward
    prefetch given the current handles key. If there are no valid handles
    keys to prefetch, then this returns an empty :class:`list`.
    """
    current_index = current_handle._post_forward_index
    if current_index is None:
        return None
    target_index = current_index - 1
    target_handle: Optional[FlatParamHandle] = None
    for _ in range(self._backward_prefetch_limit):
        if target_index < 0:
            break
        target_handle = self.handles_post_forward_order[target_index]
        target_index -= 1
    return target_handle

def get_handle_to_forward_prefetch(
    self,
    current_handle: FlatParamHandle,
) -> Optional[FlatParamHandle]:
    """
    Returns a :class:`list` of the handles keys of the handles to forward
    prefetch given the current handles key. If there are no valid handles
    keys to prefetch, then this returns an empty :class:`list`.
    """
    current_index = current_handle._pre_forward_order_index
    if current_index is None:
        return None
    target_index = current_index + 1
    target_handle: Optional[FlatParamHandle] = None
    for _ in range(self._forward_prefetch_limit):
        if target_index >= len(self.handles_pre_forward_order):
            break
        target_handle = self.handles_pre_forward_order[target_index]
        target_index += 1
    return target_handle
```
# issue
The return type `Optional[FlatParamHandle]` does not match the ":class:`list`" described in the docstring.
### Suggest a potential alternative/fix
# fix
Replace the ":class:`list`" references in the docstrings with the actual return type, i.e. document that the methods return a single handle or `None`.
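A hedged sketch of what the corrected docstring could look like (the wording is my suggestion, not taken from the PyTorch source; `FlatParamHandle` comes from the same module):

```python
from typing import Optional

def get_handle_to_backward_prefetch(
    self,
    current_handle: FlatParamHandle,
) -> Optional[FlatParamHandle]:
    """
    Returns the :class:`FlatParamHandle` to backward prefetch given the
    current handle, or ``None`` if there is no valid handle to prefetch.
    """
```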
# my confusion
Since FSDP1 explicitly sets the `backward_prefetch_limit = 1` and `forward_prefetch_limit = 1`:
```python
# torch/distributed/fsdp/fully_sharded_data_parallel.py
backward_prefetch_limit = 1
forward_prefetch_limit = 1
_init_core_state(
    self,
    sharding_strategy,
    mixed_precision,
    cpu_offload,
    limit_all_gathers,
    use_orig_params,
    backward_prefetch_limit,
    forward_prefetch_limit,
)

# torch/distributed/fsdp/_init_utils.py
state._exec_order_data = exec_order_utils._ExecOrderData(
    state._debug_level,
    backward_prefetch_limit,
    forward_prefetch_limit,
)

# torch/distributed/fsdp/_exec_order_utils.py
def get_handle_to_backward_prefetch(
    self,
    current_handle: FlatParamHandle,
) -> Optional[FlatParamHandle]:
    """
    Returns a :class:`list` of the handles keys of the handles to backward
    prefetch given the current handles key. If there are no valid handles
    keys to prefetch, then this returns an empty :class:`list`.
    """
    current_index = current_handle._post_forward_index
    if current_index is None:
        return None
    target_index = current_index - 1
    target_handle: Optional[FlatParamHandle] = None
    for _ in range(self._backward_prefetch_limit):
        if target_index < 0:
            break
        target_handle = self.handles_post_forward_order[target_index]
        target_index -= 1
    return target_handle

def get_handle_to_forward_prefetch(
    self,
    current_handle: FlatParamHandle,
) -> Optional[FlatParamHandle]:
    """
    Returns a :class:`list` of the handles keys of the handles to forward
    prefetch given the current handles key. If there are no valid handles
    keys to prefetch, then this returns an empty :class:`list`.
    """
    current_index = current_handle._pre_forward_order_index
    if current_index is None:
        return None
    target_index = current_index + 1
    target_handle: Optional[FlatParamHandle] = None
    for _ in range(self._forward_prefetch_limit):
        if target_index >= len(self.handles_pre_forward_order):
            break
        target_handle = self.handles_pre_forward_order[target_index]
        target_index += 1
    return target_handle
```
Why do we need the unshard limiter anyway?
cc @svekars @brycebortree @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @kwen2501 @chauhang | module: docs,triaged,module: fsdp | low | Critical |
2,469,351,487 | langchain | Syntax error, incorrect syntax near '{'."} for Azure Cosmos DB No SQL tutorial | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Following tutorial here https://python.langchain.com/v0.2/docs/integrations/vectorstores/azure_cosmos_db_no_sql/
After fixing the code as instructed by adding
```python
cosmos_database_properties={
    "id": Config.COSMOS_DB_NAME
}
```
Another issue occurs, and the message indicates a syntax error in the generated query.
### Error Message and Stack Trace (if applicable)
page_content='Direct Preference Optimization:
Your Language Model is Secretly a Reward Model
Rafael Rafailov∗†Archit Sharma∗†Eric Mitchell∗†
Stefano Ermon†‡Christopher D. Manning†Chelsea Finn†
†Stanford University‡CZ Biohub
{rafailov,architsh,eric.mitchell}@cs.stanford.edu
Abstract
While large-scale unsupervised language models (LMs) learn broad world knowl-
edge and some reasoning skills, achieving precise control of their behavior is
difficult due to the completely unsupervised nature of their training. Existing
methods for gaining such steerability collect human labels of the relative quality of
model generations and fine-tune the unsupervised LM to align with these prefer-
ences, often with reinforcement learning from human feedback (RLHF). However,
RLHF is a complex and often unstable procedure, first fitting a reward model that
reflects the human preferences, and then fine-tuning the large unsupervised LM
using reinforcement learning to maximize this estimated reward without drifting' metadata={'source': 'https://arxiv.org/pdf/2305.18290', 'page': 0}
Traceback (most recent call last):
File "/home/pii/adabit/proj/proj/src/db/cosmos_db_langchain.py", line 114, in <module>
results = vectorstore.similarity_search(query)
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/langchain_community/vectorstores/azure_cosmos_db_no_sql.py", line 338, in similarity_search
docs_and_scores = self.similarity_search_with_score(
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/langchain_community/vectorstores/azure_cosmos_db_no_sql.py", line 322, in similarity_search_with_score
docs_and_scores = self._similarity_search_with_score(
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/langchain_community/vectorstores/azure_cosmos_db_no_sql.py", line 298, in _similarity_search_with_score
items = list(
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/azure/core/paging.py", line 124, in __next__
return next(self._page_iterator)
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/azure/core/paging.py", line 76, in __next__
self._response = self._get_next(self.continuation_token)
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/azure/cosmos/_query_iterable.py", line 99, in _fetch_next
block = self._ex_context.fetch_next_block()
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/azure/cosmos/_execution_context/execution_dispatcher.py", line 110, in fetch_next_block
raise e
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/azure/cosmos/_execution_context/execution_dispatcher.py", line 102, in fetch_next_block
return self._execution_context.fetch_next_block()
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/azure/cosmos/_execution_context/base_execution_context.py", line 79, in fetch_next_block
self._ensure()
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/azure/cosmos/_execution_context/base_execution_context.py", line 64, in _ensure
results = self._fetch_next_block()
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/azure/cosmos/_execution_context/base_execution_context.py", line 175, in _fetch_next_block
return self._fetch_items_helper_with_retries(self._fetch_function)
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/azure/cosmos/_execution_context/base_execution_context.py", line 147, in _fetch_items_helper_with_retries
return _retry_utility.Execute(self._client, self._client._global_endpoint_manager, callback)
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/azure/cosmos/_retry_utility.py", line 87, in Execute
result = ExecuteFunction(function, *args, **kwargs)
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/azure/cosmos/_retry_utility.py", line 149, in ExecuteFunction
return function(*args, **kwargs)
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/azure/cosmos/_execution_context/base_execution_context.py", line 145, in callback
return self._fetch_items_helper_no_retries(fetch_function)
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/azure/cosmos/_execution_context/base_execution_context.py", line 126, in _fetch_items_helper_no_retries
(fetched_items, response_headers) = fetch_function(new_options)
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/azure/cosmos/_cosmos_client_connection.py", line 1065, in fetch_fn
return self.__QueryFeed(
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/azure/cosmos/_cosmos_client_connection.py", line 3092, in __QueryFeed
result, last_response_headers = self.__Post(path, request_params, query, req_headers, **kwargs)
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/azure/cosmos/_cosmos_client_connection.py", line 2811, in __Post
return synchronized_request.SynchronizedRequest(
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/azure/cosmos/_synchronized_request.py", line 204, in SynchronizedRequest
return _retry_utility.Execute(
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/azure/cosmos/_retry_utility.py", line 85, in Execute
result = ExecuteFunction(function, global_endpoint_manager, *args, **kwargs)
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/azure/cosmos/_retry_utility.py", line 149, in ExecuteFunction
return function(*args, **kwargs)
File "/home/pii/miniconda3/envs/proj/lib/python3.9/site-packages/azure/cosmos/_synchronized_request.py", line 155, in _Request
raise exceptions.CosmosHttpResponseError(message=data, response=response)
azure.cosmos.exceptions.CosmosHttpResponseError: (BadRequest) Message: {"errors":[{"severity":"Error","location":{"start":26,"end":27},"code":"SC1001","message":"Syntax error, incorrect syntax near '{'."}]}
ActivityId: 6f7cd23d-76b5-48c0-b125-5359559ef992, Microsoft.Azure.Documents.Common/2.14.0
Code: BadRequest
Message: Message: {"errors":[{"severity":"Error","location":{"start":26,"end":27},"code":"SC1001","message":"Syntax error, incorrect syntax near '{'."}]}
ActivityId: 6f7cd23d-76b5-48c0-b125-5359559ef992, Microsoft.Azure.Documents.Common/2.14.0
### Description
* I am trying to run the tutorial https://python.langchain.com/v0.2/docs/integrations/vectorstores/azure_cosmos_db_no_sql/
### System Info
System Information
------------------
> OS: Linux
> OS Version: #93~20.04.1-Ubuntu SMP Wed Sep 6 16:15:40 UTC 2023
> Python Version: 3.9.19 | packaged by conda-forge | (main, Mar 20 2024, 12:50:21)
[GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.2.25
> langchain: 0.2.12
> langchain_community: 0.2.11
> langsmith: 0.1.96
> langchain_nomic: 0.1.2
> langchain_openai: 0.1.20
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
> langgraph: 0.2.3 | Ɑ: vector store,🤖:bug | low | Critical |
2,469,368,937 | ollama | Significant Drop in Prompt Adherence in Updated Gemma2 Model | ### What is the issue?
I recently noticed that the Gemma2 model was updated 5 weeks ago, resulting in a new version of gemma2:9b-instruct-fp16:
- Older Version (6 weeks ago): gemma2:9b-instruct-fp16 - **9de55d4bf6ae** - 18 GB
- Updated Version (5 weeks ago): gemma2:9b-instruct-fp16 - **28e6684b0850** - 18 GB
After switching to the updated version 28e6684b0850, I've observed a significant decrease in the model's ability to adhere to prompts in my specific downstream tasks.
Could you please clarify why there are two different versions of gemma2:9b-instruct-fp16 and what changes were made between these versions? Should I revert to the older version 9de55d4bf6ae to maintain the previous performance?
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.2.8 | bug | low | Major |
2,469,416,145 | tauri | [docs] How to Embed Additional Files on iOS? | Thanks for the awesome project!
I tried to create an iOS app that ships with a default SQLite database. How can I embed additional files?
I followed `https://v2.tauri.app/develop/resources/` step by step, but I got an error when I added the permissions.
src-tauri/capabilities/main.json
```json
"fs:allow-read-text-file",
"fs:allow-resource-read-recursive"
```
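For context, a minimal sketch of what the complete capability file might look like; this is an assumption rather than a verified config, and the `fs:` permission identifiers require the `tauri-plugin-fs` plugin to be installed and registered:
```json
{
  "identifier": "main",
  "description": "hypothetical capability set for the main window",
  "windows": ["main"],
  "permissions": [
    "fs:allow-read-text-file",
    "fs:allow-resource-read-recursive"
  ]
}
```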
<img width="1748" alt="image" src="https://github.com/user-attachments/assets/b1a17642-0bf5-449c-8b0b-5aae5bcd817b">
| type: documentation | low | Critical |
2,469,439,760 | TypeScript | Variables (extracted from a discriminated union object) lose narrowed types after being exported | ### 🔎 Search Terms
discriminate union export destructure destructuring
### 🕗 Version & Regression Information
- This is the behavior in every version I tried (including the nightly one), and I reviewed the FAQ for entries about (export, destructure, discriminate, union).
### ⏯ Playground Link
https://www.typescriptlang.org/play/?#code/GYVwdgxgLglg9mABAQwhApgBygZSgJxjAHMB5MAGwE8AKAfQA8AuRAZwKOIEpEBvRAL4AoUJFgIUaLFAByIChQCCYACYBVVemBF0K8tXrNEYeRUQAfROBVadKnv2Ejw0eEgC2IKCGQKqAUTBkACMKXQAFZBh8AB4AFQA+GgA3XxB0FjiuFl4hRER0INDdFgJ0gBo8xFSKdJYZBDkFELD4hMqBCz4qwpaSxGBfVnRK-Jq641Mu61swXQ7u-JhgRBoTBUQAXm3qtPQLSxntOZUtnfH0Byr8-HRvfCR+XuKVFkGKYfLd2ozJjYEANxVTroD77XL5G53EAPPgFIphV6IMojb7pQRA-LCJwQBDsOHPREsfxfC4sABqgi2iE83l81ECfRUkWiNAARKw4O59uxCCQ0eg2VwgUJlqt-Fd8uSAYgAPSyxAAC3Qt0QMCgatYbA4JC+cAA1lVUBhsHg+WRKLRycKhCCwYtENK5Qrlar1Zq-mZDppjvNEAajVJsE0lKoNDZfXpLTRrUCnOgGJg4PgNbiwPingi-RcqZsaV4fH5GS8Wfh2ZzudrzQKhUJRSsaITdJKBTL5UqVft3TAtbzOF11l6rD67F8AO7oaIqQMm3A6i0GC7C52IfwAJTXpDXtoK9ohrZXrq7GqGcA9ff5lkH0xHJxn0hDynUt90+loS7bCvXm+3wiAA
### 💻 Code
```ts
function acceptStringOnly(_x: string) { }
function acceptNullAndUndefinedOnly(_x: null | undefined) { }
function mutuallyEnabledPair<T>(value: T): {
enabled: true,
value: NonNullable<T>,
} | {
enabled: false,
value: null | undefined,
} {
if (null === value || undefined === value) {
return { enabled: false, value: null };
} else {
return { enabled: true, value };
}
}
const { enabled: E, value: V } = mutuallyEnabledPair("some string value");
if (E) {
V; // here it is string, ok
acceptStringOnly(V);
} else {
V; // here it is null | undefined, ok
acceptNullAndUndefinedOnly(V);
}
export const { enabled, value } = mutuallyEnabledPair("some string value")
if (enabled) {
value; // here it is string | null | undefined, weird
acceptStringOnly(value); // ERROR
} else {
value; // here it also is string | null | undefined
acceptNullAndUndefinedOnly(value); // ERROR
}
```
### 🙁 Actual behavior
When variables are **destructured** from an object of a **discriminated union type** and then **exported**, those variables lose the narrowed types defined by the discriminated union. Meanwhile, if the variables are NOT exported, their narrowed types are preserved. I would expect narrowing to be preserved in both cases, whether the variables are exported or not.
### 🙂 Expected behavior
Preserve narrowed types for variables that are:
- **Destructured** from an object of a **discriminated union type**
- **Exported**.
### Additional information about the issue
I guess this issue may be more or less related to [this one](https://github.com/microsoft/TypeScript/issues/50139).
| Help Wanted,Possible Improvement | low | Critical |
2,469,464,619 | godot | Game is extremely laggy after updating to 4.3 | ### Tested versions
4.3 stable
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 960 (NVIDIA; 31.0.15.5244) - Intel(R) Core(TM) i5-4590 CPU @ 3.30GHz (4 Threads)
### Issue description
I updated from Godot 4.2.2 to 4.3 and it lags a lot when I open my game. The game was running perfectly fine but became unplayable after I updated.
I'm using forward+, my OS is Windows 10, it's a 3D game, the lag happens in every scene, and the only error I saw was the Godot Jolt plugin not being updated. I switched the physics engine back to the default and removed the plugin but there was still lag.
When I make a new project in 4.3 with a simple 3D scene there is zero lag.
I don't have any post processing in my scene and I have a default sky for my environment. My models are also simple so I don't really know what could be causing it.


### Steps to reproduce
Use 4.3 stable
### Minimal reproduction project (MRP)
I can't seem to reproduce it | bug,needs testing,regression,performance | low | Critical |
2,469,475,624 | godot | Using `duplicate()` on node that changes child node order during _init() causes scripts to be applied to the wrong nodes. | ### Tested versions
4.3 stable
### System information
Windows 11 version 10.0.22631 - Vulkan (Forward+) - dedicated AMD Radeon RX 7900 XTX (Advanced Micro Devices, Inc.; 31.0.22023.1014) - AMD Ryzen 7 7800X3D 8-Core Processor (16 Threads)
### Issue description
Given this hierarchy:
Node A
└─ Node B
   ├─ Node C
   └─ Node D
Given this circumstance:
- Node A duplicates Node B.
- Node B causes its child nodes to be reordered during `_init()` (either by adding an INTERNAL_MODE_FRONT child, or by re-ordering Node C and Node D)
- Node C has a script attached
The following occurs: Node C's script will be erroneously added to whichever child has index 0 (either a new node, or the re-ordered Node D), rather than Node C directly. This will throw an error when the node at index 0 has a different type than Node C.
### Steps to reproduce
Attempt to `duplicate()` a node which re-orders its children during `_init()`. Scripts will be applied to the nodes at the original child indices, before any reordering.
### Minimal reproduction project (MRP)
[minimal.zip](https://github.com/user-attachments/files/16633051/minimal.zip)
`duplicate()` is called inside the `_ready()` function of node.gd | bug,discussion,topic:core | low | Critical |
2,469,511,886 | rust | Decide about generics in arbitrary self types | Over [here](https://github.com/rust-lang/rust/issues/44874#issuecomment-2292369151), @adetaylor gave an update about arbitrary self types that included this bit:
> During the preparation of the RFC it was broadly felt that we should ban "generic" self types but we didn't really define what "generic" meant, and I didn't pin it down enough.
>
> Some of the commentary:
>
> * [Arbitrary self types v2 rfcs#3519 (comment)](https://github.com/rust-lang/rfcs/pull/3519#discussion_r1390267286)
> * [Arbitrary self types v2 rfcs#3519 (comment)](https://github.com/rust-lang/rfcs/pull/3519#discussion_r1435282566)
> * [Arbitrary self types v2 rfcs#3519 (comment)](https://github.com/rust-lang/rfcs/pull/3519#discussion_r1474221554)
> * [A zulip comment](https://rust-lang.zulipchat.com/#narrow/stream/213817-t-lang/topic/Arbitrary.20self.20types.20v2.20RFC/near/416086169)
> * I've a feeling there's more which I've been unable to find, including the initial impetus to ban generics from arbitrary self types.
>
> It seems to be widely felt that:
>
> ```rust
> impl SomeType {
> fn m<R: Deref<Target=Self>>(self: R) {}
> }
> ```
>
> would be confusing, but (per those comments) it's actually pretty hard to distinguish that from various legitimate cases for arbitrary self types with generic receivers:
>
> ```rust
> impl SomeType {
> fn method1(self: MyBox<Self, impl Allocator>) {}
> fn method2<const ID: u64>(self: ListArc<Self, ID>) {}
> }
> ```
>
> I played around with different tests here on the `self` type in `wfcheck.rs` but none of them yield the right sort of filter of good/bad cases (which is unsurprising since we haven't quite defined what that means, but I thought I might hit inspiration).
>
> From those comment threads, the most concrete proposal (from @joshtriplett) is:
>
> > just disallow the case where the top-level receiver type itself is not concrete.
>
> I plan to have a crack at that, unless folks have other thoughts. cc @Nadrieril who had opinions here.
>
> If we do this, I [might need to revisit the diagnostics mentioned in the bits of RFC _removed_ by this commit](https://github.com/rust-lang/rfcs/pull/3519/commits/fad04aee7e432acc31fb39464300debfa9abd244).
We never made a decision about this. Perhaps if we could offer more clarity, it would save @adetaylor some time here, so it may be worth us discussing.
@rustbot labels +T-lang +I-lang-nominated -needs-triage +C-discussion
cc @rust-lang/lang @adetaylor
| T-lang,C-discussion | medium | Major |
2,469,524,269 | stable-diffusion-webui | [Bug]: Error while using img2img | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
When using img2img, you cannot get the preview image, and sometimes the entire img2img does not work.
An error message appears in the upper left corner: Error: Unexpected token '<', "<html> <h"... is not valid JSON
### Steps to reproduce the problem
img2img
### What should have happened?
img2img works normally
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
Python 3.10.14 (main, Apr 6 2024, 18:45:05) [GCC 9.4.0]
Version: v1.8.0
Commit hash: bef51aed032c0aaa5cfd80445bc4cf0d85b408b5
Launching Web UI with arguments: --xformers --enable-insecure-extension-access --share --gradio-queue --no-half-vae --opt-channelslast --theme dark --no-gradio-queue
==============================================================================
You are running torch 2.0.1+cu118.
The program is tested to work with torch 2.1.2.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.
Use --skip-version-check commandline argument to disable this check.
==============================================================================
=================================================================================
You are running xformers 0.0.20.
The program is tested to work with xformers 0.0.23.post1.
To reinstall the desired version, run with commandline flag --reinstall-xformers.
Use --skip-version-check commandline argument to disable this check.
### Console logs
```Shell
!apt update
!apt -y install python3.10
!apt -y install libpython3.10-dev
!apt -y install build-essential
!apt -y install ffmpeg
!curl -sS https://bootstrap.pypa.io/get-pip.py | python3.10
!python3.10 -m pip install torch==2.0.1+cu118 torchvision==0.15.2+cu118 torchaudio==2.0.2+cu118 torchtext==0.15.2 torchdata==0.6.1 --extra-index-url https://download.pytorch.org/whl/cu118 -U
!python3.10 -m pip install xformers==0.0.20 triton==2.0.0 -U
!python3.10 -m pip install httpx==0.24.1
#!python3.10 -m pip install insightface -U
!python3.10 -m pip install matplotlib -U
!python3.10 -m pip install ipython -U
from IPython import get_ipython
get_ipython().run_line_magic('matplotlib', 'inline')
%cd /notebooks/stable-diffusion-webui
!python3.10 launch.py --xformers --enable-insecure-extension-access --share --gradio-queue --no-half-vae --opt-channelslast --theme dark --no-gradio-queue
```
### Additional information
_No response_ | bug-report | low | Critical |
2,469,545,712 | langchain | Certain parser types not available | The following code:
from langchain.output_parsers import PydanticOutputParser, YamlOutputParser,OutputFixingParser, RetryOutputParser, BaseOutputParser
from langchain_core.output_parsers import BaseOutputParser, BaseGenerationOutputParser, YamlOutputParser


YamlOutputParser is not available in the `langchain_core.output_parsers` package, and BaseOutputParser is not available in `langchain.output_parsers`.
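For reference, a sketch of the import split that should work, if I understand the langchain 0.2.x package layout correctly (an assumption, not verified against every release):
```python
# Sketch: YamlOutputParser and the fixing/retry parsers come from the
# `langchain` package, while the abstract base parsers come from `langchain_core`.
from langchain.output_parsers import (
    PydanticOutputParser,
    YamlOutputParser,
    OutputFixingParser,
    RetryOutputParser,
)
from langchain_core.output_parsers import BaseOutputParser, BaseGenerationOutputParser
```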
Using a Google Colab notebook with the following other packages & environment variables:
%pip install -qU langchain-openai
%pip install -U langsmith
%pip install "unstructured[md]"
!pip install -qU langchain-community
!pip install wikipedia
!pip install langchain_groq
!rm .langchain.db
%pip install bs4
import time
import os
import tiktoken
import openai
import json
from langchain.globals import set_llm_cache
from langchain_openai import OpenAI
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_tool_calling_agent, load_tools
from langchain_core.prompts import ChatPromptTemplate
from langchain.callbacks.manager import get_openai_callback
from typing import List
from langchain_core.messages import BaseMessage, ToolMessage
from langchain_core.language_models import BaseChatModel, SimpleChatModel
from langchain_core.messages import AIMessageChunk, BaseMessage, HumanMessage
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk,ChatResult, Generation
from langchain_core.runnables import run_in_executor
from langsmith.wrappers import wrap_openai
from langsmith import traceable
from langsmith import Client
from langsmith.evaluation import evaluate
from langchain_groq import ChatGroq
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.tools import tool
from langchain_core.rate_limiters import InMemoryRateLimiter
from langchain_core.messages import AIMessage, HumanMessage, ToolMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough, RunnableLambda, RunnableParallel, RunnableGenerator
from langchain_community.llms.llamafile import Llamafile
from langchain_core.messages import (
AIMessage,
HumanMessage,
SystemMessage,
trim_messages,
filter_messages,
merge_message_runs,
)
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_community.cache import SQLiteCache
from typing import Any, Dict, Iterator, List, Mapping, Optional, Iterable
from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM
from langchain_core.outputs import GenerationChunk
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field, validator
from langchain.output_parsers.json import SimpleJsonOutputParser
from langchain_core.exceptions import OutputParserException
from langchain_community.document_loaders import UnstructuredHTMLLoader, BSHTMLLoader,UnstructuredMarkdownLoader
from langchain_core.documents import Document
from pathlib import Path
from pprint import pprint
from langchain_community.document_loaders import JSONLoader
& environment variables:
os.environ["GROQ_API_KEY"] = "******************************************************************"
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "******************************************"
os.environ["TAVILY_API_KEY"] = "***********************************************"
os.environ["OPENAI_API_KEY"] = "*********************************************************"
| 🤖:bug,Ɑ: parsing | low | Major |
2,469,547,851 | yt-dlp | [Tiktok] friends-only video download returns HTTP Error 404 when passing proper cookies | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Brazil
### Provide a description that is worded well enough to be understood
Downloading a TikTok video from a friends-only user should work when passing cookies, but it returns "HTTP Error 404: Not Found"
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['--cookies', './cookie.txt', '-vU', 'https://www.tiktok.com/@juliette.cox/video/7324221393862708522']
[debug] Encodings: locale utf-8, fs utf-8, pref utf-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208]
[debug] Python 3.12.4 (CPython x86_64 64bit) - Linux-6.9.9-zen1-1-zen-x86_64-with-glibc2.39 (OpenSSL 3.3.1 4 Jun 2024, glibc 2.39)
[debug] exe versions: ffmpeg 7.0.1 (setts), ffprobe 7.0.1, rtmpdump 2.4
[debug] Optional libraries: certifi-2024.07.04, requests-2.32.3, sqlite3-3.46.0, urllib3-1.26.18, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1830 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.08.06 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.08.06 from yt-dlp/yt-dlp)
[TikTok] Extracting URL: https://www.tiktok.com/@juliette.cox/video/7324221393862708522
[TikTok] 7324221393862708522: Downloading webpage
[debug] [TikTok] Found universal data for rehydration
[debug] Extractor gave empty title. Creating a generic title
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] 7324221393862708522: Downloading 1 format(s): h264_540p_748270-1
[debug] Invoking http downloader on "https://api16-normal-c-useast2a.tiktokv.com/aweme/v1/play/?video_id=v12025gd0000co6fvavog65v7i3sqc0g&line=0&is_play_url=1&file_id=9db0c2aac0d248d6855d59f3e35a1571&item_id=7324221393862708522&signaturev3=dmlkZW9faWQ7ZmlsZV9pZDtpdGVtX2lkLjVjZGZjYTczNzU3Yzg4NTQwNzI5M2M3ZjgwZDJhMzVi&ply_type=3&shp=9e36835a&shcp=280c9438"
ERROR: unable to download video data: HTTP Error 404: Not Found
Traceback (most recent call last):
File "/usr/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 3483, in process_info
success, real_download = self.dl(temp_filename, info_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 3203, in dl
return fd.download(name, new_info, subtitle)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/yt_dlp/downloader/common.py", line 466, in download
ret = self.real_download(filename, info_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/yt_dlp/downloader/http.py", line 369, in real_download
establish_connection()
File "/usr/lib/python3.12/site-packages/yt_dlp/downloader/http.py", line 120, in establish_connection
ctx.data = self.ydl.urlopen(request)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 4165, in urlopen
return self._request_director.send(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/yt_dlp/networking/common.py", line 117, in send
response = handler.send(request)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/yt_dlp/networking/_helper.py", line 208, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/yt_dlp/networking/common.py", line 340, in send
return self._send(request)
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/yt_dlp/networking/_requests.py", line 365, in _send
raise HTTPError(res, redirect_loop=max_redirects_exceeded)
yt_dlp.networking.exceptions.HTTPError: HTTP Error 404: Not Found
```
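For reference, a minimal sketch of the equivalent invocation through the Python API (the options simply mirror the CLI flags above; this is not an additional repro):
```python
import yt_dlp

# Mirrors: yt-dlp --cookies ./cookie.txt -v <url>
opts = {
    "cookiefile": "./cookie.txt",
    "verbose": True,
}
with yt_dlp.YoutubeDL(opts) as ydl:
    ydl.download(["https://www.tiktok.com/@juliette.cox/video/7324221393862708522"])
```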
| account-needed,site-bug,triage | low | Critical |
2,469,550,795 | pytorch | [PixelShuffle] PixelShuffle doesn't support channels_last feature for 5D inputs | ### 🐛 Describe the bug
PixelShuffle doesn't support channels_last feature for 5D inputs.
Error message: RuntimeError: Unsupported memory format. Supports only ChannelsLast, Contiguous
UT to reproduce the issue:
```python
import torch
pixel_shuffle = torch.nn.PixelShuffle(2)
input = torch.randn(2, 2, 4, 4, 4)
input = input.to(memory_format=torch.channels_last_3d)
output = pixel_shuffle(input)
```
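As a workaround sketch (assuming the contiguous path is acceptable for the use case), converting back to a contiguous layout before the op avoids the error:
```python
import torch

pixel_shuffle = torch.nn.PixelShuffle(2)
input = torch.randn(2, 2, 4, 4, 4).to(memory_format=torch.channels_last_3d)

# Fall back to contiguous memory for the op itself, then restore
# channels_last_3d afterwards if the rest of the pipeline expects it.
output = pixel_shuffle(input.contiguous())
output = output.to(memory_format=torch.channels_last_3d)
```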
### Versions
numpy==1.26.4
torch==2.3.1
torch_tb_profiler==0.4.0
torchaudio==2.3.0
torchdata==0.7.1
torchtext==0.18.0a0
torchvision==0.18.1a0
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @jamesr66a | module: nn,triaged,module: memory format | low | Critical |
2,469,622,458 | vscode | When scrolling slowly focus moves to the wrong file | Does this issue occur when all extensions are disabled?: Yes/No
Version: 1.92.2
Commit: fee1edb8d6d72a0ddff41e5f71a671c23ed924b9
Date: 2024-08-14T17:29:30.058Z
Electron: 30.1.2
ElectronBuildId: 9870757
Chromium: 124.0.6367.243
Node.js: 20.14.0
V8: 12.4.254.20-electron.0
OS: Darwin arm64 23.6.0
Steps to Reproduce:
1. Open a multi-file diff viewer with at least one file that has a large number of changes (more than view port)
2. Scroll up slowly; as the file with a lot of changes comes into view, the scroll position is wrong
See the following recording
* I'm on `diffElementViewModel.ts` at line 35 and try to scroll up very slowly,
* however there's a big jump in the scroll position to a different file and completely different position.

| bug,multi-diff-editor | low | Critical |
2,469,628,744 | go | runtime: use `tophash == emptyRest` to decrease search times in `mapaccess1_faststr` and `mapaccess2_faststr` | ### Proposal Details
In the functions `mapaccess1_faststr` and `mapaccess2_faststr`, I think we can add the following code in the `dohash` block to cut short the search through the bucket and its overflow buckets.
```
// emptyRest means no slot at a higher index in this bucket or in its
// overflow buckets is in use, so the key cannot be present; stop early.
if b.tophash[i] == emptyRest {
    return unsafe.Pointer(&zeroVal[0])
}
```
Position: https://github.com/golang/go/blob/527610763b882b56c52d77acf1b75c2f4233c973/src/runtime/map_faststr_noswiss.go#L98
I would be happy to make this change if you think it is a good idea. | Performance,NeedsDecision,compiler/runtime | low | Minor |
2,469,638,859 | material-ui | [colorManipulator] lighten/darken a CSS var() color | ### Summary
Hi, I'm working on my own design system implementation with MUI v6 (base + system). Previously, the utilities in colorManipulator let me easily lighten/darken a color defined in the palette. With CSS variables this is now difficult, and I have to augment the palette a lot. For `alpha` the solution is the pure CSS `rgba()` function; should I go the same way for `lighten`/`darken`? Or could I use `getComputedStyle` to resolve the variable and then run the same functions as before?
### Examples
```diff
- color: darken(theme.palette.primary.main),
+ color: darken(theme.vars.palette.primary.main),
```
### Motivation
_No response_
**Search keywords**: colorManipulator lighten darken var CSS color | new feature,package: system,customization: theme | low | Major |
2,469,640,905 | godot | Scenes with 3D MultiMeshes print errors on load | ### Tested versions
4.3
### System information
Windows 11
### Issue description
https://github.com/godotengine/godot/blob/ee363af0ed65f6ca9a906f6065d0bd9c19dcf9e4/scene/resources/multimesh.cpp#L309 prints an error if the `instance_count` is set before `set_transform_format`. This will be true for any non-empty 3D `MultiMesh` stored in a scene file.
### Steps to reproduce
Add a 3D `MultiMesh` with some instances to a scene.
Save scene.
Reopen scene.
`<scene\resources\multimesh.cpp(309): MultiMesh::set_transform_format> Condition "instance_count > 0" is true.`
### Minimal reproduction project (MRP)
- | bug,needs testing,topic:3d | low | Critical |
2,469,652,612 | godot | Switching to DX12 renderer creates bugs in Godot 4.3 editor UI rendering | ### Tested versions
4.3 Stable
### System information
Windows 10 22H2 + NVIDIA RTX 3060 Ti
### Issue description
I opened an old 4.2 project in 4.3, found the D3D12 rendering option, enabled it, and restarted. When opening any menu, the editor first renders the whole window's contents onto the menu, and only on a refresh (for example, by moving around between several menu items) does the menu render the correct contents. The issue only exists if DX12 is enabled as the Windows rendering driver.
https://github.com/user-attachments/assets/9103d6ff-925e-4ab9-a4bd-9babc47d1b8a
### Steps to reproduce
1. Open project from 4.2 in 4.3
2. Open project settings
3. Enable DX12 as rendering API for Windows
4. Apply and restart the editor
5. Try to open any menu
or
1. Create new project in 4.3
2. Switch to D3D12, apply and restart
3. Use this project for some time (took me ~30 min)
4. Try to open menu items
### Minimal reproduction project (MRP)
The testing scene is too big to upload here | bug,platform:windows,topic:rendering,topic:gui | low | Critical |
2,469,664,037 | flutter | Awaited GoRouter "push" does not resolve when pressing browser's back button | ### Steps to reproduce
Using the sample code below:
1. Press Hello World! to navigate to SettingsPage.
2. Press browser's back button.
### Expected results
After pressing the browser's back button, `await context.push('/settings', ...);` should resolve.
### Actual results
After pressing the browser's back button, `await context.push('/settings', ...);` is NOT resolved.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
void main() {
GoRouter.optionURLReflectsImperativeAPIs = true;
runApp(const MainApp());
}
class MainApp extends StatelessWidget {
const MainApp({
super.key,
});
@override
Widget build(BuildContext context) {
return MaterialApp.router(
routerConfig: GoRouter(
initialLocation: '/',
routes: <RouteBase>[
GoRoute(
path: '/',
builder: (_, __) => const HomePage(),
routes: [
// This setup does not work either.
// GoRoute(
// path: 'settings',
// builder: (_, state) => SettingsPage(
// extra: state.extra as String?,
// ),
// ),
],
),
GoRoute(
path: '/settings',
builder: (_, state) => SettingsPage(
extra: state.extra as String?,
),
),
],
),
);
}
}
//--------------------------------------------------------------------------------------------------
class HomePage extends StatelessWidget {
const HomePage({
super.key,
});
Future<void> _onGoToSettings(BuildContext context) async {
print('>>>>> GO TO SETTINGS');
await context.push(
'/settings',
extra: 'from Home',
);
print('>>>>> BACK TO HOME');
}
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: TextButton(
onPressed: () => _onGoToSettings(context),
child: const Text('Hello World!'),
),
),
);
}
}
//--------------------------------------------------------------------------------------------------
class SettingsPage extends StatelessWidget {
const SettingsPage({
this.extra,
super.key,
});
final String? extra;
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(),
body: Center(
child: Text('Hello $extra'),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/9c56abe8-72b6-479c-962d-30a0b3c86009
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[!] Flutter (Channel [user-branch], 3.22.3, on macOS 14.6.1 23G93 darwin-arm64, locale en-PH)
! Flutter version 3.22.3 on channel [user-branch] at
/Users/rickimaru/Documents/GitHub/flutter
Currently on an unknown channel. Run `flutter channel` to switch to an official channel.
If that doesn't fix the issue, reinstall Flutter by following instructions at
https://flutter.dev/docs/get-started/install.
! Upstream repository unknown source is not a standard remote.
Set environment variable "FLUTTER_GIT_URL" to unknown source to dismiss this error.
• Framework revision b0850beeb2 (4 weeks ago), 2024-07-16 21:43:41 -0700
• Engine revision 235db911ba
• Dart version 3.4.4
• DevTools version 2.34.3
• If those were intentional, you can disregard the above warnings; however it is
recommended to use "git" directly to perform update checks and upgrades.
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/rickimaru/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.0
[✓] Chrome - develop for the web
• CHROME_EXECUTABLE = /Applications/Brave Browser.app/Contents/MacOS/Brave Browser
[✓] Android Studio (version 2023.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
[✓] VS Code (version 1.92.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.94.0
[✓] Connected device (3 available)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.6.1
23G93 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.6.1
23G93 darwin-arm64
• Chrome (web) • chrome • web-javascript • Brave Browser
127.1.68.141
! Error: Browsing on the local area network for Rick Krystianne’s iPhone. Ensure the
device is unlocked and attached with a cable or associated with the same local area
network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| platform-web,package,has reproducible steps,P2,p: go_router,team-go_router,triaged-go_router,found in release: 3.24 | low | Critical |
2,469,700,988 | TypeScript | Number.prototype.toFixed, Number.prototype.toExponential, Number.prototype.toPrecision comments error | ### 🔎 Search Terms
is:issue Number toFixed
### 🕗 Version & Regression Information
- This changed between versions ______ and _______
- This changed in commit or PR _______
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about _________
- I was unable to test this on prior versions because _______
### ⏯ Playground Link
_No response_
### 💻 Code
```ts
// Your code here
```
### 🙁 Actual behavior
https://github.com/microsoft/TypeScript/blob/52395892e0c4ee8a22b4fa6190bad46d81e66651/src/lib/es5.d.ts#L550
https://github.com/microsoft/TypeScript/blob/52395892e0c4ee8a22b4fa6190bad46d81e66651/src/lib/es5.d.ts#L556
https://github.com/microsoft/TypeScript/blob/52395892e0c4ee8a22b4fa6190bad46d81e66651/src/lib/es5.d.ts#L562
### 🙂 Expected behavior
The correct range is 0 to 100 for `toFixed` and `toExponential`, and 1 to 100 for `toPrecision`. ES2018 widened these limits, so, for example, `(1.5).toFixed(100)` is valid at runtime, while the doc comments above still state the old, narrower ranges.
### Additional information about the issue
_No response_ | Bug,Help Wanted | low | Critical |
2,469,716,777 | vscode | When updating, download new version before deleting old | Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.92.2
- OS Version: WSL2
Steps to Reproduce:
1. With automatic updating on, get into a situation in which there is a new version of vscode ready
2. Shut down all existing vscode windows
3. Turn off your internet (hop on an airplane, go to a cabin in the woods, whatever)
4. Type `code` from the shell
What happens is that vscode deletes itself, tries to download the new version, and fails. Thus you're stuck without a working version of vscode.
| bug,install-update | low | Critical |
2,469,745,485 | rust | `rust-docs`/`rustup doc` extremely frequently links to `doc.rust-lang.org` instead of local files, makes offline browsing experience very poor | Prefacing this by saying I don't know how to properly triage this. It's likely that I haven't found all of the different cases where this happens.
Problem: `rustup doc` is severely broken when viewed without an internet connection, due to tons of links incorrectly linking to `doc.rust-lang.org` instead of the locally installed documentation files.
All of this was found on Rust 1.80.1 on the x86_64-unknown-linux-gnu toolchain, but I'd expect other targets to fare similarly.
I will mainly use examples I found browsing `rustup doc --std`, but this applies just as much to `core`, `alloc`, etc. It's possible it also applies to some of the mdbooks, but I'd consider that a separate problem.
The issue has multiple parts:
1. `source` links extremely often do not link to the local files.
Example: the `source` link of `std::alloc` (module) correctly links to the local files. The `source` link of `std::alloc::alloc` (function) links to `doc.rust-lang.org`.
Rudimentary grepping for `doc.rust-lang.org` links in the HTML files of `/std/` seems to suggest that this happens in the *vast* majority of cases (so many I had to start filtering out `/src/` links when grepping for other cases). It might be related to things like re-exports or other item-path/visibility related things. (see below)
2. Some item links such as re-exports link to `doc.rust-lang.org` instead of local files.
Example: The re-export of `core::arch::*` in `std::arch` links to `doc.rust-lang.org` instead of the local files for `core`.
3. Some relative links in doc comments link to `doc.rust-lang.org` instead of local files.
Example: In the doc comments for `std::alloc::Global`, the link to `Allocator` (written in-source as ``[`Allocator`]``) correctly links to the local file for the allocator trait, but the link to `free functions in alloc` (written in-source as ``[free functions in `alloc`](self#functions)``) links to `doc.rust-lang.org`.
4. Some documentation *explicitly* links to `doc.rust-lang.org` instead of local files for things like links to the nomicon or the reference.
Example: Documentation to `std::ffi::c_void` explicitly links to `https://doc.rust-lang.org/nomicon/ffi.html#representing-opaque-structs`. This is of course more understandable, since that's *not* a link generated by rustdoc, but arguably these links should also instead link to the local copy of the nomicon, though finding and fixing these links will probably require some manual effort.
Other than the *countless* number of `source` links that go to `doc.rust-lang.org` instead of the local `/src/` files, here's a (deduplicated) list of all the non-local links in the HTML documentation for `std` only:
<details>
<summary>
<code>$ rg -oNI 'doc.rust-lang.org/.*?\.html' std/**/*.html | rg -v '/src/' | sort -u</code>
</summary>
```
doc.rust-lang.org/1.80.1/alloc/alloc/index.html
doc.rust-lang.org/1.80.1/alloc/index.html
doc.rust-lang.org/1.80.1/core/arch/index.html
doc.rust-lang.org/1.80.1/core/core_arch/x86/cpuid/struct.CpuidResult.html
doc.rust-lang.org/1.80.1/core/core_arch/x86/struct.__m128bh.html
doc.rust-lang.org/1.80.1/core/core_arch/x86/struct.__m128d.html
doc.rust-lang.org/1.80.1/core/core_arch/x86/struct.__m128.html
doc.rust-lang.org/1.80.1/core/core_arch/x86/struct.__m128i.html
doc.rust-lang.org/1.80.1/core/core_arch/x86/struct.__m256bh.html
doc.rust-lang.org/1.80.1/core/core_arch/x86/struct.__m256d.html
doc.rust-lang.org/1.80.1/core/core_arch/x86/struct.__m256.html
doc.rust-lang.org/1.80.1/core/core_arch/x86/struct.__m256i.html
doc.rust-lang.org/1.80.1/core/core_arch/x86/struct.__m512bh.html
doc.rust-lang.org/1.80.1/core/core_arch/x86/struct.__m512d.html
doc.rust-lang.org/1.80.1/core/core_arch/x86/struct.__m512.html
doc.rust-lang.org/1.80.1/core/core_arch/x86/struct.__m512i.html
doc.rust-lang.org/1.80.1/core/error/struct.Source.html
doc.rust-lang.org/1.80.1/core/ffi/c_str/struct.Bytes.html
doc.rust-lang.org/1.80.1/core/ffi/index.html
doc.rust-lang.org/1.80.1/core/macro.assert_unsafe_precondition.html
doc.rust-lang.org/1.80.1/core/macro.panic.html
doc.rust-lang.org/1.80.1/core/num/index.html
doc.rust-lang.org/1.80.1/core/prelude/rust_2021/index.html
doc.rust-lang.org/1.80.1/core/prelude/rust_2024/index.html
doc.rust-lang.org/1.80.1/core/ptr/metadata/traitalias.Thin.html
doc.rust-lang.org/1.80.1/core/slice/sort/struct.TimSortRun.html
doc.rust-lang.org/1.80.1/core/slice/struct.GetManyMutError.html
doc.rust-lang.org/1.80.1/core/slice/trait.SlicePattern.html
doc.rust-lang.org/1.80.1/core/str/index.html
doc.rust-lang.org/1.80.1/libc/unix/type.gid_t.html
doc.rust-lang.org/1.80.1/libc/unix/type.pid_t.html
doc.rust-lang.org/1.80.1/libc/unix/type.uid_t.html
doc.rust-lang.org/1.80.1/reference/items/traits.html
doc.rust-lang.org/book/ch07-02-defining-modules-to-control-scope-and-privacy.html
doc.rust-lang.org/book/ch09-02-recoverable-errors-with-result.html
doc.rust-lang.org/book/ch19-03-advanced-traits.html
doc.rust-lang.org/cargo/reference/build-scripts.html
doc.rust-lang.org/nightly/edition-guide/rust-2024/index.html
doc.rust-lang.org/nightly/nightly-rustc/rustc_middle/mir/enum.MirPhase.html
doc.rust-lang.org/nightly/rust-by-example/compatibility/raw_identifiers.html
doc.rust-lang.org/nomicon/atomics.html
doc.rust-lang.org/nomicon/exotic-sizes.html
doc.rust-lang.org/nomicon/ffi.html
doc.rust-lang.org/nomicon/other-reprs.html
doc.rust-lang.org/nomicon/panic-handler.html
doc.rust-lang.org/reference/attributes/diagnostics.html
doc.rust-lang.org/reference/behavior-considered-undefined.html
doc.rust-lang.org/reference/destructors.html
doc.rust-lang.org/reference/identifiers.html
doc.rust-lang.org/reference/items/modules.html
doc.rust-lang.org/reference/macros-by-example.html
doc.rust-lang.org/reference/names/preludes.html
doc.rust-lang.org/reference/runtime.html
doc.rust-lang.org/reference/subtyping.html
doc.rust-lang.org/reference/type-coercions.html
doc.rust-lang.org/reference/type-layout.html
doc.rust-lang.org/rust-by-example/mod/split.html
doc.rust-lang.org/rustc/platform-support.html
```
</details>
The number of unique, non-source links to `doc.rust-lang.org` I found across all 5 standard library crates (`std`, `core`, `alloc`, `proc_macro`, `test`) is somewhere on the order of over 300. Many of them are duplicated across many different parts of the documentation, for example due to item links in trait implementations. | T-rustdoc,A-docs,C-bug,T-libs | low | Critical |
2,469,755,693 | vscode | Persistent `rg.exe` blocking Visual Studio Code Updates | For a long time _(dating back to Windows 10 and early Visual Studio Code releases)_, I've consistently encountered an issue where every Visual Studio Code update fails. I'm currently using Windows 11 23H2 and the latest version (1.92) of Visual Studio Code. After investigating, I identified that the problem stems from `rg.exe`, located at:
```powershell
$HOME\AppData\Local\Programs\Microsoft VS Code\resources\app\node_modules.asar.unpacked\@vscode\ripgrep\bin\rg.exe
```
I've tried shutting it down via the Task Manager, using PowerToys' File Locksmith to unlock it, and even SysInternals Process Explorer, but nothing can terminate the process or release its lock on the file. After restarting, I have only a few seconds to delete it, even without opening VS Code. It seems something triggers it to run and it then remains active indefinitely. However, if I'm quick enough to delete it after a restart, the VS Code update proceeds without issues.
Any advice on resolving this would be greatly appreciated.



| bug,install-update,search | low | Minor |
2,469,762,369 | ui | [bug]: react-remove-scroll not imported correctly from scrolling elements | ### Describe the bug
When trying to use any element with a scroll bar (dropdown, sheet, drawer, etc.) the same error is shown relating to `react-remove-scroll`
### Affected component/components
Dropdown, Sheet, Drawer
### How to reproduce
- Install `npx shadcn-ui@latest add dropdown-menu`
- Copy-paste the first example on the page for [dropdowns](https://ui.shadcn.com/docs/components/dropdown-menu)
- Build the page or run `pnpm dev`
- The error from the "logs" section will be shown
(As the issue happens with the built-in example from the website, codesandbox is omitted)
### Codesandbox/StackBlitz link
_No response_
### Logs
The following error is shown every time an element with a scroll bar is used:
```bash
Error: Cannot find package 'C:\Users\Name\Company\Documents\Code\granite-ui\node_modules\.pnpm\@radix-ui+react-menu@2.1.1_@types+react-dom@18.3.0_@types+react@18.3.3_react-dom@18.3.1_react@18.3.1__react@18.3.1\node_modules\react-remove-scroll\dist\es5\index.js' imported from C:\Users\Name\Company\Documents\Code\granite-ui\node_modules\.pnpm\@radix-ui+react-menu@2.1.1_@types+react-dom@18.3.0_@types+react@18.3.3_react-dom@18.3.1_react@18.3.1__react@18.3.1\node_modules\@radix-ui\react-menu\dist\index.mjs
Did you mean to import "react-remove-scroll/dist/es5/index.js"?
```
### System Info
```bash
OS Name / Microsoft Windows 10 Pro
Version / 10.0.19045 Build 19045
Chrome Version / 127.0.6533.100 (Official Build) (64-bit) (cohort: Stable)
Package.json:
"dependencies": {
"@radix-ui/react-dropdown-menu": "^2.1.1",
...
"next": "14.2.5",
"njwt": "^2.0.1",
"react": "^18.3.1",
"react-dom": "^18.3.1",
"react-remove-scroll": "^2.5.10"
}
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,469,767,202 | flutter | [flutter_markdown] Custom Builder for 'hr' Tag Not Invoked | ### What package does this bug report belong to?
flutter_markdown
### What target platforms are you seeing this bug on?
iOS
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
# Generated by pub
# See https://dart.dev/tools/pub/glossary#lockfile
packages:
args:
dependency: transitive
description:
name: args
sha256: "7cf60b9f0cc88203c5a190b4cd62a99feea42759a7fa695010eb5de1c0b2252a"
url: "https://pub.dev"
source: hosted
version: "2.5.0"
async:
dependency: transitive
description:
name: async
sha256: "947bfcf187f74dbc5e146c9eb9c0f10c9f8b30743e341481c1e2ed3ecc18c20c"
url: "https://pub.dev"
source: hosted
version: "2.11.0"
boolean_selector:
dependency: transitive
description:
name: boolean_selector
sha256: "6cfb5af12253eaf2b368f07bacc5a80d1301a071c73360d746b7f2e32d762c66"
url: "https://pub.dev"
source: hosted
version: "2.1.1"
characters:
dependency: transitive
description:
name: characters
sha256: "04a925763edad70e8443c99234dc3328f442e811f1d8fd1a72f1c8ad0f69a605"
url: "https://pub.dev"
source: hosted
version: "1.3.0"
clock:
dependency: transitive
description:
name: clock
sha256: cb6d7f03e1de671e34607e909a7213e31d7752be4fb66a86d29fe1eb14bfb5cf
url: "https://pub.dev"
source: hosted
version: "1.1.1"
collection:
dependency: transitive
description:
name: collection
sha256: ee67cb0715911d28db6bf4af1026078bd6f0128b07a5f66fb2ed94ec6783c09a
url: "https://pub.dev"
source: hosted
version: "1.18.0"
cupertino_icons:
dependency: "direct main"
description:
name: cupertino_icons
sha256: ba631d1c7f7bef6b729a622b7b752645a2d076dba9976925b8f25725a30e1ee6
url: "https://pub.dev"
source: hosted
version: "1.0.8"
fake_async:
dependency: transitive
description:
name: fake_async
sha256: "511392330127add0b769b75a987850d136345d9227c6b94c96a04cf4a391bf78"
url: "https://pub.dev"
source: hosted
version: "1.3.1"
flutter:
dependency: "direct main"
description: flutter
source: sdk
version: "0.0.0"
flutter_lints:
dependency: "direct dev"
description:
name: flutter_lints
sha256: "9e8c3858111da373efc5aa341de011d9bd23e2c5c5e0c62bccf32438e192d7b1"
url: "https://pub.dev"
source: hosted
version: "3.0.2"
flutter_markdown:
dependency: "direct main"
description:
name: flutter_markdown
sha256: a23c41ee57573e62fc2190a1f36a0480c4d90bde3a8a8d7126e5d5992fb53fb7
url: "https://pub.dev"
source: hosted
version: "0.7.3+1"
flutter_test:
dependency: "direct dev"
description: flutter
source: sdk
version: "0.0.0"
leak_tracker:
dependency: transitive
description:
name: leak_tracker
sha256: "7f0df31977cb2c0b88585095d168e689669a2cc9b97c309665e3386f3e9d341a"
url: "https://pub.dev"
source: hosted
version: "10.0.4"
leak_tracker_flutter_testing:
dependency: transitive
description:
name: leak_tracker_flutter_testing
sha256: "06e98f569d004c1315b991ded39924b21af84cf14cc94791b8aea337d25b57f8"
url: "https://pub.dev"
source: hosted
version: "3.0.3"
leak_tracker_testing:
dependency: transitive
description:
name: leak_tracker_testing
sha256: "6ba465d5d76e67ddf503e1161d1f4a6bc42306f9d66ca1e8f079a47290fb06d3"
url: "https://pub.dev"
source: hosted
version: "3.0.1"
lints:
dependency: transitive
description:
name: lints
sha256: cbf8d4b858bb0134ef3ef87841abdf8d63bfc255c266b7bf6b39daa1085c4290
url: "https://pub.dev"
source: hosted
version: "3.0.0"
markdown:
dependency: "direct main"
description:
name: markdown
sha256: ef2a1298144e3f985cc736b22e0ccdaf188b5b3970648f2d9dc13efd1d9df051
url: "https://pub.dev"
source: hosted
version: "7.2.2"
matcher:
dependency: transitive
description:
name: matcher
sha256: d2323aa2060500f906aa31a895b4030b6da3ebdcc5619d14ce1aada65cd161cb
url: "https://pub.dev"
source: hosted
version: "0.12.16+1"
material_color_utilities:
dependency: transitive
description:
name: material_color_utilities
sha256: "0e0a020085b65b6083975e499759762399b4475f766c21668c4ecca34ea74e5a"
url: "https://pub.dev"
source: hosted
version: "0.8.0"
meta:
dependency: transitive
description:
name: meta
sha256: "7687075e408b093f36e6bbf6c91878cc0d4cd10f409506f7bc996f68220b9136"
url: "https://pub.dev"
source: hosted
version: "1.12.0"
path:
dependency: transitive
description:
name: path
sha256: "087ce49c3f0dc39180befefc60fdb4acd8f8620e5682fe2476afd0b3688bb4af"
url: "https://pub.dev"
source: hosted
version: "1.9.0"
sky_engine:
dependency: transitive
description: flutter
source: sdk
version: "0.0.99"
source_span:
dependency: transitive
description:
name: source_span
sha256: "53e943d4206a5e30df338fd4c6e7a077e02254531b138a15aec3bd143c1a8b3c"
url: "https://pub.dev"
source: hosted
version: "1.10.0"
stack_trace:
dependency: transitive
description:
name: stack_trace
sha256: "73713990125a6d93122541237550ee3352a2d84baad52d375a4cad2eb9b7ce0b"
url: "https://pub.dev"
source: hosted
version: "1.11.1"
stream_channel:
dependency: transitive
description:
name: stream_channel
sha256: ba2aa5d8cc609d96bbb2899c28934f9e1af5cddbd60a827822ea467161eb54e7
url: "https://pub.dev"
source: hosted
version: "2.1.2"
string_scanner:
dependency: transitive
description:
name: string_scanner
sha256: "556692adab6cfa87322a115640c11f13cb77b3f076ddcc5d6ae3c20242bedcde"
url: "https://pub.dev"
source: hosted
version: "1.2.0"
term_glyph:
dependency: transitive
description:
name: term_glyph
sha256: a29248a84fbb7c79282b40b8c72a1209db169a2e0542bce341da992fe1bc7e84
url: "https://pub.dev"
source: hosted
version: "1.2.1"
test_api:
dependency: transitive
description:
name: test_api
sha256: "9955ae474176f7ac8ee4e989dadfb411a58c30415bcfb648fa04b2b8a03afa7f"
url: "https://pub.dev"
source: hosted
version: "0.7.0"
vector_math:
dependency: transitive
description:
name: vector_math
sha256: "80b3257d1492ce4d091729e3a67a60407d227c27241d6927be0130c98e741803"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
vm_service:
dependency: transitive
description:
name: vm_service
sha256: "3923c89304b715fb1eb6423f017651664a03bf5f4b29983627c4da791f74a4ec"
url: "https://pub.dev"
source: hosted
version: "14.2.1"
sdks:
dart: ">=3.4.1 <4.0.0"
flutter: ">=3.19.0"
```
</details>
### Steps to reproduce
## 1. Copy-paste this file and add SwMarkDownWidget to your tree.
```dart
import 'package:flutter/material.dart';
import 'package:flutter_markdown/flutter_markdown.dart';
import 'package:markdown/markdown.dart' as md;
const String md_content = '''
# Example of horizontal rules
---
This is a paragraph.
***
This is another paragraph.
___
''';
class SwMarkDownWidget extends StatelessWidget {
final String mdContent;
const SwMarkDownWidget({
super.key,
required this.mdContent,
});
@override
Widget build(BuildContext context) {
return MarkdownBody(
data: mdContent,
builders: {
'hr': HorizontalRuleBuilder(),
},
);
}
}
class HorizontalRuleBuilder extends MarkdownElementBuilder {
@override
bool isBlockElement() => true;
@override
Widget visitText(md.Text text, TextStyle? preferredStyle) {
debugPrint("visitText: ${text.text}"); // Debug statement
return super.visitText(text, preferredStyle) ?? Container();
}
@override
Widget? visitElementAfter(md.Element element, TextStyle? preferredStyle) {
debugPrint("visitElementAfter: ${element.tag}");
return Container(
color: Colors.red,
height: 2,
margin: const EdgeInsets.symmetric(
vertical: 10,
), // Add some spacing around the hr
);
}
}
```
## 2. Add SwMarkDownWidget to your tree.
```dart
SwMarkDownWidget(mdContent: md_content)
```
### Expected results
We expect the horizontal rule to be rendered as what the custom builder builds.

### Actual results
The Markdown widget doesn't appear to use the builder, as the horizontal rule doesn't change from the default behavior.

### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:flutter_markdown/flutter_markdown.dart';
import 'package:markdown/markdown.dart' as md;
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return const MaterialApp(
title: 'Flutter Demo',
home: MyHomePage(),
);
}
}
class MyHomePage extends StatelessWidget {
const MyHomePage({super.key});
@override
Widget build(BuildContext context) {
return const SwMarkDownWidget(mdContent : md_content);
}
}
const String md_content = '''
# Example of horizontal rules
---
This is a paragraph.
***
This is another paragraph.
___
''';
class SwMarkDownWidget extends StatelessWidget {
final String mdContent;
const SwMarkDownWidget({
super.key,
required this.mdContent,
});
@override
Widget build(BuildContext context) {
return MarkdownBody(
data: mdContent,
builders: {
'hr': HorizontalRuleBuilder(),
},
);
}
}
class HorizontalRuleBuilder extends MarkdownElementBuilder {
@override
bool isBlockElement() => true;
@override
Widget visitText(md.Text text, TextStyle? preferredStyle) {
debugPrint("visitText: ${text.text}"); // Debug statement
return super.visitText(text, preferredStyle) ?? Container();
}
@override
Widget? visitElementAfter(md.Element element, TextStyle? preferredStyle) {
debugPrint("visitElementAfter: ${element.tag}");
return Container(
color: Colors.red,
height: 2,
margin: const EdgeInsets.symmetric(
vertical: 10,
), // Add some spacing around the hr
);
}
}
```
</details>
### Screenshots or Videos
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
I know flutter doctor reports an issue; that is because I had to downgrade the Flutter version manually. But that is not the origin of the problem.
<details open><summary>Doctor output</summary>
```console
flutter doctor -v
[!] Flutter (Channel [user-branch], 3.22.1, on macOS 14.2.1 23C71 darwin-arm64, locale en-US)
! Flutter version 3.22.1 on channel [user-branch] at /Users/romainpattyn/Development/flutter
Currently on an unknown channel. Run `flutter channel` to switch to an official channel.
If that doesn't fix the issue, reinstall Flutter by following instructions at https://flutter.dev/docs/get-started/install.
! Upstream repository unknown source is not a standard remote.
Set environment variable "FLUTTER_GIT_URL" to unknown source to dismiss this error.
• Framework revision a14f74ff3a (3 months ago), 2024-05-22 11:08:21 -0500
• Engine revision 55eae6864b
• Dart version 3.4.1
• DevTools version 2.34.3
• If those were intentional, you can disregard the above warnings; however it is recommended to use "git" directly to perform update
checks and upgrades.
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/romainpattyn/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2023.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
[✓] VS Code (version 1.92.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.94.0
[✓] Connected device (5 available)
• iPhone de Romain (mobile) • 00008110-00014CEC1E51801E • ios • iOS 17.5.1 21F90
• iPhone 15 (mobile) • 1C459C9B-830D-45C7-B148-CF86317EDCBE • ios •
com.apple.CoreSimulator.SimRuntime.iOS-17-5 (simulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.2.1 23C71 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.2.1 23C71 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 127.0.6533.120
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| package,team-ecosystem,has reproducible steps,P2,p: flutter_markdown,triaged-ecosystem,found in release: 3.24 | low | Critical |
2,469,777,735 | pytorch | [torch.jit] RuntimeError: false INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/node_hashing.cpp":148, please report a bug to PyTorch. | ### 🐛 Describe the bug
I encountered an internal error in PyTorch when attempting to use `torch.jit.script` and `torch.jit.trace` **with the same function**. Below is a simplified version of the code that reproduces the issue:
```python
import torch
def create_tensors():
a = torch.tensor([[1 + 2j, 2 + 3j], [3 + 4j, 4 + 5j]], dtype=torch.cdouble)
b = torch.tensor([[1 + 2j, 2 + 3j], [3 + 4j, 4 + 5j]], dtype=torch.cfloat)
return a, b
create_tensors = torch.jit.script(create_tensors)
traced_create_tensors = torch.jit.trace(create_tensors, ())
```
The error messages are as follows:
```
Traceback (most recent call last):
File "/data/test1.py", line 12, in <module>
traced_create_tensors = torch.jit.trace(create_tensors, ())
File "/data/anacondas/envs/torch/lib/python3.10/site-packages/torch/jit/_trace.py", line 1002, in trace
traced_func = _trace_impl(
File "/data/anacondas/envs/torch/lib/python3.10/site-packages/torch/jit/_trace.py", line 766, in _trace_impl
traced = torch._C._create_function_from_trace(
RuntimeError: false INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/node_hashing.cpp":148, please report a bug to PyTorch.
```
I confirm that the error is reproducible with the nightly build `2.5.0.dev20240815+cpu` (please find the Colab [here](https://colab.research.google.com/drive/1G2_fnfMl2Q9W_vhgScIQjLCkzZU6qJn1?usp=sharing)). Also, the following variants do not trigger the internal assert (per the comments below, the first raises an ordinary RuntimeError instead, and the second succeeds):
```python
# RuntimeError: Add new condition, expected Float, Complex, Int, or Bool but gotcomplex
def create_tensors():
a = torch.tensor([[1 + 2j, 2 + 3j], [3 + 4j, 4 + 5j]], dtype=torch.cdouble)
return a
# success
def create_tensors():
a = torch.tensor([[1, 2], [3, 4]], dtype=torch.half)
b = torch.tensor([[1, 2], [3, 4]], dtype=torch.double)
return a, b
```
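Since the crash only appears when `torch.jit.trace` is applied on top of an already-scripted function, a possible interim workaround (a minimal sketch, assuming scripting alone covers the use case) is to drop the extra tracing step:
```python
import torch

def create_tensors():
    a = torch.tensor([[1 + 2j, 2 + 3j], [3 + 4j, 4 + 5j]], dtype=torch.cdouble)
    b = torch.tensor([[1 + 2j, 2 + 3j], [3 + 4j, 4 + 5j]], dtype=torch.cfloat)
    return a, b

# Scripting by itself succeeds; per the traceback above, only re-tracing
# the already-scripted function hits the internal assert.
scripted = torch.jit.script(create_tensors)
a, b = scripted()
print(a.dtype, b.dtype)  # torch.complex128 torch.complex64
```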
### Versions
PyTorch version: 2.5.0.dev20240815+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-107-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz
Stepping: 6
CPU MHz: 3500.000
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1 MiB
L2 cache: 40 MiB
L3 cache: 48 MiB
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.2
[pip3] onnxruntime==1.19.0
[pip3] onnxscript==0.1.0.dev20240816
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.5.0.dev20240815+cu121
[pip3] torch-xla==2.4.0
[pip3] torch_xla_cuda_plugin==2.4.0
[pip3] torchaudio==2.4.0.dev20240815+cu121
[pip3] torchvision==0.20.0.dev20240815+cu121
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi
[conda] torch 2.5.0.dev20240815+cu121 pypi_0 pypi
[conda] torch-xla 2.4.0 pypi_0 pypi
[conda] torch-xla-cuda-plugin 2.4.0 pypi_0 pypi
[conda] torchaudio 2.4.0.dev20240815+cu121 pypi_0 pypi
[conda] torchvision 0.20.0.dev20240815+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit | low | Critical |
2,469,779,565 | godot | Vulkan (MoltenVK) rendering backends crashing on older Macbook (Intel HD 530, OpenCore) | ### Tested versions
4.3 stable
### System information
14.4.1 2.9 GHz Quad-Core Intel Core i7 Radeon Pro 460 4 GB Intel HD Graphics 530 1536 MB
### Issue description
4.3 stable keeps crashing when creating or opening a project.
### Steps to reproduce
download 4.3 stable
open it
create a new project
it always crashes
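As a possible diagnostic step (a sketch; the app path below is an assumption for a standard macOS install), launching the editor with the Compatibility (OpenGL) driver can confirm whether the crash is specific to the Vulkan/MoltenVK backend:
```console
# Force the OpenGL driver instead of Vulkan/MoltenVK and print verbose output:
/Applications/Godot.app/Contents/MacOS/Godot --rendering-driver opengl3 --verbose
```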
### Minimal reproduction project (MRP)
[Archive.zip](https://github.com/user-attachments/files/16634861/Archive.zip)
| bug,platform:macos,topic:rendering,crash,regression | low | Critical |
2,469,780,351 | godot | TileMapLayer-Custom layer(array)- Tileset UI does not refresh | ### Tested versions
- Reproducible in v4.3.stable.official [77dcf97d8]
### System information
Windows 10 - v4.3.stable.official [77dcf97d8] - Compatibility/Forward+
### Issue description
TileMapLayer - custom data layer.
When adding or modifying a custom data layer on a TileMapLayer node, everything below the "Painting" element (Custom Data | Array (size x)) in the TileSet paint menu does not refresh automatically when you change values.
You can manually refresh the UI by clicking on any of the elements.
None of the UI elements update when "array size" or "array type" changes.
### Steps to reproduce
add a TileMapLayer node
add a custom data layer of type Array
go to TileSet and attempt to set the values of the custom layer
### Minimal reproduction project (MRP)
[test_project.zip](https://github.com/user-attachments/files/16634863/test_project.zip)
| bug,topic:editor,topic:2d | low | Minor |