id | repo | title | body | labels | priority | severity
|---|---|---|---|---|---|---|
2,662,426,167 | PowerToys | Shortcut key combinations are missing in the Power Toys Shortcut Guide | ### Microsoft PowerToys version
0.85.1
### Installation method
GitHub
### Running as admin
None
### Area(s) with issue?
Shortcut Guide
### Steps to reproduce
The Power Toys "Shortcut Guide" is missing important keystrokes, such as ⊞+p.
### ✔️ Expected Behavior
I was expecting an exhaustive list of keyboard shortcuts with the ⊞ key.
### ❌ Actual Behavior
Some shortcuts are missing
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,662,427,942 | kubernetes | `volumes[*].hostPath.type: Socket` doesn’t prevent the kubelet from creating a directory instead of waiting for a UNIX socket to be created. | ### What happened?
[`hostPath` volumes have different types](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath-volume-types).
By default, no check is done. Whatever is found is bind-mounted and, if nothing exists, a directory is created on the host.
But, if the type is `Socket`, the kubelet is supposed to check that a UNIX socket exists at the expected location before starting the pod.
Yet, if the container restarts after the UNIX socket has disappeared, then a directory is created instead.
This leads to an unrecoverable issue because the directory will prevent the server from eventually re-creating the UNIX socket.
### What did you expect to happen?
If a container mounting a `Socket` `hostPath` restarts after the socket has been deleted, it should hang until the socket comes back.
### How can we reproduce it (as minimally and precisely as possible)?
Create a pod that mounts a socket:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
  labels:
    app: foo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
      - name: sleeper
        image: busybox
        command:
        - sleep
        - infinity
        volumeMounts:
        - name: socket
          mountPath: /tmp/foo.sock
      volumes:
      - name: socket
        hostPath:
          path: /tmp/foo.sock
          type: Socket
```
The pod initially remains pending because there’s no UNIX socket at `/tmp/foo.sock`:
```console
$ kubectl get pods
NAME                  READY   STATUS              RESTARTS   AGE
foo-976c59756-gw2zd   0/1     ContainerCreating   0          97s
$ kubectl describe pod/foo-976c59756-gw2zd
Name:             foo-976c59756-gw2zd
Namespace:        foo
Priority:         0
Service Account:  default
Node:             gke-gke-lenaic-ubuntu-containerd-38134859-476h/10.0.96.131
Start Time:       Fri, 15 Nov 2024 16:33:09 +0100
Labels:           app=foo
                  pod-template-hash=976c59756
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/foo-976c59756
Containers:
  sleeper:
    Container ID:
    Image:          busybox
    Image ID:
    Port:           <none>
    Host Port:      <none>
    Command:
      sleep
      infinity
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/foo.sock from socket (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-52bxl (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  socket:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp/foo.sock
    HostPathType:  Socket
  kube-api-access-52bxl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              cloud.google.com/gke-nodepool=ubuntu-containerd
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    2m20s                default-scheduler  Successfully assigned foo/foo-976c59756-gw2zd to gke-gke-lenaic-ubuntu-containerd-38134859-476h
  Warning  FailedMount  12s (x9 over 2m20s)  kubelet            MountVolume.SetUp failed for volume "socket" : hostPath type check failed: /tmp/foo.sock is not a socket file
```
This is expected.
Then, create the UNIX socket on the node:
```console
# timeout 1 nc -lU /tmp/foo.sock
# ls -l /tmp/foo.sock
srwxr-xr-x 1 root root 0 Nov 15 15:38 /tmp/foo.sock
```
The pod transitions to `Running` state and the containers are started:
```console
$ kubectl describe pod/foo-976c59756-gw2zd
Name:             foo-976c59756-gw2zd
Namespace:        foo
Priority:         0
Service Account:  default
Node:             gke-gke-lenaic-ubuntu-containerd-38134859-476h/10.0.96.131
Start Time:       Fri, 15 Nov 2024 16:33:09 +0100
Labels:           app=foo
                  pod-template-hash=976c59756
Annotations:      <none>
Status:           Running
IP:               10.4.1.6
IPs:
  IP:  10.4.1.6
Controlled By:  ReplicaSet/foo-976c59756
Containers:
  sleeper:
    Container ID:   containerd://3a6b9dab9a721eba41c8bbb8d5783ec952e1c37e8bafef634d0ece8fa2b8b2a1
    Image:          busybox
    Image ID:       docker.io/library/busybox@sha256:768e5c6f5cb6db0794eec98dc7a967f40631746c32232b78a3105fb946f3ab83
    Port:           <none>
    Host Port:      <none>
    Command:
      sleep
      infinity
    State:          Running
      Started:      Fri, 15 Nov 2024 16:39:23 +0100
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/foo.sock from socket (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-52bxl (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       True
  ContainersReady             True
  PodScheduled                True
Volumes:
  socket:
    Type:          HostPath (bare host directory volume)
    Path:          /tmp/foo.sock
    HostPathType:  Socket
  kube-api-access-52bxl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              cloud.google.com/gke-nodepool=ubuntu-containerd
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                     From               Message
  ----     ------       ----                    ----               -------
  Normal   Scheduled    6m46s                   default-scheduler  Successfully assigned foo/foo-976c59756-gw2zd to gke-gke-lenaic-ubuntu-containerd-38134859-476h
  Warning  FailedMount  2m36s (x10 over 6m46s)  kubelet            MountVolume.SetUp failed for volume "socket" : hostPath type check failed: /tmp/foo.sock is not a socket file
  Normal   Pulling      33s                     kubelet            Pulling image "busybox"
  Normal   Pulled       32s                     kubelet            Successfully pulled image "busybox" in 1.273s (1.273s including waiting). Image size: 2166802 bytes.
  Normal   Created      32s                     kubelet            Created container sleeper
  Normal   Started      32s                     kubelet            Started container sleeper
```
This is still fine.
Then, delete the UNIX socket from the node:
```console
# rm /tmp/foo.sock
# ls -l /tmp/foo.sock
ls: cannot access '/tmp/foo.sock': No such file or directory
```
And crash the container that mounts the socket to force its restart:
```console
# pkill -KILL -f "sleep infinity"
```
Then, once the container has restarted, `/tmp/foo.sock` becomes a directory:
```console
# ls -l /tmp/foo.sock
total 0
# ls -ld /tmp/foo.sock
drwxr-xr-x 2 root root 4096 Nov 15 15:46 /tmp/foo.sock
```
### Anything else we need to know?
In real life, this bug affects situations where we have two pods communicating through a UNIX socket shared via a `hostPath` volume:
* A “server” pod has a `hostPath` volume for a directory on the host and creates a UNIX socket inside this directory.
* A “client” pod has a `hostPath` volume for the UNIX socket.
If the container of the “client” pod restarts while the “server” pod is being cleanly redeployed (and the “server” process deletes the UNIX socket on shutdown), then a directory is created in its place, which permanently prevents the “server” pod from re-creating the socket.
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.31.2
Kustomize Version: v5.4.2
Server Version: v1.31.1-gke.1678000
```
</details>
### Cloud provider
<details>
GKE
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.5 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.5 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
$ uname -a
Linux gke-gke-lenaic-ubuntu-containerd-38134859-476h 5.15.0-1067-gke #73-Ubuntu SMP Sat Aug 31 04:29:32 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/storage,needs-triage | low | Critical |
2,662,440,633 | rust | rustdoc: single comma is put on newline if it occurs after item in double ticks | ### Problem
Here is a part of the documentation in my fork of the `colstodian` crate's `lib.rs`:
```rust
//! If you also have alpha (i.e. four values instead of three), then
//! [`Color::srgba_u8`], [`Color::srgba_f32`], and [`Color::linear_srgba`]
//! are the equivalents of the above with an alpha component.
```
And here is how that gets rendered/formatted in the docs:

Note the lonely comma on the second line; it should stay attached to the `Color::srgba_f32` item at the end of the previous line.
I presume this is a CSS issue rather than a rustdoc one.
### Version
```text
cargo 1.84.0-nightly (4a2d8dc63 2024-11-09)
release: 1.84.0-nightly
commit-hash: 4a2d8dc636445b276288543882e076f254b3ae95
commit-date: 2024-11-09
host: x86_64-unknown-linux-gnu
libgit2: 1.8.1 (sys:0.19.0 vendored)
libcurl: 8.9.0-DEV (sys:0.4.74+curl-8.9.0 vendored ssl:OpenSSL/1.1.1w)
ssl: OpenSSL 1.1.1w 11 Sep 2023
os: Ubuntu 24.10.0 (oracular) [64-bit]
``` | T-rustdoc,C-bug,A-rustdoc-ui,T-rustdoc-frontend,E-needs-investigation | low | Minor |
2,662,450,021 | vscode | Improve wording and flow of restart EH blocker | * run a debug session
* update an extension and trigger EH restart
* 😕 be confused about the wording here

How about less technical language, like:
```
Restart Extension Blocked
A debug session is still running and would be terminated otherwise
[OK]
[Restart Anyways]
``` | feature-request,polish,extensions,verification-needed,author-verification-requested | low | Critical |
2,662,571,203 | vscode | variables view is not working as expected | cc @jooyoungseo
Enter a variable in the interactive REPL, like `x=12`. Run the focus variables view command. Note that no `x` is within that table.

| bug,accessibility,notebook-variables | low | Minor |
2,662,576,447 | vscode | terminal quick fixes don't work for screen reader users sometimes | From @jooyoungseo
waiting for the logs

| bug,windows,terminal-quick-fix | low | Major |
2,662,620,149 | stable-diffusion-webui | [Bug]: Issue with Python 3.10.6 and Sharkcodecs launcher64.exe | ### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
It took me about 2 weeks to work out what was causing the issue, and I have now been able to isolate it to the SharkCodecs launcher64.exe. This is a handy all-in-one codec pack that I have used for many years:
https://shark007.net/x64components1.html
When I first tried to install SD, it was failing to run Python. Loads of errors. Couldn't see pip. Couldn't install pip.
Downloading charset_normalizer-3.4.0-cp310-cp310-win_amd64.whl (102 kB)
Downloading idna-3.10-py3-none-any.whl (70 kB)
Downloading MarkupSafe-3.0.2-cp310-cp310-win_amd64.whl (15 kB)
Downloading urllib3-2.2.3-py3-none-any.whl (126 kB)
ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied
Consider using the `--user` option or check the permissions.
C:\Users\Admin>python --version
Python 3.10.6
C:\Users\Admin>python -m ensurepip --default-pip
Looking in links: c:\Users\Admin\AppData\Local\Temp\tmpecnskf41
ERROR: Could not install packages due to an OSError: [WinError 5] Access is denied
Consider using the `--user` option or check the permissions.
Traceback (most recent call last):
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\ensurepip\__main__.py", line 5, in <module>
sys.exit(ensurepip._main())
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\ensurepip\__init__.py", line 287, in _main
return _bootstrap(
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\ensurepip\__init__.py", line 203, in _bootstrap
return _run_pip([*args, *_PACKAGE_NAMES], additional_paths)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\ensurepip\__init__.py", line 104, in _run_pip
return subprocess.run(cmd, check=True).returncode
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 524, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['C:\\Users\\Admin\\AppData\\Local\\Programs\\Python\\Python310\\python.exe', '-W', 'ignore::DeprecationWarning', '-c', '\nimport runpy\nimport sys\nsys.path = [\'C:\\\\Users\\\\Admin\\\\AppData\\\\Local\\\\Temp\\\\tmpecnskf41\\\\setuptools-63.2.0-py3-none-any.whl\', \'C:\\\\Users\\\\Admin\\\\AppData\\\\Local\\\\Temp\\\\tmpecnskf41\\\\pip-22.2.1-py3-none-any.whl\'] + sys.path\nsys.argv[1:] = [\'install\', \'--no-cache-dir\', \'--no-index\', \'--find-links\', \'C:\\\\Users\\\\Admin\\\\AppData\\\\Local\\\\Temp\\\\tmpecnskf41\', \'setuptools\', \'pip\']\nrunpy.run_module("pip", run_name="__main__", alter_sys=True)\n']' returned non-zero exit status 1.
Anywayyyzzzzzz.
New Windows install, and I decided to install SD first. It works... :) No issues - but as soon as I run the codec pack's launcher64.exe, it seems to mess up Python or permissions or something. I can't see what it's doing. I have contacted the Shark Codecs owner for help in case it's doing something, but it's only supposed to add codecs for the MPC-BE player.
If you happen to have SharkCodecs installed first, by running the launcher64.exe at any point, SD will fail to install and Python will throw errors - although Git and Python themselves install fine.
All Windows PATH entries look OK. All environment variables look OK. Permissions look OK.
%USERPROFILE%\AppData\Local\Microsoft\WindowsApps;
C:\Users\Admin\AppData\Local\Programs\Python\Python310\
C:\Users\Admin\AppData\Local\Programs\Python\Python310\Scripts\
C:\Users\Admin>pip list
Package Version
---------- -------
pip 22.2.1
setuptools 63.2.0
[notice] A new release of pip available: 22.2.1 -> 24.3.1
[notice] To update, run: python.exe -m pip install --upgrade pip
C:\Users\Admin>pip --version
pip 22.2.1 from C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\pip (python 3.10)
C:\Users\Admin>python --version
Python 3.10.6
I can't see what it's doing, but maybe I suspect a bug in Python 3.10.6? The install process says 3.10.6 is needed, though.
### Steps to reproduce the problem
1. Windows 10 and 11 vmware clean build.
2. Install Git and Python 3.10.6. Added `set COMMANDLINE_ARGS= --skip-torch-cuda-test` (not expecting it to work as there's no physical GPU, but I can still see the code and the URL http page opens...)
venv "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments: --skip-torch-cuda-test
C:\Users\Admin\Downloads\AI\stable-diffusion-webui\venv\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
Loading weights [cc6cb27103] from C:\Users\Admin\Downloads\AI\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.ckpt
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 4.4s (prepare environment: 0.2s, import torch: 1.9s, import gradio: 0.5s, setup paths: 0.5s, initialize shared: 0.3s, other imports: 0.2s, load scripts: 0.5s, create ui: 0.3s, gradio launch: 0.1s).
Creating model from config: C:\Users\Admin\Downloads\AI\stable-diffusion-webui\configs\v1-inference.yaml
C:\Users\Admin\Downloads\AI\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:797: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Applying attention optimization: InvokeAI... done.
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\modules\initialize.py", line 149, in load_model
shared.sd_model # noqa: B018
File "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\modules\shared_items.py", line 175, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\modules\sd_models.py", line 693, in get_sd_model
load_model()
File "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\modules\sd_models.py", line 868, in load_model
with devices.autocast(), torch.no_grad():
File "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\modules\devices.py", line 228, in autocast
if has_xpu() or has_mps() or cuda_no_autocast():
File "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\modules\devices.py", line 28, in cuda_no_autocast
device_id = get_cuda_device_id()
File "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\modules\devices.py", line 40, in get_cuda_device_id
) or torch.cuda.current_device()
File "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\__init__.py", line 769, in current_device
_lazy_init()
File "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\__init__.py", line 298, in _lazy_init
torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
3. Run SharkCodecs install process and launcher64.exe. Close the Codec window.
4. Retry SD and it errors....
venv "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments: --skip-torch-cuda-test
Traceback (most recent call last):
File "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\launch.py", line 48, in <module>
main()
File "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\launch.py", line 44, in main
start()
File "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\modules\launch_utils.py", line 465, in start
import webui
File "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\webui.py", line 13, in <module>
initialize.imports()
File "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\modules\initialize.py", line 23, in imports
import gradio # noqa: F401
File "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\__init__.py", line 3, in <module>
import gradio.components as components
File "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\components\__init__.py", line 1, in <module>
from gradio.components.annotated_image import AnnotatedImage
File "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\components\annotated_image.py", line 13, in <module>
from gradio.components.base import IOComponent, _Keywords
File "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\components\base.py", line 29, in <module>
from gradio.blocks import Block, BlockContext
File "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 28, in <module>
from gradio import (
File "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\networking.py", line 18, in <module>
from gradio.routes import App
File "C:\Users\Admin\Downloads\AI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 60, in <module>
mimetypes.init()
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\mimetypes.py", line 368, in init
db.read_windows_registry()
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\mimetypes.py", line 253, in read_windows_registry
_mimetypes_read_windows_registry(add_type)
PermissionError: [WinError 5] Access is denied
Press any key to continue . . .
I have created a VMware snapshot which I keep rolling back to for testing. If anybody has any ideas, that would be great.
### What should have happened?
SD should run as expected and produce the GUI http page.
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo-2024-11-15-17-12.json](https://github.com/user-attachments/files/17778831/sysinfo-2024-11-15-17-12.json)
### Console logs
```Shell
N/A
```
### Additional information
N/A | bug-report | low | Critical |
2,662,624,860 | svelte | Svelte 5 Exporting $state compile problem | ### Describe the bug
If I want to export a shared state, the normal way is to do it like this:
### Option 1
```ts
// rune.svelte.ts
export const rune = <T>(initialValue: T) => {
let _rune = $state(initialValue);
return {
get value() {
return _rune;
},
set value(v: T) {
_rune = v;
}
};
};
```
### Option 2
This also works:
```ts
// rune.svelte.ts
export const rune = <T>(initialValue: T) => {
const _rune = $state({ value: initialValue });
return _rune;
};
```
### Option 3
However, if I simplify option 2, I get a compile error:
```ts
export const rune = <T>(initialValue: T) => $state({ value: initialValue });
// OR
export const rune = <T>(initialValue: T) => {
return $state({ value: initialValue });
};
```
I get the error:
```
`$state(...)` can only be used as a variable declaration initializer or a class field
```
I would expect option 3 to compile in exactly the same way as option 2!?
### Reproduction
[REPL](https://svelte.dev/playground/2c06d586d890466e8dc7da6fb111efb7?version=5.2.0#H4sIAAAAAAAACnWPwUrEQAyGXyUEYbtQuicv7XbBg7CHPYiCF0ekliyOtJkyk66VYd5dZrbKFvWY5PuTLx656QlL3JsPEAOjI5A3gocTdUJwf3t3wByPuiOH5ZNH-RwiHRuYf2dvhqFwKYA5vjaO_uq3hoVYHJa4da3Vg-wUK9H9YKyABzsyQYCjNT2sik0s53Dx7laV4ki3hp3ABHXCs-t1pXi7-Vnnp-LUdCMFzFFoEizFjhTyf7yXN5buv2YX_jQl57NM8q4h06xFN91jvL-GegdeMczMywxdOWmEMg_JsoTLDIT4jBJLMlo-RyrFoVr-8hy-ADlJFLyzAQAA)
### Logs
```shell
Error message above.
```
### System Info
```shell
Svelte 5.2.0
```
### Severity
annoyance | feature request,runes | medium | Critical |
2,662,635,918 | flutter | VideoPlayerTests testSeekToWhilePlayingDoesNotStopDisplayLink flakes: (([player position]) equal to (1234)) failed: ("0") is not equal to ("1234"), "HALPlugIn::DeviceGetCurrentTime: got an error from the plug-in routine | Flaked in presubmit
```
Test Case '-[VideoPlayerTests testSeekToWhilePlayingDoesNotStopDisplayLink]' started.
2024-11-14 16:58:12.673815-0800 Runner[19814:124579] HALPlugIn.cpp:551 HALPlugIn::DeviceGetCurrentTime: got an error from the plug-in routine, Error: 1937010544 (stop)
2024-11-14 16:58:12.674364-0800 Runner[19814:124579] HALPlugIn.cpp:551 HALPlugIn::DeviceGetCurrentTime: got an error from the plug-in routine, Error: 1937010544 (stop)
2024-11-14 16:58:12.674550-0800 Runner[19814:124579] HALPlugIn.cpp:551 HALPlugIn::DeviceGetCurrentTime: got an error from the plug-in routine, Error: 1937010544 (stop)
2024-11-14 16:58:12.674702-0800 Runner[19814:124579] HALPlugIn.cpp:551 HALPlugIn::DeviceGetCurrentTime: got an error from the plug-in routine, Error: 1937010544 (stop)
2024-11-14 16:58:12.706392-0800 Runner[19814:124773] [connection] nw_connection_add_timestamp_locked_on_nw_queue [C3] Hit maximum timestamp count, will start dropping events
/Volumes/Work/s/w/ir/x/w/packages/packages/video_player/video_player_avfoundation/darwin/RunnerTests/VideoPlayerTests.m:354: error: -[VideoPlayerTests testSeekToWhilePlayingDoesNotStopDisplayLink] : (([player position]) equal to (1234)) failed: ("0") is not equal to ("1234")
Test Case '-[VideoPlayerTests testSeekToWhilePlayingDoesNotStopDisplayLink]' failed (0.149 seconds).
```
https://logs.chromium.org/logs/flutter/buildbucket/cr-buildbucket/8731246231638078593/+/u/Run_package_tests/native_test/stdout | platform-ios,p: video_player,package,P3,c: flake,team-ios,triaged-ios | low | Critical |
2,662,644,159 | go | proposal: html/template: export filters and escapers | ### Proposal Details
I try to create a [tool](https://github.com/apleshkov/tmtr) to translate text/template and html/template to go source code.
For example:
```html
<div>{{.}}</div>
```
translates to something like this:
```go
func Render(w io.Writer, data string) {
io.WriteString(w, HTMLEscape(data))
}
```
Go has great [escaping mechanism](https://pkg.go.dev/html/template#hdr-A_fuller_picture) for html, but it's non-published API unfortunately.
Since Go 1.23 the `//go:linkname` directive is [restricted](https://tip.golang.org/doc/go1.23#linker), which has made some things impossible. Re-implementing the whole escaping mechanism from scratch feels like a very bad option.
It's possible to enrich the template tree by calling `html/template`'s `Execute` method, and it doesn't matter whether it fails or not, because the current implementation adds the additional escapers before the actual execution. This trick works now, but could break at any moment, so it'd be great to publish the `escape` method https://cs.opensource.google/go/go/+/refs/tags/go1.23.3:src/html/template/template.go;l=96
The other huge problem is the unpublished filters and escapers themselves: https://cs.opensource.google/go/go/+/refs/tags/go1.23.3:src/html/template/escape.go;l=64
I couldn't find a way to call them without `//go:linkname`. It's possible to create a special template like `{{_html_template_attrescaper .}}` and execute it. That does work, but it's very slow. So it would be awesome to expose all the escapers from the `funcMap` variable.
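To make the gap concrete, here is a small self-contained sketch: the only escaper exported today (`template.HTMLEscape`) handles plain HTML text content, while context-aware behavior such as URL filtering only happens inside `Execute`, via the unexported escape pass (the helper names here are mine, not part of the API):

```go
package main

import (
	"bytes"
	"html/template"
	"os"
)

// escapeText uses the one escaper html/template does export:
// plain HTML text-content escaping.
func escapeText(s string) string {
	var buf bytes.Buffer
	template.HTMLEscape(&buf, []byte(s))
	return buf.String()
}

// renderHref executes a template whose value lands in a URL attribute,
// so the unexported context-aware escape pass picks the URL filter.
func renderHref(v string) string {
	var buf bytes.Buffer
	t := template.Must(template.New("t").Parse(`<a href="{{.}}">x</a>`))
	t.Execute(&buf, v)
	return buf.String()
}

func main() {
	os.Stdout.WriteString(escapeText("<div>") + "\n") // &lt;div&gt;
	// Unsafe schemes are replaced with the #ZgotmplZ marker -- behavior a
	// template-to-Go translator cannot reproduce without the unexported
	// escapers and filters.
	os.Stdout.WriteString(renderHref("javascript:alert(1)") + "\n")
}
```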
These updates shouldn't break the API, but instead other tools can reuse it. It could be similar translators, or even other html template engines, cause escaping is the core part anyway. | Proposal | low | Critical |
2,662,683,064 | kubernetes | New EvictionBlocked PodDisruptionCondition | ### What would you like to be added?
An EvictionByEvictionAPI [pod disruption condition](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pod-disruption-conditions) already exists. I'm suggesting that an EvictionBlocked condition be added so it's possible to know when an eviction is being attempted but is blocked by a pod disruption budget at the API server.
### Why is this needed?
Controllers (autoscalers, custom operators, and replicaset/statefulset) have no insight into failed evictions. So if a pod disruption budget is at allowedDisruptions == 0, either because some pods are having errors or because the PDB is just very "tight" by default (minAvailable == replica count), then the controller can't preemptively take action and scale out of the problem.
Workarounds today include a webhook on evictions, but that seems a bit unnatural since you're not validating or mutating the evictions - you're essentially just watching them. If we use a pod disruption condition instead, controllers can just watch pods, which many are already doing.
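For reference, a sketch of what the existing condition looks like in pod status today, per the Kubernetes docs (the `message` text is illustrative), with the proposed condition alongside as a comment since its exact shape is hypothetical:

```yaml
status:
  conditions:
    - type: DisruptionTarget
      status: "True"
      reason: EvictionByEvictionAPI      # set when an API-initiated eviction is requested
      message: "Eviction API: evicting"  # illustrative
    # Proposed (shape hypothetical): surfaced when the Eviction API rejects
    # the request because the PDB has allowedDisruptions == 0.
    # - type: EvictionBlocked
    #   status: "True"
    #   reason: BlockedByPodDisruptionBudget
```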
Here's an example of a trivial project that would benefit from this [paulgmiller/k8s-pdb-autoscaler](https://github.com/paulgmiller/k8s-pdb-autoscaler) but the hope is this would be beneficial to lots of controllers | kind/feature,sig/apps,needs-triage | medium | Critical |
2,662,716,365 | go | crypto/internal/fips/check:exe_external: unrecognized failures | ```
#!watchflakes
default <- pkg == "crypto/internal/fips/check:exe_external" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8731297058133517505)):
FAIL crypto/internal/fips/check [build failed]
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,662,719,878 | electron | useSystemPicker with getDisplayMedia should not ask to pick another window when calling applyConstraints on the tracks. | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.2.0
### What operating system(s) are you using?
macOS
### Operating System Version
15.1
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
N/A
### Expected Behavior
It should function like Chrome does: after picking the window, you can call `applyConstraints` as many times as you want without having to pick another window each time.
### Actual Behavior
Calling `applyConstraints` on a track immediately asks the user to pick another window to share in order to make changes.
### Testcase Gist URL
_No response_
### Additional Information
_No response_ | platform/macOS,bug :beetle:,33-x-y | low | Critical |
2,662,737,196 | PowerToys | Bug Report: Text Extractor Clipboard Issue in PowerToys | ### Microsoft PowerToys version
0.84.1 and 0.86.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
TextExtractor
### Steps to reproduce
When using the Text Extractor tool in PowerToys, the extracted text does not appear in the clipboard history (Win + V) as expected. However, the behavior becomes inconsistent when pasting or interacting with the clipboard history:
* After using Text Extractor, the text does not show up in the clipboard history (Win + V).
* If I paste immediately, the text extracted by Text Extractor is pasted.
* In the clipboard history, my previous clipboard items are displayed instead of the extracted text.
* If I select the first item in the clipboard history, the extracted text from Text Extractor is pasted.
* Selecting the second item pastes my original clipboard content, but switching back to the first item pastes the original clipboard content again instead of the extracted text.
Steps to Reproduce:
1. Open PowerToys and ensure the Text Extractor tool is enabled.
2. Create multiple entries in the clipboard by copying different pieces of text.
3. Activate Text Extractor using its shortcut (Win + Shift + T).
4. Select a portion of text by clicking and dragging over it with the Text Extractor tool.
5. After the tool finishes processing, open the clipboard history by pressing Win + V.
6. Observe the clipboard history list:
* Check if the extracted text from Text Extractor appears as the most recent clipboard entry. **(In my case no)**
7. Without interacting with the clipboard history menu, paste the content into a document or text editor using Ctrl + V.
* Note whether the extracted text is pasted correctly.
8. Open the clipboard history again (Win + V) and try the following:
* Select the first item in the clipboard history and paste it into a document.
* Then, select the second item and paste it.
* Finally, re-select the first item and paste it.
9. Observe and document the behavior of the pasted content at each step.
### ✔️ Expected Behavior
The extracted text should appear as the most recent item in the clipboard history.
The extracted text should remain accessible when selected from the clipboard history.
### ❌ Actual Behavior
The extracted text is not visible in the clipboard history, leading to inconsistent pasting behavior.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,662,757,028 | yt-dlp | Requesting support for escribe videos | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
Canada
### Example URLs
https://pub-guelph.escribemeetings.com/Players/ISIStandAlonePlayer.aspx?Id=3ac80dd1-d45a-45e8-8be0-cfe526e5b829
https://pub-guelph.escribemeetings.com/Players/ISIStandAlonePlayer.aspx?Id=4a0da857-5283-48ff-9675-6e41a6608b52
https://pub-guelph.escribemeetings.com/Players/ISIStandAlonePlayer.aspx?Id=99dad340-87ab-46cb-a53b-326b8e57b9af
### Provide a description that is worded well enough to be understood
City council meetings are published on pub-guelph.escribemeetings.com which are available for viewing. However, there is no support to download the videos from that website. This request is to add support for that site, which will likely allow people in other cities to download their city council meetings.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp https://pub-guelph.escribemeetings.com/Players/ISIStandAlonePlayer.aspx?Id=3ac80dd1-d45a-45e8-8be0-cfe526e5b829 ✔
zsh: no matches found: https://pub-guelph.escribemeetings.com/Players/ISIStandAlonePlayer.aspx?Id=3ac80dd1-d45a-45e8-8be0-cfe526e5b829
```
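A note on the verbose output above: the `zsh: no matches found` error comes from zsh's glob expansion tripping on the unquoted `?` in the URL, not from yt-dlp itself; quoting the URL sidesteps it. The sketch below uses `echo` as a stand-in for the actual yt-dlp invocation:

```shell
# Quote URLs containing '?' or '&' so zsh passes them through literally
# instead of attempting filename globbing.
url='https://pub-guelph.escribemeetings.com/Players/ISIStandAlonePlayer.aspx?Id=3ac80dd1-d45a-45e8-8be0-cfe526e5b829'
echo "$url"
```

In practice the last line would be `yt-dlp -vU "$url"`.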
| site-request,triage | low | Critical |
2,662,767,474 | pytorch | auto-grad graph replicates split_with_sizes(lengths) X times where X = len(lengths), affecting compile time | The auto-grad graph replicates split_with_sizes(lengths) X times, where X = len(lengths), affecting compile time.
I think this repetition might be related to the functionalization view replay; unsure yet.
We spend 50% of the time in split_with_sizes.
strobelight profile:
https://fburl.com/scuba/pyperf_experimental/on_demand/pz6u9po2
When lengths has size 1000, this is replicated 1000 times.
I counted how many times we call split_with_sizes when the input size is 8, and it's about 32 times.
That means for input size 1000 it's maybe 32/8 * 1000 = 4K times?
This was noted while debugging:
https://github.com/pytorch/pytorch/issues/138729
Repro:
```
import time
import torch
import random


if __name__ == "__main__":

    def make_val(n=10):
        values = torch.arange(n, dtype=torch.float)
        return values

    @torch.compile(fullgraph=True, dynamic=True)
    def consolidate(tensors):
        start = 0
        lengths = []
        swaps = []
        origs = []

        def view_and_pad(tensor: torch.Tensor, lengths=lengths) -> torch.Tensor:
            nonlocal start
            origs.append(tensor)
            # (1) View the tensor as a contiguous uint8 buffer.
            swap = tensor.view(-1).view(torch.uint8)
            # (2) Pad the tensor to a multiple of 8 elements.
            # result must always have a multiple of 8 elements
            pad = swap.numel() % 8
            if pad != 0:
                swap = torch.cat([swap, swap.new_zeros(8 - pad)])
            n = swap.numel()
            start = start + n
            # add length of swap to lengths. (not all lengths are always the same)
            lengths.append(n)
            # Add the swap to the list of swaps.
            swaps.append(swap)

        for val in list(tensors):
            view_and_pad(val)

        # create empty storage with size filesize.
        filesize = start
        storage = torch.empty(
            filesize,
            dtype=torch.uint8,
        )
        # split storage into tensors of size lengths. lengths are not always the same.
        torch.cat(swaps, out=storage)
        # swaps have the same data but viewed as one tensor.
        swaps = storage.split(lengths)

        def view_old_as_new(
            v: torch.Tensor, oldv: torch.Tensor
        ) -> torch.Tensor:
            if oldv is None:
                return v
            v = v.view(oldv.dtype)
            if v.numel() > oldv.numel():
                return v[: oldv.numel()].view(oldv.shape)
            return v.view(oldv.shape)

        result = [
            view_old_as_new(v, oldv)
            for (v, oldv) in zip(swaps, origs, strict=True)
        ]
        return result

    for n in [7]:
        torch.compiler.reset()
        tensors = [make_val(10) for i in range(n)]
        t0 = time.time()
        results = consolidate(tensors)
        print(n, time.time() - t0)
        assert len(set(t.untyped_storage().data_ptr() for t in results)) == 1
        # [[ X input tensors each of size yi]]
        # can we write this as tensor of size X*y
```
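For reference, the length bookkeeping in view_and_pad above reduces to rounding each byte count up to a multiple of 8; a plain-Python model of that arithmetic (no torch needed):

```python
# Each tensor's uint8 view is padded up to a multiple of 8 bytes;
# these padded lengths are what storage.split(lengths) receives.
def padded_length(numel_bytes: int) -> int:
    pad = numel_bytes % 8
    return numel_bytes if pad == 0 else numel_bytes + (8 - pad)

# Seven float32 tensors of 10 elements -> 40 bytes each, already 8-aligned.
lengths = [padded_length(10 * 4) for _ in range(7)]
print(lengths, sum(lengths))  # [40, 40, 40, 40, 40, 40, 40] 280
```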
Dynamo graph:
```
def forward(self, s0 : torch.SymInt, L_tensors_0_ : torch.Tensor, L_tensors_1_ : torch.Tensor, L_tensors_2_ : torch.Tensor, L_tensors_3_ : torch.Tensor, L_tensors_4_ : torch.Tensor, L_tensors_5_ : torch.Tensor, L_tensors_6_ : torch.Tensor):
l_tensors_0_ = L_tensors_0_
l_tensors_1_ = L_tensors_1_
l_tensors_2_ = L_tensors_2_
l_tensors_3_ = L_tensors_3_
l_tensors_4_ = L_tensors_4_
l_tensors_5_ = L_tensors_5_
l_tensors_6_ = L_tensors_6_
view = l_tensors_0_.view(-1)
swap = view.view(torch.uint8); view = None
mul = 4 * s0; s0 = None
view_2 = l_tensors_1_.view(-1)
swap_1 = view_2.view(torch.uint8); view_2 = None
add_1 = mul + mul
view_4 = l_tensors_2_.view(-1)
swap_2 = view_4.view(torch.uint8); view_4 = None
add_2 = add_1 + mul; add_1 = None
view_6 = l_tensors_3_.view(-1)
swap_3 = view_6.view(torch.uint8); view_6 = None
add_3 = add_2 + mul; add_2 = None
view_8 = l_tensors_4_.view(-1)
swap_4 = view_8.view(torch.uint8); view_8 = None
add_4 = add_3 + mul; add_3 = None
view_10 = l_tensors_5_.view(-1)
swap_5 = view_10.view(torch.uint8); view_10 = None
add_5 = add_4 + mul; add_4 = None
view_12 = l_tensors_6_.view(-1)
swap_6 = view_12.view(torch.uint8); view_12 = None
add_6 = add_5 + mul; add_5 = None
storage = torch.empty(add_6, dtype = torch.uint8); add_6 = None
cat = torch.cat([swap, swap_1, swap_2, swap_3, swap_4, swap_5, swap_6], out = storage); swap = swap_1 = swap_2 = swap_3 = swap_4 = swap_5 = swap_6 = cat = None
split = storage.split([mul, mul, mul, mul, mul, mul, mul]); storage = mul = None
v = split[0]
v_2 = split[1]
v_4 = split[2]
v_6 = split[3]
v_8 = split[4]
v_10 = split[5]
v_12 = split[6]; split = None
v_1 = v.view(torch.float32); v = None
size = l_tensors_0_.size(); l_tensors_0_ = None
view_15 = v_1.view(size); v_1 = size = None
v_3 = v_2.view(torch.float32); v_2 = None
size_1 = l_tensors_1_.size(); l_tensors_1_ = None
view_17 = v_3.view(size_1); v_3 = size_1 = None
v_5 = v_4.view(torch.float32); v_4 = None
size_2 = l_tensors_2_.size(); l_tensors_2_ = None
view_19 = v_5.view(size_2); v_5 = size_2 = None
v_7 = v_6.view(torch.float32); v_6 = None
size_3 = l_tensors_3_.size(); l_tensors_3_ = None
view_21 = v_7.view(size_3); v_7 = size_3 = None
v_9 = v_8.view(torch.float32); v_8 = None
size_4 = l_tensors_4_.size(); l_tensors_4_ = None
view_23 = v_9.view(size_4); v_9 = size_4 = None
v_11 = v_10.view(torch.float32); v_10 = None
size_5 = l_tensors_5_.size(); l_tensors_5_ = None
view_25 = v_11.view(size_5); v_11 = size_5 = None
v_13 = v_12.view(torch.float32); v_12 = None
size_6 = l_tensors_6_.size(); l_tensors_6_ = None
view_27 = v_13.view(size_6); v_13 = size_6 = None
return (view_15, view_17, view_19, view_21, view_23, view_25, view_27)
```
auto_grad graph:
```
def forward(self, arg0_1, arg1_1, arg2_1, arg3_1, arg4_1, arg5_1, arg6_1, arg7_1):
view = torch.ops.aten.view.default(arg1_1, [-1]); arg1_1 = None
view_1 = torch.ops.aten.view.dtype(view, torch.uint8); view = None
alias = torch.ops.aten.alias.default(view_1); view_1 = None
alias_1 = torch.ops.aten.alias.default(alias); alias = None
mul_1 = 4 * arg0_1
view_2 = torch.ops.aten.view.default(arg2_1, [-1]); arg2_1 = None
view_3 = torch.ops.aten.view.dtype(view_2, torch.uint8); view_2 = None
alias_2 = torch.ops.aten.alias.default(view_3); view_3 = None
alias_3 = torch.ops.aten.alias.default(alias_2); alias_2 = None
add_12 = mul_1 + mul_1
view_4 = torch.ops.aten.view.default(arg3_1, [-1]); arg3_1 = None
view_5 = torch.ops.aten.view.dtype(view_4, torch.uint8); view_4 = None
alias_4 = torch.ops.aten.alias.default(view_5); view_5 = None
alias_5 = torch.ops.aten.alias.default(alias_4); alias_4 = None
add_19 = add_12 + mul_1; add_12 = None
view_6 = torch.ops.aten.view.default(arg4_1, [-1]); arg4_1 = None
view_7 = torch.ops.aten.view.dtype(view_6, torch.uint8); view_6 = None
alias_6 = torch.ops.aten.alias.default(view_7); view_7 = None
alias_7 = torch.ops.aten.alias.default(alias_6); alias_6 = None
add_26 = add_19 + mul_1; add_19 = None
view_8 = torch.ops.aten.view.default(arg5_1, [-1]); arg5_1 = None
view_9 = torch.ops.aten.view.dtype(view_8, torch.uint8); view_8 = None
alias_8 = torch.ops.aten.alias.default(view_9); view_9 = None
alias_9 = torch.ops.aten.alias.default(alias_8); alias_8 = None
add_33 = add_26 + mul_1; add_26 = None
view_10 = torch.ops.aten.view.default(arg6_1, [-1]); arg6_1 = None
view_11 = torch.ops.aten.view.dtype(view_10, torch.uint8); view_10 = None
alias_10 = torch.ops.aten.alias.default(view_11); view_11 = None
alias_11 = torch.ops.aten.alias.default(alias_10); alias_10 = None
add_40 = add_33 + mul_1; add_33 = None
view_12 = torch.ops.aten.view.default(arg7_1, [-1]); arg7_1 = None
view_13 = torch.ops.aten.view.dtype(view_12, torch.uint8); view_12 = None
alias_12 = torch.ops.aten.alias.default(view_13); view_13 = None
alias_13 = torch.ops.aten.alias.default(alias_12); alias_12 = None
add_47 = add_40 + mul_1; add_40 = None
empty = torch.ops.aten.empty.memory_format([add_47], dtype = torch.uint8, device = device(type='cpu'), pin_memory = False); add_47 = empty = None
cat = torch.ops.aten.cat.default([alias_1, alias_3, alias_5, alias_7, alias_9, alias_11, alias_13]); alias_1 = alias_3 = alias_5 = alias_7 = alias_9 = alias_11 = alias_13 = None
split_with_sizes = torch.ops.aten.split_with_sizes.default(cat, [mul_1, mul_1, mul_1, mul_1, mul_1, mul_1, mul_1])
getitem = split_with_sizes[0]
getitem_1 = split_with_sizes[1]
getitem_2 = split_with_sizes[2]
getitem_3 = split_with_sizes[3]
getitem_4 = split_with_sizes[4]
getitem_5 = split_with_sizes[5]
getitem_6 = split_with_sizes[6]; split_with_sizes = None
view_14 = torch.ops.aten.view.dtype(getitem, torch.float32); getitem = None
alias_14 = torch.ops.aten.alias.default(view_14); view_14 = None
alias_15 = torch.ops.aten.alias.default(alias_14); alias_14 = None
view_15 = torch.ops.aten.view.default(alias_15, [arg0_1]); alias_15 = view_15 = None
view_16 = torch.ops.aten.view.dtype(getitem_1, torch.float32); getitem_1 = None
alias_16 = torch.ops.aten.alias.default(view_16); view_16 = None
alias_17 = torch.ops.aten.alias.default(alias_16); alias_16 = None
view_17 = torch.ops.aten.view.default(alias_17, [arg0_1]); alias_17 = view_17 = None
view_18 = torch.ops.aten.view.dtype(getitem_2, torch.float32); getitem_2 = None
alias_18 = torch.ops.aten.alias.default(view_18); view_18 = None
alias_19 = torch.ops.aten.alias.default(alias_18); alias_18 = None
view_19 = torch.ops.aten.view.default(alias_19, [arg0_1]); alias_19 = view_19 = None
view_20 = torch.ops.aten.view.dtype(getitem_3, torch.float32); getitem_3 = None
alias_20 = torch.ops.aten.alias.default(view_20); view_20 = None
alias_21 = torch.ops.aten.alias.default(alias_20); alias_20 = None
view_21 = torch.ops.aten.view.default(alias_21, [arg0_1]); alias_21 = view_21 = None
view_22 = torch.ops.aten.view.dtype(getitem_4, torch.float32); getitem_4 = None
alias_22 = torch.ops.aten.alias.default(view_22); view_22 = None
alias_23 = torch.ops.aten.alias.default(alias_22); alias_22 = None
view_23 = torch.ops.aten.view.default(alias_23, [arg0_1]); alias_23 = view_23 = None
view_24 = torch.ops.aten.view.dtype(getitem_5, torch.float32); getitem_5 = None
alias_24 = torch.ops.aten.alias.default(view_24); view_24 = None
alias_25 = torch.ops.aten.alias.default(alias_24); alias_24 = None
view_25 = torch.ops.aten.view.default(alias_25, [arg0_1]); alias_25 = view_25 = None
view_26 = torch.ops.aten.view.dtype(getitem_6, torch.float32); getitem_6 = None
alias_26 = torch.ops.aten.alias.default(view_26); view_26 = None
alias_27 = torch.ops.aten.alias.default(alias_26); alias_26 = None
view_27 = torch.ops.aten.view.default(alias_27, [arg0_1]); alias_27 = view_27 = None
split_with_sizes_1 = torch.ops.aten.split_with_sizes.default(cat, [mul_1, mul_1, mul_1, mul_1, mul_1, mul_1, mul_1])
getitem_7 = split_with_sizes_1[0]
getitem_8 = split_with_sizes_1[1]; getitem_8 = None
getitem_9 = split_with_sizes_1[2]; getitem_9 = None
getitem_10 = split_with_sizes_1[3]; getitem_10 = None
getitem_11 = split_with_sizes_1[4]; getitem_11 = None
getitem_12 = split_with_sizes_1[5]; getitem_12 = None
getitem_13 = split_with_sizes_1[6]; split_with_sizes_1 = getitem_13 = None
view_28 = torch.ops.aten.view.dtype(getitem_7, torch.float32); getitem_7 = None
alias_28 = torch.ops.aten.alias.default(view_28); view_28 = None
alias_29 = torch.ops.aten.alias.default(alias_28); alias_28 = None
alias_30 = torch.ops.aten.alias.default(alias_29); alias_29 = None
view_29 = torch.ops.aten.view.default(alias_30, [arg0_1]); alias_30 = None
split_with_sizes_2 = torch.ops.aten.split_with_sizes.default(cat, [mul_1, mul_1, mul_1, mul_1, mul_1, mul_1, mul_1])
getitem_14 = split_with_sizes_2[0]; getitem_14 = None
getitem_15 = split_with_sizes_2[1]
getitem_16 = split_with_sizes_2[2]; getitem_16 = None
getitem_17 = split_with_sizes_2[3]; getitem_17 = None
getitem_18 = split_with_sizes_2[4]; getitem_18 = None
getitem_19 = split_with_sizes_2[5]; getitem_19 = None
getitem_20 = split_with_sizes_2[6]; split_with_sizes_2 = getitem_20 = None
view_30 = torch.ops.aten.view.dtype(getitem_15, torch.float32); getitem_15 = None
alias_31 = torch.ops.aten.alias.default(view_30); view_30 = None
alias_32 = torch.ops.aten.alias.default(alias_31); alias_31 = None
alias_33 = torch.ops.aten.alias.default(alias_32); alias_32 = None
view_31 = torch.ops.aten.view.default(alias_33, [arg0_1]); alias_33 = None
split_with_sizes_3 = torch.ops.aten.split_with_sizes.default(cat, [mul_1, mul_1, mul_1, mul_1, mul_1, mul_1, mul_1])
getitem_21 = split_with_sizes_3[0]; getitem_21 = None
getitem_22 = split_with_sizes_3[1]; getitem_22 = None
getitem_23 = split_with_sizes_3[2]
getitem_24 = split_with_sizes_3[3]; getitem_24 = None
getitem_25 = split_with_sizes_3[4]; getitem_25 = None
getitem_26 = split_with_sizes_3[5]; getitem_26 = None
getitem_27 = split_with_sizes_3[6]; split_with_sizes_3 = getitem_27 = None
view_32 = torch.ops.aten.view.dtype(getitem_23, torch.float32); getitem_23 = None
alias_34 = torch.ops.aten.alias.default(view_32); view_32 = None
alias_35 = torch.ops.aten.alias.default(alias_34); alias_34 = None
alias_36 = torch.ops.aten.alias.default(alias_35); alias_35 = None
view_33 = torch.ops.aten.view.default(alias_36, [arg0_1]); alias_36 = None
split_with_sizes_4 = torch.ops.aten.split_with_sizes.default(cat, [mul_1, mul_1, mul_1, mul_1, mul_1, mul_1, mul_1])
getitem_28 = split_with_sizes_4[0]; getitem_28 = None
getitem_29 = split_with_sizes_4[1]; getitem_29 = None
getitem_30 = split_with_sizes_4[2]; getitem_30 = None
getitem_31 = split_with_sizes_4[3]
getitem_32 = split_with_sizes_4[4]; getitem_32 = None
getitem_33 = split_with_sizes_4[5]; getitem_33 = None
getitem_34 = split_with_sizes_4[6]; split_with_sizes_4 = getitem_34 = None
view_34 = torch.ops.aten.view.dtype(getitem_31, torch.float32); getitem_31 = None
alias_37 = torch.ops.aten.alias.default(view_34); view_34 = None
alias_38 = torch.ops.aten.alias.default(alias_37); alias_37 = None
alias_39 = torch.ops.aten.alias.default(alias_38); alias_38 = None
view_35 = torch.ops.aten.view.default(alias_39, [arg0_1]); alias_39 = None
split_with_sizes_5 = torch.ops.aten.split_with_sizes.default(cat, [mul_1, mul_1, mul_1, mul_1, mul_1, mul_1, mul_1])
getitem_35 = split_with_sizes_5[0]; getitem_35 = None
getitem_36 = split_with_sizes_5[1]; getitem_36 = None
getitem_37 = split_with_sizes_5[2]; getitem_37 = None
getitem_38 = split_with_sizes_5[3]; getitem_38 = None
getitem_39 = split_with_sizes_5[4]
getitem_40 = split_with_sizes_5[5]; getitem_40 = None
getitem_41 = split_with_sizes_5[6]; split_with_sizes_5 = getitem_41 = None
view_36 = torch.ops.aten.view.dtype(getitem_39, torch.float32); getitem_39 = None
alias_40 = torch.ops.aten.alias.default(view_36); view_36 = None
alias_41 = torch.ops.aten.alias.default(alias_40); alias_40 = None
alias_42 = torch.ops.aten.alias.default(alias_41); alias_41 = None
view_37 = torch.ops.aten.view.default(alias_42, [arg0_1]); alias_42 = None
split_with_sizes_6 = torch.ops.aten.split_with_sizes.default(cat, [mul_1, mul_1, mul_1, mul_1, mul_1, mul_1, mul_1])
getitem_42 = split_with_sizes_6[0]; getitem_42 = None
getitem_43 = split_with_sizes_6[1]; getitem_43 = None
getitem_44 = split_with_sizes_6[2]; getitem_44 = None
getitem_45 = split_with_sizes_6[3]; getitem_45 = None
getitem_46 = split_with_sizes_6[4]; getitem_46 = None
getitem_47 = split_with_sizes_6[5]
getitem_48 = split_with_sizes_6[6]; split_with_sizes_6 = getitem_48 = None
view_38 = torch.ops.aten.view.dtype(getitem_47, torch.float32); getitem_47 = None
alias_43 = torch.ops.aten.alias.default(view_38); view_38 = None
alias_44 = torch.ops.aten.alias.default(alias_43); alias_43 = None
alias_45 = torch.ops.aten.alias.default(alias_44); alias_44 = None
view_39 = torch.ops.aten.view.default(alias_45, [arg0_1]); alias_45 = None
split_with_sizes_7 = torch.ops.aten.split_with_sizes.default(cat, [mul_1, mul_1, mul_1, mul_1, mul_1, mul_1, mul_1]); cat = mul_1 = None
getitem_49 = split_with_sizes_7[0]; getitem_49 = None
getitem_50 = split_with_sizes_7[1]; getitem_50 = None
getitem_51 = split_with_sizes_7[2]; getitem_51 = None
getitem_52 = split_with_sizes_7[3]; getitem_52 = None
getitem_53 = split_with_sizes_7[4]; getitem_53 = None
getitem_54 = split_with_sizes_7[5]; getitem_54 = None
getitem_55 = split_with_sizes_7[6]; split_with_sizes_7 = None
view_40 = torch.ops.aten.view.dtype(getitem_55, torch.float32); getitem_55 = None
alias_46 = torch.ops.aten.alias.default(view_40); view_40 = None
alias_47 = torch.ops.aten.alias.default(alias_46); alias_46 = None
alias_48 = torch.ops.aten.alias.default(alias_47); alias_47 = None
view_41 = torch.ops.aten.view.default(alias_48, [arg0_1]); alias_48 = arg0_1 = None
return (view_29, view_31, view_33, view_35, view_37, view_39, view_41)
```
Where exactly in the code does that happen? In this function:
```
def _create_graph(f, args, *, aot_config: AOTConfig) -> torch.fx.GraphModule:
    # FunctionalTensorMode must be enabled here.
    # See Note [Accessing .grad_fn on FunctionalTensor]
    with enable_python_dispatcher(), FunctionalTensorMode(
        pre_dispatch=aot_config.pre_dispatch,
        export=aot_config.is_export,
        # Allow token discovery for joint fn tracing as tokens can be used in backward.
        _allow_token_discovery=True,
    ):
        fx_g = make_fx(
            f,
            decomposition_table=aot_config.decompositions,
            record_module_stack=True,
            pre_dispatch=aot_config.pre_dispatch,
        )(*args)

    return fx_g
```
in /home/lsakka/pytorch/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py
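For intuition, the pattern visible in the auto-grad graph above (a fresh split_with_sizes replayed from the base for each output — eight calls for seven outputs) can be modeled in plain Python; the counting below is illustrative, not PyTorch's actual replay code:

```python
# Model of per-output view replay: the N-way split is traced once for the
# original op, then replayed once per output, giving N + 1 calls in total.
calls = {"split": 0}

def split(base: str, n: int):
    calls["split"] += 1
    return [f"{base}[{i}]" for i in range(n)]

N = 7
outs = split("cat", N)                              # the original split
replayed = [split("cat", N)[i] for i in range(N)]   # one full replay per output
print(calls["split"])  # 8 for N = 7, matching the graph above
```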
cc @bdhirsh @ezyang @zou3519 @penguinwu @yf225 @albanD @gqchen @pearu @nikitaved @soulitzer @Varal7 @xmfan @chauhang | triaged,module: functionalization,oncall: pt2,module: pt2-dispatcher | low | Critical |
2,662,773,228 | electron | useSystemPicker should return an audio track when the user requests audio in the constraints { audio: true}. | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.2.0
### What operating system(s) are you using?
macOS
### Operating System Version
15.1
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
N/A
### Expected Behavior
An audio track should be returned when the user requests audio in the constraints `{audio: true}`.
### Actual Behavior
If you use the native system picker, no audio track is returned.
### Testcase Gist URL
_No response_
### Additional Information
_No response_ | platform/macOS,bug :beetle:,component/desktopcapturer,33-x-y | low | Critical |
2,662,790,138 | react | [React 19] Bug : Weird behavior when changing the order of a list | # Explanations :
There seems to be some weird behavior introduced in the React 19 RC update. When we iterate over a list to create components and use a unique id as the key, sorting causes certain elements to be re-rendered, which was not the case in version 18.
# The effect :
```js
// In each item
useEffect(() => {
  const doc = ref.current;
  if (!doc) {
    return;
  }
  const timeout = setTimeout(() => {
    doc.animate(
      [{ outlineColor: "#d20f39" }, { outlineColor: "transparent" }],
      {
        duration: 300,
        easing: "cubic-bezier(0.4, 0, 0.2, 1)",
        iterations: 2,
      }
    );
  }, 500);
  return () => {
    clearTimeout(timeout);
  };
}, [item.title]);
```
# React 18.3.1 :
https://github.com/user-attachments/assets/bb21513a-5eda-48bd-b306-81215d0a25f2
# React 19.0.0-rc-66855b96-20241106 :
https://github.com/user-attachments/assets/8b229790-56e9-4ace-be38-e4d7656ce61d
# Code
https://github.com/Armand-Dusart/react-19-key-bug
# Meta
Node : 22.9.0
npm : 10.8.3
| React 19 | low | Critical |
2,662,842,656 | vscode | improve failure message when disposables are leaked in unit test | This could probably be more immediately helpful to see why it failed

| debt,engineering | low | Critical |
2,662,889,028 | pytorch | torch.cond with unbacked symint failed to compile in AOTI | ### 🐛 Describe the bug
Repro:
```python
def test_cond_symint_input(self):
    class M(torch.nn.Module):
        def forward(self, x, y, z):
            a = y.shape[0]
            b = z.item()

            def true_fn(x):
                return x + a

            def false_fn(x):
                return x + b * z

            return torch.cond(x.shape[0] > 5, true_fn, false_fn, (x,))

    inp = (torch.ones(3, 3), torch.ones(3, 3), torch.tensor(5))
    ep = torch.export.export(M(), inp, dynamic_shapes={"x": {0: Dim("d")}, "y": {0: Dim("d1")}, "z": None})
    print(ep)
    path = torch._inductor.aot_compile(ep.module(), inp)
    breakpoint()
    print(f"{path[:-3]}.cpp")
```
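For reference on what the repro's branch computes, `torch.cond(pred, true_fn, false_fn, operands)` is semantically an eager conditional; below is a minimal plain-Python model with scalars standing in for the tensors (values chosen to mirror the repro: a = y.shape[0] = 3, b = z.item() = 5):

```python
# Plain-Python model of torch.cond's eager semantics.
def cond(pred, true_fn, false_fn, operands):
    return true_fn(*operands) if pred else false_fn(*operands)

a, b, z = 3, 5, 5   # stand-ins for y.shape[0], z.item(), and z
x = 1.0
taken_true = cond(True, lambda t: t + a, lambda t: t + b * z, (x,))    # x.shape[0] > 5
taken_false = cond(False, lambda t: t + a, lambda t: t + b * z, (x,))  # x.shape[0] <= 5
print(taken_true, taken_false)  # 4.0 26.0
```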
Error message: https://gist.github.com/larryliu0820/b2c4f5da466462ad53c6fcfbe9a4c4bc
Generated cpp: https://gist.github.com/larryliu0820/5423934533003b891425ad75cc32a0b0
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0.dev20241115+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.34
Python version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk14_hardened_2601_gcd42476b84e9-x86_64-with-glibc2.34
Is CUDA available: False
CUDA runtime version: 12.0.140
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA H100
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.9.1.0
/usr/lib64/libcudnn_adv.so.9.1.0
/usr/lib64/libcudnn_cnn.so.9.1.0
/usr/lib64/libcudnn_engines_precompiled.so.9.1.0
/usr/lib64/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib64/libcudnn_graph.so.9.1.0
/usr/lib64/libcudnn_heuristic.so.9.1.0
/usr/lib64/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 46
On-line CPU(s) list: 0-45
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 46
Socket(s): 1
Stepping: 1
BogoMIPS: 4792.78
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean pausefilter pfthreshold v_vmsave_vmload vgif vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm flush_l1d arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 2.9 MiB (46 instances)
L1i cache: 2.9 MiB (46 instances)
L2 cache: 23 MiB (46 instances)
L3 cache: 736 MiB (46 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-45
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] executorch==0.5.0a0+b7763e9
[pip3] flake8==6.1.0
[pip3] flake8-breakpoint==1.1.0
[pip3] flake8-bugbear==23.9.16
[pip3] flake8-comprehensions==3.14.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-plugin-utils==1.3.3
[pip3] flake8-pyi==23.5.0
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.10.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.2
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.20
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241115+cpu
[pip3] torchao==0.5.0+git0916b5b2
[pip3] torchaudio==2.5.0.dev20241112+cpu
[pip3] torchsr==1.0.4
[pip3] torchtune==0.4.0.dev20241111+cpu
[pip3] torchvision==0.20.0.dev20241112+cpu
[conda] executorch 0.5.0a0+b7763e9 pypi_0 pypi
[conda] numpy 1.23.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.20 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] optree 0.12.1 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0.dev20241115+cpu pypi_0 pypi
[conda] torchao 0.5.0+git0916b5b2 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241112+cpu pypi_0 pypi
[conda] torchfix 0.5.0 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchtune 0.4.0.dev20241111+cpu pypi_0 pypi
[conda] torchvision 0.20.0.dev20241112+cpu pypi_0 pypi
```
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 | oncall: pt2,oncall: export,module: aotinductor | low | Critical |
2,662,899,048 | yt-dlp | [ApplePodcasts] Download all episodes of a podcast | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a site-specific feature
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Example URLs
https://podcasts.apple.com/us/podcast/dan-carlins-hardcore-history/id173001861
### Provide a description that is worded well enough to be understood
Currently, yt-dlp supports links from Apple Podcasts, but only direct links to individual episodes. When given the main link of a podcast, yt-dlp falls back to the generic extractor and then prints `Unsupported URL`.
I'd love yt-dlp to handle entire podcasts instead of feeding it each and every episode's URL.
Example:
- When trying to download `https://podcasts.apple.com/us/podcast/mania-for-subjugation/id173001861?i=1000658235879` it downloads successfully, because it's a single episode.
- When trying to download `https://podcasts.apple.com/us/podcast/dan-carlins-hardcore-history/id173001861` it fails with `Unsupported URL`, because it's the entire podcast.
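As a possible stopgap until native support lands, Apple's public lookup endpoint (`https://itunes.apple.com/lookup?id=173001861`) returns the podcast's RSS feed URL, which can then be handed to a downloader. A minimal sketch of pulling `feedUrl` out of that response — the payload below is an illustrative stand-in, not a captured API response:

```python
import json

# Illustrative payload shaped like the iTunes lookup API response.
sample = json.dumps({
    "resultCount": 1,
    "results": [
        {"collectionName": "Dan Carlin's Hardcore History",
         "feedUrl": "https://example.com/hardcore-history.rss"},
    ],
})

def feed_url(lookup_json: str) -> str:
    """Return the first result's feedUrl, or '' if absent."""
    data = json.loads(lookup_json)
    results = data.get("results") or []
    return results[0].get("feedUrl", "") if results else ""

print(feed_url(sample))  # https://example.com/hardcore-history.rss
```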
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
$ yt-dlp -vU https://podcasts.apple.com/us/podcast/dan-carlins-hardcore-history/id173001861
[debug] Command-line config: ['-vU', 'https://podcasts.apple.com/us/podcast/dan-carlins-hardcore-history/id173001861']
[debug] Encodings: locale cp1256, fs utf-8, pref cp1256, out cp1256 (No VT), error cp1256 (No VT), screen cp1256 (No VT)
[debug] yt-dlp version stable@2024.11.04 from yt-dlp/yt-dlp [197d0b03b] (pip)
[debug] Python 3.12.4 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 3.0.11 19 Sep 2023)
[debug] exe versions: ffmpeg N-117671-g562524587e-20241027 (setts), ffprobe N-117671-g562524587e-20241027
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.7.1, mutagen-1.47.0, requests-2.32.3, sqlite3-3.42.0, urllib3-2.2.2, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Extractor Plugins: AGB (YoutubeIE)
[debug] Plugin directories: ['C:\\Users\\Winuser\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\yt_dlp_plugins']
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.11.04 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.11.04 from yt-dlp/yt-dlp)
[generic] Extracting URL: https://podcasts.apple.com/us/podcast/dan-carlins-hardcore-history/id173001861
[generic] id173001861: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] id173001861: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://podcasts.apple.com/us/podcast/dan-carlins-hardcore-history/id173001861
Traceback (most recent call last):
File "C:\Users\Winuser\AppData\Local\Programs\Python\Python312\Lib\site-packages\yt_dlp\YoutubeDL.py", line 1625, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Winuser\AppData\Local\Programs\Python\Python312\Lib\site-packages\yt_dlp\YoutubeDL.py", line 1760, in __extract_info
ie_result = ie.extract(url)
^^^^^^^^^^^^^^^
File "C:\Users\Winuser\AppData\Local\Programs\Python\Python312\Lib\site-packages\yt_dlp\extractor\common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Winuser\AppData\Local\Programs\Python\Python312\Lib\site-packages\yt_dlp\extractor\generic.py", line 2553, in _real_extract
raise UnsupportedError(url)
yt_dlp.utils.UnsupportedError: Unsupported URL: https://podcasts.apple.com/us/podcast/dan-carlins-hardcore-history/id173001861
```
| site-enhancement,triage | low | Critical |
2,662,936,860 | deno | no hot reload under WSL2 | Environment: Deno Version: 2.0.6 | WSL2
Problem Description :
When I add external folders to the watch list, such as in the following command:
`"start": "deno run -A --watch=static/,routes/,islands/,components/ dev.ts"`
Hot reload does not work as expected. Specifically:
• Changes made to files in islands/ or components/ do not trigger a reload.
• I have to manually restart the process for the changes to take effect.
Steps to Reproduce
1. Add external folders (static/, routes/, islands/, components/) to the --watch parameter in the start script.
2. Make changes to files in islands/ or components/.
3. Observe that hot reload does not occur.
Observations:
I installed Deno under WSL2; hot reload doesn't work there.
Installing via the following command from PowerShell, outside of the WSL environment, makes hot reload work just fine:
`irm https://deno.land/install.ps1 | iex`
After this, hot reload started working, but there is a cascading problem (details unclear).
On a MacBook with an ARM processor, everything works fine, including hot reload.
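As a quick sanity check (just an illustration using Python's stdlib, not Deno internals), it can help to distinguish "the change is visible by polling" from "no filesystem event fires": if a plain stat poll sees the file grow while Deno's watcher stays silent, the likely culprit is event delivery on the WSL2 mount rather than the `--watch` flag itself (an assumption):

```python
import os
import tempfile
import time

def size_change_visible(path, mutate, timeout=2.0):
    """Poll a file's size; return True once it differs after mutate() runs."""
    before = os.stat(path).st_size
    mutate()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.stat(path).st_size != before:
            return True
        time.sleep(0.05)
    return False

if __name__ == "__main__":
    # Create a scratch file, append to it, and confirm polling sees the change.
    fd, path = tempfile.mkstemp(suffix=".ts")
    os.close(fd)
    def append():
        with open(path, "a") as f:
            f.write("// edit\n")
    print("change visible by polling:", size_change_visible(path, append))
    os.remove(path)
```

Run this inside WSL2 against a file in the watched folder: if polling sees the change but Deno's watcher does not react, event-based watching on that mount is the thing to investigate.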
Expected Behavior:
Hot reload should work consistently across all specified folders (static/, routes/, islands/, components/).
Actual Behavior:
Changes in islands/ or components/ do not trigger a reload.
Additional Information
• The issue may be specific to WSL2.
• After using the PowerShell command via cascade on Windows, it fails every time with an internal error; to fix it, you have to restart the machine.
2,662,953,820 | vscode | Semantic tokens are causing a large changed range onTokensChanged event to fire | I was working on https://github.com/microsoft/vscode/issues/227107 which involves listening to various view events and updating a cache. But I'm running into this problem where I want to clear all lines when new tokens come in, but it seems that a large range is being fired as changed when scrolling stops.
Here are a few logs that show that when scrolling only a single write occurs (the new line), 1 for each of the 2 mirrored buffers sent to the GPU. But when I stop scrolling I need to clear 33 lines from the cache (approximately the height of the viewport):

This is where the event originates after scrolling stops:


Would it be possible to tighten this to only fire for the ranges that actually change? AFAICT it's not changing any tokens. | feature-request,semantic-tokens | low | Minor |
2,662,954,432 | pytorch | compiled LSTM performance is worse than not compiled | ### 🐛 Describe the bug
I tried to export and compile an LSTM model, and its performance ends up much worse than the non-compiled CUDA path in terms of both total kernel time and number of operations.
```python
import torch
class LSTM(torch.nn.LSTM):
def _update_flat_weights(self):
return
@torch.compiler.disable
def forward(self, *args, **kwargs):
return super().forward(*args, **kwargs)
class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.lstm = LSTM(input_size=64, hidden_size=64, batch_first=True)
def forward(self, x):
return self.lstm(x)
m = M().cuda()
inputs = (torch.randn(32, 128, 64).cuda(),)
exported_program = torch.export.export(
m, inputs,
strict=False,
#dynamic_shapes=({1: torch.export.Dim('history')},)
)
print(exported_program)
exported_m = exported_program.module()
profiler_kwargs = {
'activities': [torch.profiler.ProfilerActivity.CPU, torch.profiler.ProfilerActivity.CUDA],
'record_shapes': False,
'profile_memory': False,
'with_stack': True,
'with_flops': False,
'with_modules': True,
}
exported_m(*inputs)
profiler = torch.profiler.profile(**profiler_kwargs)
with profiler:
exported_m(*inputs)
profiler.export_chrome_trace(str('/tmp/test_compiled/trace_exported.json'))
exported_m.compile()
exported_m(*inputs)
profiler = torch.profiler.profile(**profiler_kwargs)
with profiler:
exported_m(*inputs)
profiler.export_chrome_trace(str('/tmp/test_compiled/trace_compiled.json'))
```
the printed program is
```
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, p_lstm_weight_ih_l0: "f32[256, 64]", p_lstm_weight_hh_l0: "f32[256, 64]", p_lstm_bias_ih_l0: "f32[256]", p_lstm_bias_hh_l0: "f32[256]", c_lstm_lifted_tensor_0: "f32[256, 64]", c_lstm_lifted_tensor_1: "f32[256, 64]", c_lstm_lifted_tensor_2: "f32[256]", c_lstm_lifted_tensor_3: "f32[256]", x: "f32[32, 128, 64]"):
# File: /home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/e.jupyter.runfiles/pytorch/lib/python3.10/site-packages/torch/nn/modules/rnn.py:1085 in forward, code: h_zeros = torch.zeros(
zeros: "f32[1, 32, 64]" = torch.ops.aten.zeros.default([1, 32, 64], dtype = torch.float32, device = device(type='cuda', index=0), pin_memory = False)
# File: /home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/e.jupyter.runfiles/pytorch/lib/python3.10/site-packages/torch/nn/modules/rnn.py:1092 in forward, code: c_zeros = torch.zeros(
zeros_1: "f32[1, 32, 64]" = torch.ops.aten.zeros.default([1, 32, 64], dtype = torch.float32, device = device(type='cuda', index=0), pin_memory = False)
# File: /home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/e.jupyter.runfiles/pytorch/lib/python3.10/site-packages/torch/nn/modules/rnn.py:1123 in forward, code: result = _VF.lstm(
lstm = torch.ops.aten.lstm.input(x, [zeros, zeros_1], [c_lstm_lifted_tensor_0, c_lstm_lifted_tensor_1, c_lstm_lifted_tensor_2, c_lstm_lifted_tensor_3], True, 1, 0.0, True, False, True); x = zeros = zeros_1 = c_lstm_lifted_tensor_0 = c_lstm_lifted_tensor_1 = c_lstm_lifted_tensor_2 = c_lstm_lifted_tensor_3 = None
getitem: "f32[32, 128, 64]" = lstm[0]
getitem_1: "f32[1, 1, 32, 64]" = lstm[1]
getitem_2: "f32[1, 1, 32, 64]" = lstm[2]; lstm = None
return (getitem, getitem_1, getitem_2)
Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_lstm_weight_ih_l0'), target='lstm.weight_ih_l0', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_lstm_weight_hh_l0'), target='lstm.weight_hh_l0', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_lstm_bias_ih_l0'), target='lstm.bias_ih_l0', persistent=None), InputSpec(kind=<InputKind.PARAMETER: 2>, arg=TensorArgument(name='p_lstm_bias_hh_l0'), target='lstm.bias_hh_l0', persistent=None), InputSpec(kind=<InputKind.CONSTANT_TENSOR: 4>, arg=TensorArgument(name='c_lstm_lifted_tensor_0'), target='lstm.lifted_tensor_0', persistent=None), InputSpec(kind=<InputKind.CONSTANT_TENSOR: 4>, arg=TensorArgument(name='c_lstm_lifted_tensor_1'), target='lstm.lifted_tensor_1', persistent=None), InputSpec(kind=<InputKind.CONSTANT_TENSOR: 4>, arg=TensorArgument(name='c_lstm_lifted_tensor_2'), target='lstm.lifted_tensor_2', persistent=None), InputSpec(kind=<InputKind.CONSTANT_TENSOR: 4>, arg=TensorArgument(name='c_lstm_lifted_tensor_3'), target='lstm.lifted_tensor_3', persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='x'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='getitem'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='getitem_1'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='getitem_2'), target=None)])
Range constraints: {}
```
The problems:
1. It [does some work with data pointers](https://github.com/pytorch/pytorch/blob/9602f56979fb24d6249117918926701b78e51bca/torch/nn/modules/rnn.py#L255), that's why I redefined `_update_flat_weights` in my custom code
2. after the fix above it works, but looks like it unrolls LSTM using [this implementation](https://github.com/pytorch/pytorch/blob/e2e67a010ac359899e24cebcbd04337831d10b9b/torch/_decomp/decompositions.py#L3422-L3446). And that's why it doesn't work with dynamic shapes.
3. With this implementation, performance is much worse: it launches many small kernels instead of the few cudnn kernels used before calling `exported_m.compile()`. I'm attaching screenshots and traces.
**My question is**: is there any way to fallback to cudnn implementation for LSTM after calling .compile(), but compile other modules in the model with triton?
Exported compiled model:
[trace_compiled.json](https://github.com/user-attachments/files/17780466/trace_compiled.json)
<img width="1202" alt="Screenshot 2024-11-15 at 20 31 02" src="https://github.com/user-attachments/assets/31f8fc4c-4748-4634-bef7-7f0ac538760c">
Exported, but not compiled model:
[trace_exported.json](https://github.com/user-attachments/files/17780474/trace_exported.json)
<img width="1202" alt="Screenshot 2024-11-15 at 20 33 36" src="https://github.com/user-attachments/assets/35dd5d3b-5b19-4704-93a5-ad8b32f61487">
### Error logs
_No response_
### Versions
Unfortunately the collect_env script failed, but my torch version is 2.5.1
```
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 24366 100 24366 0 0 75349 0 --:--:-- --:--:-- --:--:-- 75436
Collecting environment information...
Traceback (most recent call last):
File "/home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/e.jupyter.runfiles/__main__/collect_env.py", line 693, in <module>
main()
File "/home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/e.jupyter.runfiles/__main__/collect_env.py", line 676, in main
output = get_pretty_env_info()
File "/home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/e.jupyter.runfiles/__main__/collect_env.py", line 671, in get_pretty_env_info
return pretty_str(get_env_info())
File "/home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/e.jupyter.runfiles/__main__/collect_env.py", line 496, in get_env_info
pip_version, pip_list_output = get_pip_packages(run_lambda)
File "/home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/e.jupyter.runfiles/__main__/collect_env.py", line 453, in get_pip_packages
out = run_with_pip([sys.executable, '-mpip'])
File "/home/vscode/.cache/bazel/_bazel_vscode/93fd2cd9b3c5d87ae416561bff883334/execroot/__main__/bazel-out/k8-opt/bin/prediction/e.jupyter.runfiles/__main__/collect_env.py", line 448, in run_with_pip
for line in out.splitlines()
AttributeError: 'NoneType' object has no attribute 'splitlines'
```
cc @chauhang @penguinwu @SherlockNoMad @ezyang | triaged,oncall: pt2,module: decompositions | low | Critical |
2,662,966,128 | rust | `assert_eq!` fails with `const { .. }` in Rust 2024 | This code fails to compile in Rust 2024 when it should pass:
```rust
//@ edition:2024
fn main() {
assert_eq!(0, const { 0 });
//~^ ERROR no rules expected keyword `const`
}
```
This will be fixed, I expect, by migrating the standard library to Rust 2024, and when doing that, by using the `:expr` rather than `:expr_2021` macro fragment specifier in `assert_eq` and similar macros.
cc @rust-lang/wg-macros @compiler-errors @ehuss
| A-macros,C-bug,T-libs,A-edition-2024,I-edition-triaged | low | Critical |
2,662,987,104 | vscode | Remote desktop connection drops when opening vscode |
Type: <b>Bug</b>
Connection drops for at least 30 seconds up to several minutes when opening vscode.
VS Code version: Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Xeon(R) W-2104 CPU @ 3.20GHz (4 x 3192)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.69GB (9.93GB free)|
|Process Argv|C:\\testrack\\test\\.vscode\\testrack.code-workspace --crash-reporter-id a53dc31f-19d6-4a43-919c-b5007b87d4cb|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (9)</summary>
Extension|Author (truncated)|Version
---|---|---
ruff|cha|2024.54.0
debugpy|ms-|2024.13.2024111101
python|ms-|2024.21.2024111501
vscode-pylance|ms-|2024.11.100
jupyter|ms-|2024.11.2024102401
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.21
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
01bff139:31013167
dvdeprecation:31068756
dwnewjupyter:31046869
newcmakeconfigv2:31071590
impr_priority:31102340
nativerepl2:31139839
pythonrstrctxt:31112756
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31181875
```
</details>
<!-- generated by issue reporter --> | bug,windows,remote-desktop | low | Critical |
2,663,025,159 | go | x/tools/go/analysis/passes/nilness: spurious "impossible/tautological condition" with cond := a != nil; if cond {...}; if cond { ... } | ### Go version
go version go1.23.0 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/liam/.cache/go-build'
GOENV='/home/liam/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/liam/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/liam/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/local/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/local/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.0'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/liam/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/home/liam/nilness-bug/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build1057087488=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
I believe there is a bug in the nilness checker that incorrectly reports an impossible "nil != nil" and also omits an error in a different location about a tautological comparison. This code demonstrates it:
```go
package main
import "fmt"
func TriggerBug(a *struct{}) {
hasA := a != nil // "impossible condition: nil != nil"
if hasA {
fmt.Println("A is not nil")
return
}
if !hasA {
fmt.Println("A is nil")
}
}
```
I checked here: https://github.com/golang/go/issues?page=1&q=is%3Aissue+is%3Aopen+nilness and I can't find anything that matches.
### What did you see happen?
```
hasA := a != nil // "impossible condition: nil != nil"
```
Here is a screenshot for clarity:

### What did you expect to see?
The above error implies that `a` is always `nil` and therefore `a != nil` will never happen. That is not true. However, there still is a problem with the code: because of the `return`, the `if !hasA` check is tautological. That's a different error, though, and it's not reported, while the error that *is* reported is not only erroneous but also in the wrong place.
This, for example, correctly reports no errors:
```go
func TriggerBug(a *struct{}) {
hasA := a != nil
if hasA {
fmt.Println("A is not nil")
return
}
fmt.Println("A is nil")
}
```
So I would expect to see something like this:
```go
func TriggerBug(a *struct{}) {
hasA := a != nil
if hasA {
fmt.Println("A is not nil")
return
}
// at this point, hasA is definitely false
if !hasA { // "tautological condition: nil == nil"
fmt.Println("A is nil")
}
}
```
Interestingly, if I just get rid of `hasA` and put the expression directly in the `if` statement, I get:
```go
func TriggerBug(a *struct{}) {
if a != nil {
fmt.Println("A is not nil")
return
}
if a == nil { // tautological condition: nil == nil
fmt.Println("A is nil")
}
}
```

That's what I would expect to see whether or not I have the boolean value broken out into a different variable. | Tools | low | Critical |
2,663,028,049 | angular | Warn users when attempting to navigate when no `<router-outlet>` exists | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
No
### Description
If a router link is clicked, but there is no router outlet present, navigation will fail. Presently we don't indicate to anyone that this is a problem. In the case of incremental hydration, it's possible someone may attempt to wrap a router-outlet with a defer block and hydrate triggers, which creates this scenario. An example was reported in #58686. We need to provide useful errors and warnings in this situation. | area: router,area: core,core: incremental hydration | low | Critical |
2,663,054,040 | vscode | Editor GPU: Lines invalided by onLinesInserted and onLinesDeleted events should shift and retain valid line data | Follow up for https://github.com/microsoft/vscode/issues/227107, https://github.com/microsoft/vscode/pull/233947
We should avoid throwing so much line data away when lines are added and deleted here:
https://github.com/microsoft/vscode/blob/f8c18693df60043bdd8990baa66c919e13855c38/src/vs/editor/browser/gpu/fullFileRenderStrategy.ts#L128-L163 | plan-item,editor-gpu | low | Minor |
2,663,090,303 | flutter | [go_router_builder] Enum route ordering leads to wrong route | ### What package does this bug report belong to?
go_router_builder
### What target platforms are you seeing this bug on?
Android
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
# Generated by pub
# See https://dart.dev/tools/pub/glossary#lockfile
packages:
_fe_analyzer_shared:
dependency: transitive
description:
name: _fe_analyzer_shared
sha256: f256b0c0ba6c7577c15e2e4e114755640a875e885099367bf6e012b19314c834
url: "https://pub.dev"
source: hosted
version: "72.0.0"
_macros:
dependency: transitive
description: dart
source: sdk
version: "0.3.2"
analyzer:
dependency: transitive
description:
name: analyzer
sha256: b652861553cd3990d8ed361f7979dc6d7053a9ac8843fa73820ab68ce5410139
url: "https://pub.dev"
source: hosted
version: "6.7.0"
args:
dependency: transitive
description:
name: args
sha256: bf9f5caeea8d8fe6721a9c358dd8a5c1947b27f1cfaa18b39c301273594919e6
url: "https://pub.dev"
source: hosted
version: "2.6.0"
async:
dependency: transitive
description:
name: async
sha256: "947bfcf187f74dbc5e146c9eb9c0f10c9f8b30743e341481c1e2ed3ecc18c20c"
url: "https://pub.dev"
source: hosted
version: "2.11.0"
boolean_selector:
dependency: transitive
description:
name: boolean_selector
sha256: "6cfb5af12253eaf2b368f07bacc5a80d1301a071c73360d746b7f2e32d762c66"
url: "https://pub.dev"
source: hosted
version: "2.1.1"
build:
dependency: transitive
description:
name: build
sha256: "80184af8b6cb3e5c1c4ec6d8544d27711700bc3e6d2efad04238c7b5290889f0"
url: "https://pub.dev"
source: hosted
version: "2.4.1"
build_config:
dependency: transitive
description:
name: build_config
sha256: bf80fcfb46a29945b423bd9aad884590fb1dc69b330a4d4700cac476af1708d1
url: "https://pub.dev"
source: hosted
version: "1.1.1"
build_daemon:
dependency: transitive
description:
name: build_daemon
sha256: "79b2aef6ac2ed00046867ed354c88778c9c0f029df8a20fe10b5436826721ef9"
url: "https://pub.dev"
source: hosted
version: "4.0.2"
build_resolvers:
dependency: transitive
description:
name: build_resolvers
sha256: "339086358431fa15d7eca8b6a36e5d783728cf025e559b834f4609a1fcfb7b0a"
url: "https://pub.dev"
source: hosted
version: "2.4.2"
build_runner:
dependency: "direct dev"
description:
name: build_runner
sha256: "028819cfb90051c6b5440c7e574d1896f8037e3c96cf17aaeb054c9311cfbf4d"
url: "https://pub.dev"
source: hosted
version: "2.4.13"
build_runner_core:
dependency: transitive
description:
name: build_runner_core
sha256: f8126682b87a7282a339b871298cc12009cb67109cfa1614d6436fb0289193e0
url: "https://pub.dev"
source: hosted
version: "7.3.2"
built_collection:
dependency: transitive
description:
name: built_collection
sha256: "376e3dd27b51ea877c28d525560790aee2e6fbb5f20e2f85d5081027d94e2100"
url: "https://pub.dev"
source: hosted
version: "5.1.1"
built_value:
dependency: transitive
description:
name: built_value
sha256: c7913a9737ee4007efedaffc968c049fd0f3d0e49109e778edc10de9426005cb
url: "https://pub.dev"
source: hosted
version: "8.9.2"
characters:
dependency: transitive
description:
name: characters
sha256: "04a925763edad70e8443c99234dc3328f442e811f1d8fd1a72f1c8ad0f69a605"
url: "https://pub.dev"
source: hosted
version: "1.3.0"
checked_yaml:
dependency: transitive
description:
name: checked_yaml
sha256: feb6bed21949061731a7a75fc5d2aa727cf160b91af9a3e464c5e3a32e28b5ff
url: "https://pub.dev"
source: hosted
version: "2.0.3"
clock:
dependency: transitive
description:
name: clock
sha256: cb6d7f03e1de671e34607e909a7213e31d7752be4fb66a86d29fe1eb14bfb5cf
url: "https://pub.dev"
source: hosted
version: "1.1.1"
code_builder:
dependency: transitive
description:
name: code_builder
sha256: "0ec10bf4a89e4c613960bf1e8b42c64127021740fb21640c29c909826a5eea3e"
url: "https://pub.dev"
source: hosted
version: "4.10.1"
collection:
dependency: transitive
description:
name: collection
sha256: ee67cb0715911d28db6bf4af1026078bd6f0128b07a5f66fb2ed94ec6783c09a
url: "https://pub.dev"
source: hosted
version: "1.18.0"
convert:
dependency: transitive
description:
name: convert
sha256: b30acd5944035672bc15c6b7a8b47d773e41e2f17de064350988c5d02adb1c68
url: "https://pub.dev"
source: hosted
version: "3.1.2"
crypto:
dependency: transitive
description:
name: crypto
sha256: "1e445881f28f22d6140f181e07737b22f1e099a5e1ff94b0af2f9e4a463f4855"
url: "https://pub.dev"
source: hosted
version: "3.0.6"
cupertino_icons:
dependency: "direct main"
description:
name: cupertino_icons
sha256: ba631d1c7f7bef6b729a622b7b752645a2d076dba9976925b8f25725a30e1ee6
url: "https://pub.dev"
source: hosted
version: "1.0.8"
dart_style:
dependency: transitive
description:
name: dart_style
sha256: "7856d364b589d1f08986e140938578ed36ed948581fbc3bc9aef1805039ac5ab"
url: "https://pub.dev"
source: hosted
version: "2.3.7"
fake_async:
dependency: transitive
description:
name: fake_async
sha256: "511392330127add0b769b75a987850d136345d9227c6b94c96a04cf4a391bf78"
url: "https://pub.dev"
source: hosted
version: "1.3.1"
file:
dependency: transitive
description:
name: file
sha256: a3b4f84adafef897088c160faf7dfffb7696046cb13ae90b508c2cbc95d3b8d4
url: "https://pub.dev"
source: hosted
version: "7.0.1"
fixnum:
dependency: transitive
description:
name: fixnum
sha256: b6dc7065e46c974bc7c5f143080a6764ec7a4be6da1285ececdc37be96de53be
url: "https://pub.dev"
source: hosted
version: "1.1.1"
flutter:
dependency: "direct main"
description: flutter
source: sdk
version: "0.0.0"
flutter_lints:
dependency: "direct dev"
description:
name: flutter_lints
sha256: "3f41d009ba7172d5ff9be5f6e6e6abb4300e263aab8866d2a0842ed2a70f8f0c"
url: "https://pub.dev"
source: hosted
version: "4.0.0"
flutter_test:
dependency: "direct dev"
description: flutter
source: sdk
version: "0.0.0"
flutter_web_plugins:
dependency: "direct main"
description: flutter
source: sdk
version: "0.0.0"
frontend_server_client:
dependency: transitive
description:
name: frontend_server_client
sha256: f64a0333a82f30b0cca061bc3d143813a486dc086b574bfb233b7c1372427694
url: "https://pub.dev"
source: hosted
version: "4.0.0"
glob:
dependency: transitive
description:
name: glob
sha256: "0e7014b3b7d4dac1ca4d6114f82bf1782ee86745b9b42a92c9289c23d8a0ab63"
url: "https://pub.dev"
source: hosted
version: "2.1.2"
go_router:
dependency: "direct main"
description:
name: go_router
sha256: "8ae664a70174163b9f65ea68dd8673e29db8f9095de7b5cd00e167c621f4fef5"
url: "https://pub.dev"
source: hosted
version: "14.6.0"
go_router_builder:
dependency: "direct dev"
description:
name: go_router_builder
sha256: "3425b72dea69209754ac6b71b4da34165dcd4d4a2934713029945709a246427a"
url: "https://pub.dev"
source: hosted
version: "2.7.1"
graphs:
dependency: transitive
description:
name: graphs
sha256: "741bbf84165310a68ff28fe9e727332eef1407342fca52759cb21ad8177bb8d0"
url: "https://pub.dev"
source: hosted
version: "2.3.2"
http_multi_server:
dependency: transitive
description:
name: http_multi_server
sha256: "97486f20f9c2f7be8f514851703d0119c3596d14ea63227af6f7a481ef2b2f8b"
url: "https://pub.dev"
source: hosted
version: "3.2.1"
http_parser:
dependency: transitive
description:
name: http_parser
sha256: "2aa08ce0341cc9b354a498388e30986515406668dbcc4f7c950c3e715496693b"
url: "https://pub.dev"
source: hosted
version: "4.0.2"
io:
dependency: transitive
description:
name: io
sha256: "2ec25704aba361659e10e3e5f5d672068d332fc8ac516421d483a11e5cbd061e"
url: "https://pub.dev"
source: hosted
version: "1.0.4"
js:
dependency: transitive
description:
name: js
sha256: c1b2e9b5ea78c45e1a0788d29606ba27dc5f71f019f32ca5140f61ef071838cf
url: "https://pub.dev"
source: hosted
version: "0.7.1"
json_annotation:
dependency: transitive
description:
name: json_annotation
sha256: "1ce844379ca14835a50d2f019a3099f419082cfdd231cd86a142af94dd5c6bb1"
url: "https://pub.dev"
source: hosted
version: "4.9.0"
leak_tracker:
dependency: transitive
description:
name: leak_tracker
sha256: "3f87a60e8c63aecc975dda1ceedbc8f24de75f09e4856ea27daf8958f2f0ce05"
url: "https://pub.dev"
source: hosted
version: "10.0.5"
leak_tracker_flutter_testing:
dependency: transitive
description:
name: leak_tracker_flutter_testing
sha256: "932549fb305594d82d7183ecd9fa93463e9914e1b67cacc34bc40906594a1806"
url: "https://pub.dev"
source: hosted
version: "3.0.5"
leak_tracker_testing:
dependency: transitive
description:
name: leak_tracker_testing
sha256: "6ba465d5d76e67ddf503e1161d1f4a6bc42306f9d66ca1e8f079a47290fb06d3"
url: "https://pub.dev"
source: hosted
version: "3.0.1"
lints:
dependency: transitive
description:
name: lints
sha256: "976c774dd944a42e83e2467f4cc670daef7eed6295b10b36ae8c85bcbf828235"
url: "https://pub.dev"
source: hosted
version: "4.0.0"
logging:
dependency: transitive
description:
name: logging
sha256: c8245ada5f1717ed44271ed1c26b8ce85ca3228fd2ffdb75468ab01979309d61
url: "https://pub.dev"
source: hosted
version: "1.3.0"
macros:
dependency: transitive
description:
name: macros
sha256: "0acaed5d6b7eab89f63350bccd82119e6c602df0f391260d0e32b5e23db79536"
url: "https://pub.dev"
source: hosted
version: "0.1.2-main.4"
matcher:
dependency: transitive
description:
name: matcher
sha256: d2323aa2060500f906aa31a895b4030b6da3ebdcc5619d14ce1aada65cd161cb
url: "https://pub.dev"
source: hosted
version: "0.12.16+1"
material_color_utilities:
dependency: transitive
description:
name: material_color_utilities
sha256: f7142bb1154231d7ea5f96bc7bde4bda2a0945d2806bb11670e30b850d56bdec
url: "https://pub.dev"
source: hosted
version: "0.11.1"
meta:
dependency: transitive
description:
name: meta
sha256: bdb68674043280c3428e9ec998512fb681678676b3c54e773629ffe74419f8c7
url: "https://pub.dev"
source: hosted
version: "1.15.0"
mime:
dependency: transitive
description:
name: mime
sha256: "41a20518f0cb1256669420fdba0cd90d21561e560ac240f26ef8322e45bb7ed6"
url: "https://pub.dev"
source: hosted
version: "2.0.0"
package_config:
dependency: transitive
description:
name: package_config
sha256: "1c5b77ccc91e4823a5af61ee74e6b972db1ef98c2ff5a18d3161c982a55448bd"
url: "https://pub.dev"
source: hosted
version: "2.1.0"
path:
dependency: transitive
description:
name: path
sha256: "087ce49c3f0dc39180befefc60fdb4acd8f8620e5682fe2476afd0b3688bb4af"
url: "https://pub.dev"
source: hosted
version: "1.9.0"
pool:
dependency: transitive
description:
name: pool
sha256: "20fe868b6314b322ea036ba325e6fc0711a22948856475e2c2b6306e8ab39c2a"
url: "https://pub.dev"
source: hosted
version: "1.5.1"
pub_semver:
dependency: transitive
description:
name: pub_semver
sha256: "40d3ab1bbd474c4c2328c91e3a7df8c6dd629b79ece4c4bd04bee496a224fb0c"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
pubspec_parse:
dependency: transitive
description:
name: pubspec_parse
sha256: c799b721d79eb6ee6fa56f00c04b472dcd44a30d258fac2174a6ec57302678f8
url: "https://pub.dev"
source: hosted
version: "1.3.0"
shelf:
dependency: transitive
description:
name: shelf
sha256: ad29c505aee705f41a4d8963641f91ac4cee3c8fad5947e033390a7bd8180fa4
url: "https://pub.dev"
source: hosted
version: "1.4.1"
shelf_web_socket:
dependency: transitive
description:
name: shelf_web_socket
sha256: "073c147238594ecd0d193f3456a5fe91c4b0abbcc68bf5cd95b36c4e194ac611"
url: "https://pub.dev"
source: hosted
version: "2.0.0"
sky_engine:
dependency: transitive
description: flutter
source: sdk
version: "0.0.99"
source_gen:
dependency: transitive
description:
name: source_gen
sha256: "14658ba5f669685cd3d63701d01b31ea748310f7ab854e471962670abcf57832"
url: "https://pub.dev"
source: hosted
version: "1.5.0"
source_helper:
dependency: transitive
description:
name: source_helper
sha256: "6adebc0006c37dd63fe05bca0a929b99f06402fc95aa35bf36d67f5c06de01fd"
url: "https://pub.dev"
source: hosted
version: "1.3.4"
source_span:
dependency: transitive
description:
name: source_span
sha256: "53e943d4206a5e30df338fd4c6e7a077e02254531b138a15aec3bd143c1a8b3c"
url: "https://pub.dev"
source: hosted
version: "1.10.0"
stack_trace:
dependency: transitive
description:
name: stack_trace
sha256: "73713990125a6d93122541237550ee3352a2d84baad52d375a4cad2eb9b7ce0b"
url: "https://pub.dev"
source: hosted
version: "1.11.1"
stream_channel:
dependency: transitive
description:
name: stream_channel
sha256: ba2aa5d8cc609d96bbb2899c28934f9e1af5cddbd60a827822ea467161eb54e7
url: "https://pub.dev"
source: hosted
version: "2.1.2"
stream_transform:
dependency: transitive
description:
name: stream_transform
sha256: "14a00e794c7c11aa145a170587321aedce29769c08d7f58b1d141da75e3b1c6f"
url: "https://pub.dev"
source: hosted
version: "2.1.0"
string_scanner:
dependency: transitive
description:
name: string_scanner
sha256: "556692adab6cfa87322a115640c11f13cb77b3f076ddcc5d6ae3c20242bedcde"
url: "https://pub.dev"
source: hosted
version: "1.2.0"
term_glyph:
dependency: transitive
description:
name: term_glyph
sha256: a29248a84fbb7c79282b40b8c72a1209db169a2e0542bce341da992fe1bc7e84
url: "https://pub.dev"
source: hosted
version: "1.2.1"
test_api:
dependency: transitive
description:
name: test_api
sha256: "5b8a98dafc4d5c4c9c72d8b31ab2b23fc13422348d2997120294d3bac86b4ddb"
url: "https://pub.dev"
source: hosted
version: "0.7.2"
timing:
dependency: transitive
description:
name: timing
sha256: "70a3b636575d4163c477e6de42f247a23b315ae20e86442bebe32d3cabf61c32"
url: "https://pub.dev"
source: hosted
version: "1.0.1"
typed_data:
dependency: transitive
description:
name: typed_data
sha256: f9049c039ebfeb4cf7a7104a675823cd72dba8297f264b6637062516699fa006
url: "https://pub.dev"
source: hosted
version: "1.4.0"
vector_math:
dependency: transitive
description:
name: vector_math
sha256: "80b3257d1492ce4d091729e3a67a60407d227c27241d6927be0130c98e741803"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
vm_service:
dependency: transitive
description:
name: vm_service
sha256: "5c5f338a667b4c644744b661f309fb8080bb94b18a7e91ef1dbd343bed00ed6d"
url: "https://pub.dev"
source: hosted
version: "14.2.5"
watcher:
dependency: transitive
description:
name: watcher
sha256: "3d2ad6751b3c16cf07c7fca317a1413b3f26530319181b37e3b9039b84fc01d8"
url: "https://pub.dev"
source: hosted
version: "1.1.0"
web:
dependency: transitive
description:
name: web
sha256: cd3543bd5798f6ad290ea73d210f423502e71900302dde696f8bff84bf89a1cb
url: "https://pub.dev"
source: hosted
version: "1.1.0"
web_socket:
dependency: transitive
description:
name: web_socket
sha256: "3c12d96c0c9a4eec095246debcea7b86c0324f22df69893d538fcc6f1b8cce83"
url: "https://pub.dev"
source: hosted
version: "0.1.6"
web_socket_channel:
dependency: transitive
description:
name: web_socket_channel
sha256: "9f187088ed104edd8662ca07af4b124465893caf063ba29758f97af57e61da8f"
url: "https://pub.dev"
source: hosted
version: "3.0.1"
yaml:
dependency: transitive
description:
name: yaml
sha256: "75769501ea3489fca56601ff33454fe45507ea3bfb014161abc3b43ae25989d5"
url: "https://pub.dev"
source: hosted
version: "3.1.2"
sdks:
dart: ">=3.5.3 <4.0.0"
flutter: ">=3.19.0"
```
</details>
### Steps to reproduce
1. Write a route with an enum path parameter.
2. Write a route with a static path with the same number of segments as the first route.
3. Order the enum path parameter route first.
4. Try to navigate to the route with the static path.
### Expected results
The location is resolved successfully and navigates to the correct path.
### Actual results
The routing resolves into a StateError and doesn't navigate.
When writing routes with the same number of path segments, a route with an enum path parameter can be prioritized over a static path. I already submitted #157809, which points out that code-generated enum parameters are not handled gracefully when they don't match an existing enum value. In this case, the path resolver tries to resolve a path by looking at the enum route, fails, and then never looks at other existing routes that might fit the bill.
A workaround at the moment is to simply reorder TypedGoRoutes to place the enum one at the end.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
part 'main.g.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp.router(
routerConfig: GoRouter(
initialLocation: const MainRoute().location,
routes: $appRoutes,
),
);
}
}
enum Model {
x, y,
}
@TypedGoRoute<MainRoute>(
path: '/home',
routes: [
// Swapping these 2 lines results in different behavior.
// If ModelRoute is first, navigating to PageRoute causes
// a StateError to be thrown, but putting PageRoute first
// leads to the expected behaviour.
TypedGoRoute<ModelRoute>(path: ':model/:id'),
TypedGoRoute<PageRoute>(path: 'page/x'),
],
)
class MainRoute extends GoRouteData {
const MainRoute();
@override
Widget build(BuildContext context, GoRouterState state) {
return Scaffold(
body: Center(
child: ElevatedButton(
onPressed: () => const PageRoute().go(context),
child: const Text('Go to /home/page/x'),
),
),
);
}
}
class ModelRoute extends GoRouteData {
final Model model;
final String id;
const ModelRoute(this.model, this.id);
@override
Widget build(BuildContext context, GoRouterState state) {
return Scaffold(
body: Center(child: Text('/home/${model.name}/$id')),
);
}
}
class PageRoute extends GoRouteData {
const PageRoute();
@override
Widget build(BuildContext context, GoRouterState state) {
return const Scaffold(
body: Center(child: Text('/home/page/x')),
);
}
}
```
</details>
### Screenshots or Videos
_No response_
### Logs
<details open><summary>Logs</summary>
```console
════════ Exception caught by foundation library ════════════════════════════════
The following StateError was thrown while dispatching notifications for GoRouteInformationProvider:
Bad state: No element
When the exception was thrown, this was the stack:
#0 Iterable.singleWhere (dart:core/iterable.dart:775:9)
#1 _extension#3._$fromName (package:go_router_builder_enum_bug/main.g.dart:89:15)
#2 $ModelRouteExtension._fromState (package:go_router_builder_enum_bug/main.g.dart:47:24)
#3 GoRouteData.$route.factoryImpl (package:go_router/src/route_data.dart:102:53)
#4 GoRouteData.$route.redirect (package:go_router/src/route_data.dart:112:9)
#5 RouteConfiguration._getRouteLevelRedirect (package:go_router/src/configuration.dart:443:67)
#6 RouteConfiguration._getRouteLevelRedirect.processRouteRedirect (package:go_router/src/configuration.dart:440:9)
#7 RouteConfiguration._getRouteLevelRedirect (package:go_router/src/configuration.dart:448:14)
#8 RouteConfiguration.redirect.processRedirect.processTopLevelRedirect (package:go_router/src/configuration.dart:400:13)
#9 RouteConfiguration.redirect.processRedirect (package:go_router/src/configuration.dart:417:16)
#10 RouteConfiguration.redirect (package:go_router/src/configuration.dart:423:14)
#11 GoRouteInformationParser._redirect (package:go_router/src/parser.dart:164:10)
#12 GoRouteInformationParser.parseRouteInformationWithDependencies (package:go_router/src/parser.dart:98:32)
#13 _RouterState._processRouteInformation (package:flutter/src/widgets/router.dart:747:8)
#14 _RouterState._handleRouteInformationProviderNotification (package:flutter/src/widgets/router.dart:765:5)
#15 ChangeNotifier.notifyListeners (package:flutter/src/foundation/change_notifier.dart:437:24)
#16 GoRouteInformationProvider.notifyListeners (package:go_router/src/information_provider.dart:139:11)
#17 GoRouteInformationProvider._setValue (package:go_router/src/information_provider.dart:154:7)
#18 GoRouteInformationProvider.go (package:go_router/src/information_provider.dart:176:5)
#19 GoRouter.go (package:go_router/src/router.dart:349:30)
#20 GoRouterHelper.go (package:go_router/src/misc/extensions.dart:25:25)
#21 $PageRouteExtension.go (package:go_router_builder_enum_bug/main.g.dart:77:44)
#22 MainRoute.build.<anonymous closure> (package:go_router_builder_enum_bug/main.dart:43:46)
#23 _InkResponseState.handleTap (package:flutter/src/material/ink_well.dart:1170:21)
#24 GestureRecognizer.invokeCallback (package:flutter/src/gestures/recognizer.dart:351:24)
#25 TapGestureRecognizer.handleTapUp (package:flutter/src/gestures/tap.dart:656:11)
#26 BaseTapGestureRecognizer._checkUp (package:flutter/src/gestures/tap.dart:313:5)
#27 BaseTapGestureRecognizer.handlePrimaryPointer (package:flutter/src/gestures/tap.dart:246:7)
#28 PrimaryPointerGestureRecognizer.handleEvent (package:flutter/src/gestures/recognizer.dart:703:9)
#29 PointerRouter._dispatch (package:flutter/src/gestures/pointer_router.dart:98:12)
#30 PointerRouter._dispatchEventToRoutes.<anonymous closure> (package:flutter/src/gestures/pointer_router.dart:143:9)
#31 _LinkedHashMapMixin.forEach (dart:collection-patch/compact_hash.dart:633:13)
#32 PointerRouter._dispatchEventToRoutes (package:flutter/src/gestures/pointer_router.dart:141:18)
#33 PointerRouter.route (package:flutter/src/gestures/pointer_router.dart:127:7)
#34 GestureBinding.handleEvent (package:flutter/src/gestures/binding.dart:501:19)
#35 GestureBinding.dispatchEvent (package:flutter/src/gestures/binding.dart:481:22)
#36 RendererBinding.dispatchEvent (package:flutter/src/rendering/binding.dart:450:11)
#37 GestureBinding._handlePointerEventImmediately (package:flutter/src/gestures/binding.dart:426:7)
#38 GestureBinding.handlePointerEvent (package:flutter/src/gestures/binding.dart:389:5)
#39 GestureBinding._flushPointerEventQueue (package:flutter/src/gestures/binding.dart:336:7)
#40 GestureBinding._handlePointerDataPacket (package:flutter/src/gestures/binding.dart:305:9)
#41 _invoke1 (dart:ui/hooks.dart:328:13)
#42 PlatformDispatcher._dispatchPointerDataPacket (dart:ui/platform_dispatcher.dart:442:7)
#43 _dispatchPointerDataPacket (dart:ui/hooks.dart:262:31)
The GoRouteInformationProvider sending notification was: Instance of 'GoRouteInformationProvider'
════════════════════════════════════════════════════════════════════════════════
D/EGL_emulation( 4398): app_time_stats: avg=3291.46ms min=3291.46ms max=3291.46ms count=1
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel stable, 3.24.3, on Microsoft Windows [Version 10.0.22631.4460], locale en-CA)
• Flutter version 3.24.3 on channel stable at W:\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (9 weeks ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at W:\Android
• Platform android-35, build-tools 34.0.0
• ANDROID_HOME = W:\Android
• Java binary at: W:\Android\AndroidStudio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.7+0-b2043.56-10550314)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[√] Android Studio (version 2023.1)
• Android Studio at W:\Android\AndroidStudio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.7+0-b2043.56-10550314)
[√] VS Code (version 1.95.3)
• VS Code at C:\Users\alexa\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.100.0
[√] Connected device (4 available)
• sdk gphone64 x86 64 (mobile) • emulator-5554 • android-x64 • Android 14 (API 34) (emulator)
• sdk gphone64 x86 64 (mobile) • emulator-5556 • android-x64 • Android 15 (API 35) (emulator)
• Chrome (web) • chrome • web-javascript • Google Chrome 130.0.6723.117
• Edge (web) • edge • web-javascript • Microsoft Edge 130.0.2849.68
[√] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| c: crash,package,has reproducible steps,P2,p: go_router,p: go_router_builder,team-go_router,triaged-go_router,found in release: 3.24,found in release: 3.27 | low | Critical |
2,663,090,817 | rust | Performance regression between 1.78 and 1.79 | We were upgrading from Rust 1.78 to 1.79 (and later 1.81) and discovered a performance regression in our applications of up to 10% in certain cases.
In our benchmarks we use `iai-callgrind` to measure instruction counts, and we saw a significant increase there.
Unfortunately, I can't provide the exact code as it is proprietary, but by nature it has long call chains with lots of small functions.
The same issue was reported [here](https://users.rust-lang.org/t/performance-regression-1-78-1-79/116746).
I have bisected between 1.78 and 1.79 and found that the following change causes the performance regression: https://github.com/rust-lang/rust/commit/3412f0174934778dd803081dc7c4b5864309350f#diff-deee82aaf9baf43ab05d939355f6249fdacf8959bc0e06c9574283453f838ee9R702
## Release cargo profile
```
codegen-units = 1
opt-level = 3
lto = "thin"
```
### Workaround
Adding `debug = 1` to the profile in release mode essentially disables this change and the performance degradation is not there anymore as expected.
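The workaround can be sketched as a one-line addition to the release profile shown above (this reflects the reporter's described workaround, not an upstream fix):

```toml
# Cargo.toml — release profile with the workaround applied.
[profile.release]
codegen-units = 1
opt-level = 3
lto = "thin"
# Emitting line-table debug info effectively disables the bisected
# debuginfo-related inlining change, restoring previous performance.
debug = 1
```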
| I-slow,T-compiler,regression-from-stable-to-stable,C-bug,A-mir-opt-inlining,S-needs-repro | low | Critical |
2,663,158,453 | pytorch | flex_attention + `export_for_inference` ` NYI: querying is_contiguous inside of vmap for memory_format other than torch.contiguous_format` | ### 🐛 Describe the bug
```python
import torch
from torch import nn, Tensor
from torch.export import export_for_inference, Dim
from torch.nn.attention.flex_attention import flex_attention
class MinimalAttention(nn.Module):
"""Minimal reproduction of Attention layer using flex_attention."""
def __init__(self, dim, num_heads=8, qkv_bias=True):
super().__init__()
self.num_heads = num_heads
self.head_dim = dim // num_heads
self.scale = self.head_dim ** -0.5
self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
self.proj = nn.Linear(dim, dim)
def forward(self, x: Tensor) -> Tensor:
B, N, C = x.shape # Flattened spatial dimensions with batch
qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim).permute(0, 3, 2, 1, 4)
q, k, v = qkv[:, :, 0], qkv[:, :, 1], qkv[:, :, 2]
# Compute attention output using flex_attention
attn_out = flex_attention(q, k, v, scale=self.scale)
return self.proj(attn_out.transpose(1, 2).reshape(B, N, C))
def replicate_error():
# Define the input tensor with fixed values to avoid randomness
B, N, C = 2, 1024, 64
x = torch.ones(B, N, C, device="cuda", dtype=torch.float32) # Fixed values instead of random
# Initialize the minimal attention model
model = MinimalAttention(dim=C).cuda()
# Set up dynamic shape for batch size flexibility
dynamic_shapes = {
"x": {0: Dim("batch", min=1, max=B), 1: Dim.STATIC, 2: Dim.STATIC}
}
# Export with AMP and dynamic shape settings
try:
with torch.amp.autocast(device_type="cuda", dtype=torch.float16):
exported_model = export_for_inference(model, (x,), dynamic_shapes=dynamic_shapes)
torch.save(exported_model, "minimal_attention_error_model.pt")
print("Exported model successfully.")
except RuntimeError as e:
print("Export failed with error:", e)
# Run the minimal reproduction
replicate_error()
```
```python
Export failed with error: NYI: querying is_contiguous inside of vmap for memory_format other than torch.contiguous_format
While executing %dense_mask : [num_users=2] = call_method[target=new_zeros](args = (%_add_batch_dim_3, 1, 2), kwargs = {dtype: torch.int32})
Original traceback:
File "/workspace/delme_repro.py", line 24, in forward
attn_out = flex_attention(q, k, v, scale=self.scale)
File "/opt/conda/lib/python3.11/site-packages/torch/nn/attention/flex_attention.py", line 1197, in flex_attention
block_mask = _create_empty_block_mask(query, key)
File "/opt/conda/lib/python3.11/site-packages/torch/nn/attention/flex_attention.py", line 893, in _create_empty_block_mask
return BlockMask.from_kv_blocks(
File "/opt/conda/lib/python3.11/site-packages/torch/nn/attention/flex_attention.py", line 349, in from_kv_blocks
q_num_blocks, q_indices = _transpose_ordered(kv_num_blocks, kv_indices)
File "/opt/conda/lib/python3.11/site-packages/torch/nn/attention/flex_attention.py", line 187, in _transpose_ordered
dense = _ordered_to_dense(num_blocks_in_row, col_indices)
File "/opt/conda/lib/python3.11/site-packages/torch/nn/attention/flex_attention.py", line 172, in _ordered_to_dense
out = create_dense_batched(num_blocks_in_row, col_indices)
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/apis.py", line 203, in wrapped
return vmap_impl(
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 331, in vmap_impl
return _flat_vmap(
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 479, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/apis.py", line 203, in wrapped
return vmap_impl(
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 331, in vmap_impl
return _flat_vmap(
File "/opt/conda/lib/python3.11/site-packages/torch/_functorch/vmap.py", line 479, in _flat_vmap
batched_outputs = func(*batched_inputs, **kwargs)
File "/opt/conda/lib/python3.11/site-packages/torch/nn/attention/flex_attention.py", line 153, in create_dense_one
dense_mask = kv_indices.new_zeros(num_rows, num_cols + 1, dtype=torch.int32)
```
/cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng @ezyang @tugsbayasgalan
### Versions
nightly | triaged,oncall: pt2,module: higher order operators,module: pt2-dispatcher,dynamo-functorch,module: flex attention | low | Critical |
2,663,312,116 | godot | Changing the script on an object in the editor and undoing does not restore any properties | ### Tested versions
Reproducible in `4.2.stable`, `4.3.stable` and `4.4.dev4`
### System information
Godot v4.3.stable - Windows 10.0.22631 - GLES3 (Compatibility) - NVIDIA GeForce GTX 1660 SUPER (NVIDIA; 32.0.15.6603) - Intel(R) Core(TM) i7-10700F CPU @ 2.90GHz (16 Threads)
### Issue description
When you change the script on a node or a resource in the inspector (by clearing the script, extending it, making it unique, etc.) and then immediately undo, none of the exported properties that were on the object are restored; they are reset to their default values. If you save the resource in this broken state, the wipe becomes permanent. The only way out is to reload the scene if you're editing a node, or to reload the entire project if you're editing a resource.
Expected behaviour is for the properties to be restored.
### Steps to reproduce
- Open the attached project and inspect either `test.tscn` or `test_resource.tres`.
- Change the script by either clearing, extending, or making unique.
- Undo.
- Observe that both the properties have been reset.
### Minimal reproduction project (MRP)
[bug-report.zip](https://github.com/user-attachments/files/17781909/bug-report.zip)
| bug,topic:editor | low | Critical |
2,663,313,274 | deno | Reduce memory usage by taking content bytes out of deno_graph's graph | Right now, we keep deno_graph's graph around in memory, but this is a waste of memory. We should have a way to be able to take the source out of the graph once it's no longer necessary. | perf | low | Minor |
2,663,322,698 | ollama | When I run Ollama using AMD 6750GRE 12G I get an error - gfx1031 unsupported by official ROCm on windows | ### What is the issue?
After downloading and installing Ollama, an additional download of a compiled rocblas build is required.
The downloaded rocblas.dll overwrites the rocblas.dll that comes with the HIP SDK (please install the HIP SDK yourself if it is not present), and the library folder that comes with the SDK (C:\Program Files\AMD\ROCm\6.1\bin\rocblas\library) is replaced as well; with that, models can run normally on the specified graphics card.
The second step is to replace the rocblas.dll file and library folder in the Ollama program directory (the file and folder with the same names under C:\Users\96133\AppData\Local\Programs\Ollama\lib\ollama).
After that, Ollama runs on the graphics card, but once I do this I get the following error:
Microsoft Windows [Version 10.0.19045.4894]
(c) Microsoft Corporation. All rights reserved.
C:\Users\96133>ollama run qwen2.5-coder:14b
>>> 2
Error: POST predict: Post "http://127.0.0.1:53690/completion": read tcp 127.0.0.1:53698->127.0.0.1:53690: wsarecv: An existing connection was forcibly closed by the remote host.
C:\Users\96133>
The same happens when I change the model:
C:\Users\96133>ollama run qwen2.5-coder
pulling manifest
pulling 60e05f210007... 100% ▕████████████████████████████████████████████████████████▏ 4.7 GB
pulling 66b9ea09bd5b... 100% ▕████████████████████████████████████████████████████████▏ 68 B
pulling e94a8ecb9327... 100% ▕████████████████████████████████████████████████████████▏ 1.6 KB
pulling 832dd9e00a68... 100% ▕████████████████████████████████████████████████████████▏ 11 KB
pulling d9bb33f27869... 100% ▕████████████████████████████████████████████████████████▏ 487 B
verifying sha256 digest
writing manifest
success
>>> 3
Error: POST predict: Post "http://127.0.0.1:52408/completion": read tcp 127.0.0.1:52411->127.0.0.1:52408: wsarecv: An existing connection was forcibly closed by the remote host.
Attached are the logs in the C:\Users\96133\AppData\Local\Ollama folder
I don't know why this is happening; when I run without the replacement files, on the CPU, there is no problem. The ROCm libraries I use are rocm.gfx1031.for.hip.sdk.6.1.2.7z from https://github.com/likelovewant/ROCmLibs-for-gfx1103-AMD780M-APU/
[app-1.log](https://github.com/user-attachments/files/17781911/app-1.log)
[server.log](https://github.com/user-attachments/files/17781914/server.log)
Please help me figure out what to do.
### OS
Windows
### GPU
AMD
### CPU
AMD
### Ollama version
0.4.1 | bug,windows | low | Critical |
2,663,339,363 | pytorch | FlexAttention + ROCM Issue Tracker | ```[tasklist]
### Tasks
- [ ] Rocm accuracy failures on learnable bias cases: https://github.com/pytorch/pytorch/pull/137452
```
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @ezyang @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @yanboliang @BoyuanFeng | module: rocm,triaged,oncall: pt2,rocm,module: higher order operators,module: pt2-dispatcher,module: flex attention | low | Critical |
2,663,405,637 | go | cmd/go: using both -v and -json with go test has unintended effects | ### Go version
go version go1.23.1 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/firelizzard/.cache/go-build'
GOENV='/home/firelizzard/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/firelizzard/go/pkg/mod'
GOOS='linux'
GOPATH='/home/firelizzard/go'
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/home/firelizzard/sdk/go1.23.1'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/home/firelizzard/sdk/go1.23.1/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.1'
GODEBUG=''
GOTELEMETRY='on'
GOTELEMETRYDIR='/home/firelizzard/.config/go/telemetry'
GCCGO='/usr/bin/gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/home/firelizzard/src/yaegi/go-script/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build4165056435=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
`go test -v -json -run=- -bench=^BenchmarkSimple$ ./pkg/vm` in the root of https://gitlab.com/firelizzard/go-script
### What did you see happen?
Nothing but output events for the benchmark
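To illustrate why the missing event matters, here is a minimal sketch of a hypothetical consumer of the test2json stream; the sample lines below are written by hand for illustration (not captured output) and include the `run` event that a well-formed stream would contain:

```python
import json

# Hand-written test2json-style lines for the benchmark; with -v and -json
# combined, only "output" events were observed — no "run" event.
lines = [
    '{"Action":"run","Package":"pkg/vm","Test":"BenchmarkSimple"}',
    '{"Action":"output","Package":"pkg/vm","Test":"BenchmarkSimple",'
    '"Output":"BenchmarkSimple-8\\t1000000\\t1050 ns/op\\n"}',
    '{"Action":"pass","Package":"pkg/vm","Test":"BenchmarkSimple"}',
]

def started_tests(raw_lines):
    """Return the set of tests/benchmarks for which a 'run' event was seen."""
    events = (json.loads(line) for line in raw_lines)
    return {e["Test"] for e in events if e.get("Action") == "run"}

# A consumer that keys off "run" events misses the benchmark entirely
# when the event is absent, as in this report.
print(started_tests(lines))  # -> {'BenchmarkSimple'}
```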
### What did you expect to see?
An `"Action":"run"` event for the benchmark | NeedsInvestigation,GoCommand | low | Critical |
2,663,434,714 | vscode | Telemetry info table is not aligned in defaultSettings.json | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.95.3
- OS Version: Windows 11
Steps to Reproduce:
1. Press Ctrl+Shift+P.
2. Type and run `Preferences: Open Default Settings (JSON)` to open the `defaultSettings.json` file.
3. Scroll to lines 18-23 and note the table there, which is not aligned.
Expected result:
For consistency, the table should be aligned as well as possible, keeping in mind that multi-byte UTF-8 characters may make alignment somewhat harder.
| bug,telemetry | low | Critical |
2,663,526,685 | yt-dlp | [archive.org] Original files with unsupported extensions not extracted. | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a site-specific feature
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Example URLs
These are only the extensions I have become aware of so far:
[`dv`](https://archive.org/download/out_20190702): https://archive.org/details/out_20190702
[`ts`](https://archive.org/download/HDNETTestPattern1080iHDTVDD5.1MPEG2TrollHD): https://archive.org/details/HDNETTestPattern1080iHDTVDD5.1MPEG2TrollHD
[`m2ts`](https://archive.org/download/foehn): https://archive.org/details/foehn
[`mpeg`](https://archive.org/download/FENSTER_HOF): https://archive.org/details/FENSTER_HOF
### Provide a description that is worded well enough to be understood
Some videos on archive.org have original formats with extensions that are apparently not supported by yt-dlp (I guess?), leading to an inferior `derivative` format getting downloaded.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vUFs', 'https://archive.org/details/out_20190702', 'https://archive.org/details/HDNETTestPattern1080iHDTVDD5.1MPEG2TrollHD', 'https://archive.org/details/foehn', 'https://archive.org/details/FENSTER_HOF']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8 (No VT), error utf-8 (No VT), screen utf-8 (No VT)
[debug] yt-dlp version nightly@2024.11.15.232903 from yt-dlp/yt-dlp-nightly-builds [c699bafc5] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-8.1-6.3.9600-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 4.3, ffprobe 2022-07-24-git-39a538f430-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.11.15.232903 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.11.15.232903 from yt-dlp/yt-dlp-nightly-builds)
[archive.org] Extracting URL: https://archive.org/details/out_20190702
[archive.org] out_20190702: Downloading webpage
[archive.org] out_20190702: Downloading JSON metadata
[debug] Sort order given by extractor: source
[debug] Formats sorted by: hasvid, ie_pref, source, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, id
[info] Available formats for out_20190702:
ID EXT RESOLUTION | FILESIZE PROTO | VCODEC ACODEC MORE INFO
-----------------------------------------------------------------
0 ogv 400x300 | 551.24KiB https | unknown unknown derivative
1 mp4 640x480 | 738.35KiB https | unknown unknown derivative
[debug] Default format spec: bestvideo*+bestaudio/best
[info] out_20190702: Downloading 1 format(s): 1
[archive.org] Extracting URL: https://archive.org/details/HDNETTestPattern1080iHDTVDD5.1MPEG2TrollHD
[archive.org] HDNETTestPattern1080iHDTVDD5.1MPEG2TrollHD: Downloading webpage
[archive.org] HDNETTestPattern1080iHDTVDD5.1MPEG2TrollHD: Downloading JSON metadata
[debug] Sort order given by extractor: source
[debug] Formats sorted by: hasvid, ie_pref, source, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, id
[info] Available formats for HDNETTestPattern1080iHDTVDD5.1MPEG2TrollHD:
ID EXT RESOLUTION | FILESIZE PROTO | VCODEC ACODEC MORE INFO
----------------------------------------------------------------
0 ogv 533x300 | 39.56MiB https | unknown unknown derivative
1 mp4 853x480 | 59.21MiB https | unknown unknown derivative
[debug] Default format spec: bestvideo*+bestaudio/best
[info] HDNETTestPattern1080iHDTVDD5.1MPEG2TrollHD: Downloading 1 format(s): 1
[archive.org] Extracting URL: https://archive.org/details/foehn
[archive.org] foehn: Downloading webpage
[archive.org] foehn: Downloading JSON metadata
[debug] Sort order given by extractor: source
[debug] Formats sorted by: hasvid, ie_pref, source, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, id
[info] Available formats for foehn:
ID EXT RESOLUTION | FILESIZE PROTO | VCODEC ACODEC MORE INFO
----------------------------------------------------------------
0 mp4 854x480 | 12.59MiB https | unknown unknown derivative
[debug] Default format spec: bestvideo*+bestaudio/best
[info] foehn: Downloading 1 format(s): 0
[archive.org] Extracting URL: https://archive.org/details/FENSTER_HOF
[archive.org] FENSTER_HOF: Downloading webpage
[archive.org] FENSTER_HOF: Downloading JSON metadata
WARNING: "asr" field is not numeric - forcing int conversion, there is an error in extractor
[debug] Sort order given by extractor: source
[debug] Formats sorted by: hasvid, ie_pref, source, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, id
[info] Available formats for FENSTER_HOF:
ID EXT RESOLUTION | FILESIZE PROTO | VCODEC ACODEC MORE INFO
-----------------------------------------------------------------
0 mp3 unknown | 16.64MiB https | unknown mp3 derivative
1 mp4 320x240 | 100.42MiB https | unknown unknown derivative
2 ogv 400x304 | 93.95MiB https | unknown unknown derivative
3 mp4 640x480 | 143.47MiB https | unknown unknown derivative
[debug] Default format spec: bestvideo*+bestaudio/best
[info] FENSTER_HOF: Downloading 1 format(s): 3
```
| site-enhancement,triage | low | Critical |
2,663,538,406 | kubernetes | In terminating pod, status of containers is not updated | ### What happened?
When a pod with several containers is terminating, until all of those containers successfully terminate, the number of ready containers is not updated. For example, if you have a pod with two containers and one of them immediately exits and one of them has a prestop hook that sleeps for several minutes, the pod will show up as 2/2 containers ready even though one container immediately exits.
See kubelet logs here: https://gist.github.com/olyazavr/7f77673a47ce441b3a8670d509de8b16
### What did you expect to happen?
I expect the pod status to reflect the number of containers that are actually ready, even if the pod is terminating.
### How can we reproduce it (as minimally and precisely as possible)?
Create a pod with two containers, one of which has a prestop hook that sleeps:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  terminationGracePeriodSeconds: 200
  containers:
    - name: test
      image: ubuntu  # image omitted in the original report; any image providing bash works
      command: ["bash", "-c", "sleep infinity"]
    - name: test2
      image: ubuntu  # image omitted in the original report; any image providing bash works
      command: ["bash", "-c", "sleep infinity"]
      lifecycle:
        preStop:
          exec:
            command: ["bash", "-c", "sleep 180"]
```
Delete the pod, and notice that it stays at 2/2 ready until both containers die, even though the runtime acknowledges early on that one container has been successfully shut down.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.29.10
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.10
```
</details>
### Cloud provider
<details>
aws
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
NAME="AlmaLinux"
VERSION="9.3 (Shamrock Pampas Cat)"
ID="almalinux"
ID_LIKE="rhel centos fedora"
VERSION_ID="9.3"
PLATFORM_ID="platform:el9"
PRETTY_NAME="AlmaLinux 9.3 (Shamrock Pampas Cat)"
ANSI_COLOR="0;34"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:almalinux:almalinux:9::baseos"
HOME_URL="https://almalinux.org/"
DOCUMENTATION_URL="https://wiki.almalinux.org/"
BUG_REPORT_URL="https://bugs.almalinux.org/"
ALMALINUX_MANTISBT_PROJECT="AlmaLinux-9"
ALMALINUX_MANTISBT_PROJECT_VERSION="9.3"
REDHAT_SUPPORT_PRODUCT="AlmaLinux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.3"
$ uname -a
Linux ip-172-18-59-83 6.1.109-hs83.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Sep 24 17:33:44 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
crio and containerd
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,priority/backlog,sig/node,triage/accepted | low | Critical |
2,663,578,509 | vscode | vscode crashes when delete file with explorer |
Type: <b>Bug</b>
1. open an empty folder.
2. create a source file, such as 'main.cpp'
3. select the source file in explorer, and press the 'Del' key.
4. click 'Move to Recycle Bin' when the dialog box 'Are you sure you want to delete xxx' pops up.
5. vscode crashes
VS Code version: Code 1.95.3 (f1a4fb101478ce6ec82fe9627c43efbf9e98c813, 2024-11-13T14:50:04.152Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|12th Gen Intel(R) Core(TM) i7-12700F (20 x 2112)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|15.79GB (8.00GB free)|
|Process Argv|--disable-extensions .\\hello\\ --crash-reporter-id 69b68463-b17c-4114-b63b-80c1837dfdb8|
|Screen Reader|no|
|VM|40%|
</details>Extensions disabled<details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vswsl492cf:30256860
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
bdiig495:31013172
dvdeprecation:31068756
dwnewjupyter:31046869
2f103344:31071589
impr_priority:31102340
nativerepl1:31139838
pythonrstrctxt:31112756
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31181875
```
</details>
<!-- generated by issue reporter --> | bug,freeze-slow-crash-leak | low | Critical |
2,663,652,276 | PowerToys | File Explorer File/Folder Comment Adding Tool | ### Description of the new feature / enhancement
I would love to see a PowerToys app where you can right-click on a file or folder within File Explorer and add/edit/delete a comment.
### Scenario when this would be used?
This would be useful for marking which files are important and why, so you can come back to them later without needing to modify the file name.
### Supporting information
Current methods to add comments within File Explorer are only applicable to folders and are much too convoluted. | Needs-Triage | low | Minor |
2,663,652,671 | kubernetes | Kubelet receives pod scheduling late. | ### What happened?
The time at which the kubelet receives the pod creation event is much later than the time at which the scheduler successfully binds the pod.
The logs are as follows:
scheduler:
I1115 04:12:00.190703 10 schedule_one.go:286] "Successfully bound pod to node" pod="kube-system/registry-6cc84d4599-d7v2z" node="master1" evaluatedNodes=1 feasibleNodes=1
kubelet:
I1115 04:47:49.244673 504031 kubelet.go:2430] "SyncLoop ADD" source="api" pods=["kube-system/registry-6cc84d4599-d7v2z"]
### What did you expect to happen?
The kubelet should observe pod changes promptly.
### How can we reproduce it (as minimally and precisely as possible)?
N/A
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
1.31
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/support,sig/node,triage/needs-information,needs-triage | low | Major |
2,663,703,140 | TypeScript | Weird behaviour with an "evolving any" and a non-null assertion operator | ### 🔎 Search Terms
"evolving any", "non-null assertion", "compiler api"
### 🕗 Version & Regression Information
- This changed in version 5.1
### ⏯ Playground Link
https://typescript-eslint.io/play/#ts=5.5.2&fileType=.tsx&code=CYUwxgNghgTiAEAzArgOzAFwJYHtVJxwAoBKALnlWQFsAjEGeAH3jVES1RGAFgAoUJFgIU6bHni1YpClToN%2B-UZlz5qUTgEZS8AN7948CCAzwAHgG4D5%2BAF5J0klb6GzdgsSf8A9N4B6APzWxqYAnpp21r6GgdaG8QmWUd4J8bEuRibwoQBMkRnR8OmpJWYAhM6GhanFmWEAzPlVKUVBGSXxblAAzpQ09DCV8NUJ6QC%2B-EA&eslintrc=N4KABGBEBOCuA2BTAzpAXGYBfEWg&tsconfig=N4KABGBEDGD2C2AHAlgGwKYCcDyiAuysAdgM6QBcYoEEkJemy0eAcgK6qoDCAFutAGsylBm3QAacDUhFYASSSomyPAEEiATwphR6KQF8Q%2BoA&tokens=false
NOTE: I linked to the typescript-eslint playground just because it both supports `// ^?` and also provides access to the TS types so you can easily see the issue. You can ofc reproduce this in the [TS playground too](https://www.typescriptlang.org/play/?ts=5.6.3#code/CYUwxgNghgTiAEAzArgOzAFwJYHtVJxwAoBKALnlWQFsAjEGeAH3jVES1RGAChRJYCFOmx54tWKQpU6DHj2GZc+alE4BGUvADePePAggM8AB4BuPafgBecZJIX9JmwWIOeAeg8A9APyXDYwBPdRtLL30-S30Y2PNwj1iYqP1A+CCAJjD9CPgUpKSTAEJHeFyk-LSggGZsssS8-wLC+CgAZ0oaehhS8tiogF8eIA).
### 💻 Code
```ts
declare function foo(): number | undefined
declare function bar(): number
function main1() {
let x;
x = bar();
x = foo();
//^?
let y1 =
// ^?
x;
// ^?
let y2 =
// ^?
x!;
// ^?
let y3 =
// ^?
x as number;
// ^?
}
```
### 🙁 Actual behavior
```ts
declare function foo(): number | undefined
declare function bar(): number
function main1() {
let x;
x = bar();
x = foo();
//^? any ✅
let y1 =
// ^? number | undefined ✅
x;
// ^? number | undefined ✅
let y2 =
// ^? number ✅
x!;
// ^? number ❌
let y3 =
// ^? number ✅
x as number;
// ^? number | undefined ✅
}
```
### 🙂 Expected behavior
```ts
declare function foo(): number | undefined
declare function bar(): number
function main1() {
let x;
x = bar();
x = foo();
//^? any ✅
let y1 =
// ^? number | undefined ✅
x;
// ^? number | undefined ✅
let y2 =
// ^? number ✅
x!;
// ^? number | undefined ❓❓❓❓
let y3 =
// ^? number ✅
x as number;
// ^? number | undefined ✅
}
```
### Additional information about the issue
Reference: https://github.com/typescript-eslint/typescript-eslint/issues/10334
[TypeScript-ESLint has a rule called `no-unnecessary-type-assertion`](https://typescript-eslint.io/rules/no-unnecessary-type-assertion/) which, as the name implies, flags unnecessary type assertions to help keep the code clean.
One of the checks done by the rule involves flagging unnecessary non-null assertions.
The basic logic for this check is "if the type of the thing does not include `null | undefined` then the non-null assertion is unnecessary".
By "the thing" I am referring to the expression being non-null asserted, eg the `x` in `x!`.
In the case of an "evolving any", the TS APIs report an incorrect type for the variable.
As shown in the example code (specifically the `y2` case) - the type of `x` is presented as `number` - even though it should be presented as `number | undefined`. It appears that for some reason the non-null assertion modifies the type of the variable to remove the `| undefined`, rather than just emitting a non-nullish type from the non-null assertion expression.
This is made even weirder from the fact that writing `x as number` behaves correctly and the type of `x` is correctly presented as `number | undefined`.
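To make the false positive concrete, here is a minimal, hypothetical sketch of the rule's core heuristic (not the actual typescript-eslint implementation), modeling a type as the list of its union member names:

```typescript
// Hypothetical model of the rule's core check: a non-null assertion is
// flagged as unnecessary when the asserted expression's type contains
// neither `null` nor `undefined`.
function isNonNullAssertionUnnecessary(unionMembers: readonly string[]): boolean {
  return !unionMembers.includes("null") && !unionMembers.includes("undefined");
}

// With the type TS *should* report for `x` in `x!` (`number | undefined`),
// the rule correctly keeps the assertion:
console.log(isNonNullAssertionUnnecessary(["number", "undefined"])); // false

// With the type TS *actually* reports in the evolving-any case (`number`),
// the rule wrongly flags `x!` as unnecessary — the false positive above:
console.log(isNonNullAssertionUnnecessary(["number"])); // true
```

With the declared type `number | undefined` the assertion is correctly kept; with the type the compiler actually reports (`number`), the rule wrongly flags `x!`.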
This discrepancy is problematic for our lint rule because it means we currently false-positive on this case because when we query for the type of `x` it looks non-nullish -- which suggests the non-null assertion is unnecessary. But if the user removes the non-null assertion then the type of `x` changes to `number | undefined` and this causes errors. | Bug,Help Wanted | low | Critical |
2,663,703,334 | angular | Strictly typed FormGroup should have strictly-typed "get" accessor | ### Which @angular/* package(s) are the source of the bug?
forms
### Is this a regression?
No
### Description
> https://angular.dev/guide/forms/typed-forms
For instance:
```ts
private formBuilder = inject(NonNullableFormBuilder);

constructor() {
  const formGroup = this.formBuilder.group({
    someField: [""],
  });

  const control1 = formGroup.controls.someField.value; // works with no issues
  const control2 = formGroup.get("someField").value; // Object is possibly 'null'.
}
```
The `get` method is typed as follows:
```ts
get<P extends string | readonly (string | number)[]>(path: P): AbstractControl<ɵGetProperty<TRawValue, P>> | null;
```
It seems like something here should be smarter in limiting the path of `P` to include something related to the actual keys of the form group. It should be smart enough to know that the result is not `null` if the `path` is a `keyof TRawValue`, since that means it'd always be there.
```ts
typedGet: <P extends Extract<keyof T, string> | readonly (string | number)[]>(
  path: P,
) => P extends Extract<keyof T, string>
  ? AbstractControl<ɵGetProperty<T, P>>
  : null;
```
Is this something angular could reasonably support? I suppose it'd be a breaking change and would never fit into the v19 release at this point.
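As an illustrative sketch of the proposed typing (all names below — `Ctrl`, `TypedGroup`, `makeGroup` — are hypothetical stand-ins, not Angular's actual API), an overload pair can resolve a literal key to a non-nullable control while keeping arbitrary string paths nullable:

```typescript
// Minimal stand-ins for AbstractControl and FormGroup, just to show the
// overload resolution.
interface Ctrl<V> {
  readonly value: V;
}

interface TypedGroup<T extends Record<string, unknown>> {
  // Literal key of T: the control is statically guaranteed to exist.
  get<P extends Extract<keyof T, string>>(path: P): Ctrl<T[P]>;
  // Arbitrary string path: may miss, so the result stays nullable.
  get(path: string): Ctrl<unknown> | null;
}

function makeGroup<T extends Record<string, unknown>>(values: T): TypedGroup<T> {
  return {
    get(path: string): Ctrl<unknown> | null {
      return path in values ? { value: values[path] } : null;
    },
  } as TypedGroup<T>;
}

const group = makeGroup({ someField: "" });
const v: string = group.get("someField").value; // no "Object is possibly 'null'" error
console.log(v);
```

Here `group.get("someField")` resolves through the literal-key overload, so no "possibly 'null'" error is produced, while unknown paths still type as nullable.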
### Please provide a link to a minimal reproduction of the bug
_No response_
### Please provide the exception or error you saw
```true
Object is possibly 'null'.ts(2531)
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
Angular CLI: 18.2.11
Node: 18.19.0
Package Manager: npm 10.2.3
OS: darwin arm64
Angular: 18.2.11
... animations, build, cli, common, compiler, compiler-cli, core
... forms, localize, platform-browser, platform-browser-dynamic
... router
Package Version
------------------------------------------------------
@angular-devkit/architect 0.1700.10
@angular-devkit/core 18.2.11
@angular-devkit/schematics 18.2.11
@angular/cdk 18.2.13
@schematics/angular 18.2.11
rxjs 7.8.1
typescript 5.5.4
zone.js 0.14.10
```
### Anything else?
_No response_ | area: forms,forms: strictly typed | low | Critical |
2,663,705,737 | create-react-app | Default project 'dotnet react' is not working | Just created the project and executed a test run.
Created Project using Visual Studio Code
Create command: 'dotnet new react'
Run command: dotnet run
TargetFramework: net6.0
Node Version: v22.11.0
npm version: 10.9.0
Error: Microsoft.AspNetCore.SpaProxy.SpaProxyMiddleware[0]
SPA proxy is not ready. Returning temporary landing page.
trips@0.1.0 prestart
> node aspnetcore-https && node aspnetcore-react
There was an error exporting the HTTPS developer certificate to a file. Please create the target directory before exporting. Choose permissions carefully when creating it.
| needs triage,issue: bug report | low | Critical |
2,663,741,798 | kubernetes | Add flagz endpoint for kube-controller-manager | ### What would you like to be added?
Part of
- https://github.com/kubernetes/enhancements/issues/4828
Add the `/flagz` endpoint for kube-controller-manager.
Sample response:
```
----------------------------
title: Kubernetes Flagz
description: Command line flags that Kubernetes component was started with.
----------------------------
default-watch-cache-size=100
delete-collection-workers=1
enable-garbage-collector=true
encryption-provider-config-automatic-reload=false
...
```
/sig instrumentation
### Why is this needed?
Refer to [Motivation setcion](https://github.com/kubernetes/enhancements/blob/master/keps/sig-instrumentation/4828-component-flagz/README.md#motivation) in KEP. | kind/feature,sig/instrumentation,needs-triage | low | Minor |
2,663,743,053 | vscode | Emmet class expansions retain period in .erb files |
Type: <b>Bug</b>
If you expand an Emmet class shortcut (e.g.: `.something`) vscode will expand the entry to `.<div class="something"></div>`. The period at the start is incorrect.


VS Code version: Code 1.95.2 (e8653663e8840adaf45af01eab5c627a5af81807, 2024-11-07T11:07:22.054Z)
OS version: Linux x64 6.11.7-300.fc41.x86_64
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-9700T CPU @ 2.00GHz (8 x 3600)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off<br>webnn: disabled_off|
|Load (avg)|2, 1, 1|
|Memory (System)|31.18GB (21.71GB free)|
|Process Argv|. --crash-reporter-id 0ca08db6-6cd2-4398-b705-c59f2340e5d3|
|Screen Reader|no|
|VM|0%|
|DESKTOP_SESSION|gnome|
|XDG_CURRENT_DESKTOP|GNOME|
|XDG_SESSION_DESKTOP|gnome|
|XDG_SESSION_TYPE|wayland|
</details><details><summary>Extensions (58)</summary>
Extension|Author (truncated)|Version
---|---|---
language-x86-64-assembly|13x|3.1.4
icons-carbon|ant|0.2.6
vscode-eslint|dba|3.0.10
EditorConfig|Edi|0.16.4
prettier-vscode|esb|11.0.0
vscode-firefox-debug|fir|2.9.11
go|gol|0.42.1
lua-plus|jep|1.1.1
vscode-rdbg|Koi|0.2.2
cortex-debug|mar|1.12.1
debug-tracker-vscode|mcu|0.0.15
memory-view|mcu|0.0.25
peripheral-viewer|mcu|1.4.6
rtos-views|mcu|0.0.7
dotenv|mik|1.0.1
vscode-docker|ms-|1.29.3
debugpy|ms-|2024.12.0
python|ms-|2024.20.0
vscode-pylance|ms-|2024.11.2
remote-containers|ms-|0.388.0
remote-ssh|ms-|0.115.1
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.5
vscode-remote-extensionpack|ms-|0.26.0
cpptools|ms-|1.22.11
cpptools-extension-pack|ms-|1.3.0
makefile-tools|ms-|0.11.13
powershell|ms-|2024.4.0
remote-explorer|ms-|0.4.3
remote-server|ms-|1.5.2
vscode-serial-monitor|ms-|0.13.1
playdate|Ort|0.9.3
pico-w-go|pau|4.0.6
material-icon-theme|PKi|5.14.1
java|red|1.36.0
vscode-xml|red|0.27.1
rust-analyzer|rus|0.3.2180
ruby-lsp|Sho|0.8.13
ayu-green|Sir|1.0.1
lua|sum|3.13.1
svelte-vscode|sve|109.2.3
even-better-toml|tam|0.19.2
ayu|tea|1.0.5
vscode-standard-ruby|tes|0.0.16
vscode-tinygo|tin|0.5.0
cmake|twx|0.0.17
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
vscode-gradle|vsc|3.16.4
vscode-java-debug|vsc|0.58.1
vscode-java-dependency|vsc|0.24.1
vscode-java-pack|vsc|0.29.0
vscode-java-test|vsc|0.43.0
vscode-maven|vsc|0.44.0
vscode-icons|vsc|12.9.0
volar|Vue|2.1.10
material-theme|zhu|3.17.6
linkerscript|Zix|1.0.4
(5 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
945dj816:31013170
dvdeprecation:31068756
dwnewjupyter:31046869
newcmakeconfigv2:31071590
impr_priority:31102340
nativerepl1:31139838
pythonrstrctxt:31112756
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
j44ff735:31181874
```
</details>
<!-- generated by issue reporter --> | bug,emmet,confirmation-pending | low | Critical |
2,663,812,522 | vscode | Issue Reporter is counter productive | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.95.3
- OS Version: Linux x64 6.8.0-48-generic
Steps to Reproduce:
1. Use "Help - Report Issue".
2. Issue Reporter prompts.
Problems:
* It does not allow user to copy collected information.
* It forces user to fill in title/steps before "Preview on Github" is enabled.
* Issue Reporter is unhelpful for writing a bug report, and it does not generate an issue that matches the VS Code or extension issue template.
Expected Behavior:
- Collected information can be copied.
- "Preview on Github" can create a blank issue with correct template. | bug,issue-reporter | low | Critical |
2,663,828,876 | electron | Minimum size constraints snap exactly to aspect ratio ignoring exempt size | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.2.0
### What operating system(s) are you using?
macOS
### Operating System Version
macOS Sonoma 14.7
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
< v30.3.1.
### Expected Behavior
`minWidth` and `minHeight` should account for `extraSize` when clamping to an aspect ratio.
### Actual Behavior
Minimum size constraints seemingly clamp to an aspect ratio but ignore the [extraSize parameter](https://www.electronjs.org/docs/latest/api/browser-window#winsetaspectratioaspectratio-extrasize).
https://github.com/user-attachments/assets/ce33d140-e9fa-4872-a71f-39fdef4d8f1e
### Testcase Gist URL
https://gist.github.com/jesse-gibson-dd/f5d73f3ac84c60f3edd8d3b232c08b11
### Additional Information
I'm building a window that's basically a wrapper around a video and has controls. The inner video must be scaled according to an aspect ratio and the outer controls remain constant. An ideal use case for [extraSize](https://www.electronjs.org/docs/latest/api/browser-window#winsetaspectratioaspectratio-extrasize). However, this window can't get too small, otherwise the controls get smooshed and become sort of useless.
I noticed that if you hit the exact pixel of a minimum height OR minimum width, it instantly snaps to the aspect ratio of the video, not the aspect ratio of the whole window (taking `extraSize` into account). The controls are rather large. The aspect skew is pretty noticeable.
In the meantime I'm recalculating the aspect ratio on `resize` and snapping the window to match. It's not great UX, but it avoids the skew.
I first noticed this on Electron `v30.3.1`. Repro'd on latest `v33.2.0`. | platform/macOS,bug :beetle:,status/confirmed,has-repro-gist,33-x-y,34-x-y | low | Critical |
2,663,921,534 | godot | Disabling "Script Editor" editor feature causes external script editor to not auto-launch | ### Tested versions
v4.4.dev4.mono.official [36e6207bb]
### System information
Godot v4.4.dev4.mono - Windows 10.0.22631 - Multi-window, 2 monitors - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 4070 Ti SUPER (NVIDIA; 32.0.15.6614) - 13th Gen Intel(R) Core(TM) i9-13900K (32 threads)
### Issue description
Disabling the "Script Editor" editor feature prevents Godot from launching the configured external editor.
In my case, I use C# and JetBrains Rider. If I have the Script Editor feature enabled, clicking the script icon in a scene correctly opens Rider, but if I disable the Script Editor feature, clicking the script icon no longer has any effect.
### Steps to reproduce
1. Go to Editor > Manage Editor Features...
2. Under "Main Features:", disable the "Script Editor" feature
3. Try opening a C# script by clicking the script icon on a node in the scene hierarchy
### Minimal reproduction project (MRP)
N/A | discussion,topic:editor | low | Major |
2,663,928,961 | react-native | RCTUnsafeExecuteOnMainQueueSync crash | ### Description
UIKit's UIView is only guaranteed to run on the main thread/MainActor.
But React Native's RCTRootView expects to run on the main queue.
This will cause a crash when using RCTRootView in a UICollectionViewCell, whose init may be called on the `com.apple.uikit.datasource.diffing` (serial) queue.
### Steps to reproduce
Initialize a `RCTRootView` in a `UICollectionViewCell.init` method.
### React Native Version
0.76.2
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 15.1
CPU: (10) arm64 Apple M2 Pro
Memory: 193.73 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 23.2.0
path: /opt/homebrew/bin/node
Yarn:
version: 1.22.22
path: /opt/homebrew/bin/yarn
npm:
version: 10.9.0
path: /opt/homebrew/bin/npm
Watchman:
version: 2024.11.11.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /Users/kyle/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.0
- iOS 18.0
- macOS 15.0
- tvOS 18.0
- visionOS 2.0
- watchOS 11.0
Android SDK: Not Found
IDEs:
Android Studio: Not Found
Xcode:
version: 16.0/16A242d
path: /usr/bin/xcodebuild
Languages:
Java: Not Found
Ruby:
version: 3.3.0
path: /Users/kyle/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.2
wanted: 0.76.2
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
- dataSource.apply(_:to:animatingDifferences:) (in main queue & MainThread)
- __UIDiffableDataSource _performOnApplyQueue: use dispatch_sync(diffQueue)
- ...
- RCTRootView init (in diffQueue & MainThread)
- RCTUnsafeExecuteOnMainQueueSync (crash)
```
### Reproducer
https://github.com/Kyle-Ye/RNMainQueueCrashDemo
### Screenshots and Videos
<img width="963" alt="image" src="https://github.com/user-attachments/assets/9ee7d0c8-619a-48ad-b550-b62836270784">
### Other information
The original discussion can be found at [here](https://x.com/KyleSwifter/status/1856704385285566896)
- http://blog.benjamin-encz.de/post/main-queue-vs-main-thread/
| Issue: Author Provided Repro | low | Critical |
2,663,952,503 | svelte | svelte component not reactive | ### Describe the bug
Hi, here `activeGrid` is reactive state. I can see the `activeGrid` text change, but the `GridSvg` component below does not re-render when `activeGrid` changes.

```svelte
{#snippet button()}
  <div>
    {activeGrid} - this changes, but not below
    <GridSvg template={activeGrid} {dimension} {strokeWidth} />
  </div>
{/snippet}

{#snippet content({ togglePopup })}
  <div class="gridcontainer">
    content
  </div>
{/snippet}

<PopMenu {button} {content} />
```

I even tried making `template` a bindable option in my GridSvg, but it is still not reactive.
Hi , here activeGrid is reactive state , i can see activeGrid text changes.. but GridSvg with reactive component is not re-rendering when activeGrid is changed.
### Logs
_No response_
### System Info
```shell
mac
```
### Severity
blocking all usage of svelte | awaiting submitter | low | Critical |
2,663,994,319 | langchain | langchain-pinecone 0.2.0 doesn't support Python 3.13 | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
--
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The current `langchain-pinecone` package (`v0.2.0`) depends on pinecone-client version `^5.0.0`. However, the latest release in the `5.0.x` series is `5.0.1`, which was last updated on **August 1, 2024**, and does not support Python 3.13.
The most recent version of `pinecone` on PyPI is `5.4.0`, released on **November 13, 2024**, and it includes support for Python 3.13.
Updating the dependency in `langchain-pinecone` to use `pinecone` version `^5.4.0` or newer would enable compatibility with Python 3.13 and ensure access to the latest features and improvements in the pinecone sdk.
https://pypi.org/project/pinecone/
https://pypi.org/project/pinecone-client/
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.1.0: Thu Oct 10 21:05:14 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T8103
> Python Version: 3.12.0 (v3.12.0:0fb18b02c8, Oct 2 2023, 09:45:56) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.3.19
> langchain: 0.3.7
> langchain_community: 0.3.7
> langsmith: 0.1.143
> langchain_experimental: 0.3.3
> langchain_openai: 0.2.8
> langchain_pinecone: 0.2.0
> langchain_text_splitters: 0.3.2
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.9.5
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.54.4
> orjson: 3.10.11
> packaging: 24.2
> pinecone-client: 5.0.1
> pydantic: 2.9.2
> pydantic-settings: 2.6.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2 | investigate | low | Critical |
2,664,051,643 | opencv | CUDA_DISABLER macro is not defined in OpenCV code | ### System Information
OpenCV version: 4.10.0
Operating System / Platform: Ubuntu 22.04
Compiler & compiler version: GCC 11.4.0
### Detailed description
The following files refer to `CUDA_DISABLER`:
- https://github.com/opencv/opencv/blob/4.10.0/modules/stitching/src/cuda/build_warp_maps.cu
- https://github.com/opencv/opencv/blob/4.10.0/modules/stitching/src/cuda/multiband_blend.cu
But this macro is not defined anywhere in the OpenCV code.
I found that the definition of this macro was removed in <https://github.com/opencv/opencv_contrib/pull/2554>
### Steps to reproduce
```bash
git clone https://github.com/opencv/opencv.git -b 4.10.0
cd opencv
grep -r CUDA_DISABLER
```
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: build/install,category: gpu/cuda (contrib) | low | Minor |
2,664,062,120 | TypeScript | `RegExp#[Symbol.matchAll]` should return iterable of `RegExpExecArray` instead of `RegExpMatchArray` | ### 🔎 Search Terms
RegExp, matchAll, Symbol.matchAll, RegExpExecArray, RegExpMatchArray
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about RegExp, matchAll, Symbol.matchAll, RegExpExecArray, RegExpMatchArray
### ⏯ Playground Link
https://www.typescriptlang.org/play/?target=99#code/MYewdgzgLgBATgUxgXhgegHRoOYChSSzRwowDkAhgEbBm64BmIJAFAdDALYwgMzEZOFKMAAWAQQA2klogCUcmAG9cMNejQ8A1jBYIAHgAcEwKAgAm8BBACukqHNXr2sAJYAuGGBucqCEqicGK5g5gbqMLgAvvRMrC5cPHyIANoAygCeviCSgsJiUpIAuizECspOamiaACoZxuTevv4wAD4wNqEIDCEWZDCuEF4gsBQQEK7YYNSSSFAgMFD1SGRNfnBkGJUwCR5ePuukQSFh+tG4QA
### 💻 Code
```ts
const re = /./g
const str = 'abc'
for (const m of str.matchAll(re)) {
// ok (expected result)
const i: number = m.index
}
for (const m of re[Symbol.matchAll](str)) {
// Type 'number | undefined' is not assignable to type 'number'.
const i: number = m.index
}
```
### 🙁 Actual behavior
While `String#matchAll` now correctly returns `RegExpStringIterator<RegExpExecArray>` (https://github.com/microsoft/TypeScript/issues/36788, fixed in https://github.com/microsoft/TypeScript/pull/55565), the identically-behaving `RegExp#[Symbol.matchAll]` still returns the incorrect type `RegExpStringIterator<RegExpMatchArray>`
### 🙂 Expected behavior
`String#matchAll` and `RegExp#[Symbol.matchAll]` to behave identically at compile time as well as runtime, i.e. both `String#matchAll` and `RegExp#[Symbol.matchAll]` to consistently return `RegExpStringIterator<RegExpExecArray>`.
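At runtime the two entry points already behave identically, which is easy to confirm with a small demonstration (only the declared element types differ today):

```typescript
// Both call paths yield the same matches over the same input; only the
// static type of `m.index` currently differs between them.
const re1 = /./g;
const re2 = /./g;
const viaString = [..."abc".matchAll(re1)].map((m) => m.index);
const viaSymbol = [...re2[Symbol.matchAll]("abc")].map((m) => m.index);
console.log(viaString); // [0, 1, 2]
console.log(viaSymbol); // [0, 1, 2]
```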
### Additional information about the issue
_No response_ | Bug,Help Wanted,Domain: lib.d.ts | low | Minor |
2,664,121,266 | tauri | [bug] Tauri dev failing when adding tailwind to SvelteKit project | ### Describe the bug
After adding tailwindcss to a brand new project, I get an error when running `pnpm tauri dev`.
But works when running `pnpm run dev` and opening in the browser.
Seems similar to this issue: https://github.com/tauri-apps/tauri/issues/11482
### Reproduction
- Create a new project with `pnpm create tauri-app`. All default options, except selecting Svelte as the framework.
- Add tailwind with `pnpm dlx sv add tailwindcss`
- Run `pnpm tauri dev`
### Expected behavior
Should be able to run the app, instead of crashing.
### Full `tauri info` output
```text
[✔] Environment
- OS: EndeavourOS Rolling Release x86_64 (X64)
✔ webkit2gtk-4.1: 2.46.3
✔ rsvg2: 2.59.2
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: 1.82.0-x86_64-unknown-linux-gnu (default)
- node: 22.11.0
- pnpm: 9.13.2
- npm: 10.9.0
[-] Packages
- tauri 🦀: 2.1.1
- tauri-build 🦀: 2.0.3
- wry 🦀: 0.47.0
- tao 🦀: 0.30.8
- tauri-cli 🦀: 2.0.4
- @tauri-apps/api : 2.1.1
- @tauri-apps/cli : 2.1.0
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.2
- @tauri-apps/plugin-shell : 2.0.1
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../build
- devUrl: http://localhost:1420/
- framework: Svelte
- bundler: Vite
```
### Stack trace
```text
10:39:54 AM [vite] Internal server error: [postcss] /home/tore/workspace/query-nest/src/routes/+page.svelte?svelte&type=style&lang.css:2:12: Unknown word
Plugin: vite:css
File: /home/tore/workspace/query-nest/src/routes/+page.svelte?svelte&type=style&lang.css:2:11
1 | <script lang="ts">
2 | import { invoke } from "@tauri-apps/api/core";
| ^
3 |
4 | let name = $state("");
at Input.error (/home/tore/workspace/query-nest/node_modules/.pnpm/postcss@8.4.49/node_modules/postcss/lib/input.js:106:16)
at Parser.unknownWord (/home/tore/workspace/query-nest/node_modules/.pnpm/postcss@8.4.49/node_modules/postcss/lib/parser.js:593:22)
at Parser.other (/home/tore/workspace/query-nest/node_modules/.pnpm/postcss@8.4.49/node_modules/postcss/lib/parser.js:435:12)
at Parser.parse (/home/tore/workspace/query-nest/node_modules/.pnpm/postcss@8.4.49/node_modules/postcss/lib/parser.js:470:16)
at parse (/home/tore/workspace/query-nest/node_modules/.pnpm/postcss@8.4.49/node_modules/postcss/lib/parse.js:11:12)
at new LazyResult (/home/tore/workspace/query-nest/node_modules/.pnpm/postcss@8.4.49/node_modules/postcss/lib/lazy-result.js:133:16)
at Processor.process (/home/tore/workspace/query-nest/node_modules/.pnpm/postcss@8.4.49/node_modules/postcss/lib/processor.js:53:14)
at compileCSS (file:///home/tore/workspace/query-nest/node_modules/.pnpm/vite@5.4.11/node_modules/vite/dist/node/chunks/dep-CB_7IfJ-.js:36897:59)
at async TransformPluginContext.transform (file:///home/tore/workspace/query-nest/node_modules/.pnpm/vite@5.4.11/node_modules/vite/dist/node/chunks/dep-CB_7IfJ-.js:36170:11)
at async PluginContainer.transform (file:///home/tore/workspace/query-nest/node_modules/.pnpm/vite@5.4.11/node_modules/vite/dist/node/chunks/dep-CB_7IfJ-.js:49096:18)
```
### Additional context
_No response_ | type: bug,status: upstream,platform: Linux,status: needs triage | low | Critical |
2,664,157,722 | PowerToys | Highlight diff for PowerRename | ### Description of the new feature / enhancement
Highlight only changed parts of the renamed file name (like git diff)
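As an illustration of what such a highlight would compute (a sketch only, not PowerRename code), the changed spans between the old and new file names can be derived with a sequence matcher:

```python
import difflib

def changed_spans(old: str, new: str):
    """Return the non-equal opcodes between the old and new file names --
    exactly the spans a git-diff-style highlight would color."""
    matcher = difflib.SequenceMatcher(a=old, b=new, autojunk=False)
    return [
        (tag, old[i1:i2], new[j1:j2])
        for tag, i1, i2, j1, j2 in matcher.get_opcodes()
        if tag != "equal"
    ]

print(changed_spans("a.txt", "b.txt"))  # [('replace', 'a', 'b')]
```

The UI would then render only the `replace`/`insert`/`delete` spans in a highlight color and leave the `equal` spans plain.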

### Scenario when this would be used?
Always.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,664,179,853 | TypeScript | Missing Boolean.toString definition and loose valueOf definition | ### ⚙ Compilation target
ES2017
### ⚙ Library
lib.es5.d.ts
### Missing / Incorrect Definition
```typescript
interface Boolean {
toString<T extends boolean>(this: T): `${T}`
toString(this: Boolean): `${boolean}`
valueOf<T extends boolean>(this: T): T
valueOf(this: Boolean): boolean
}
```
[TS Playground](https://www.typescriptlang.org/play/?#code/PTAEEsDsBcFMCcBmBDAxrUAhA9tgNrMpKAN4BQIoVV02AytPFAOYA8AKqLAB5yQAmAZ1AAjXASIA+ABTQAFuEEAuUOwCUKgAYASEuwC+mimGqhaDJpGayFyrOMKQNoHSTH5Hh46dAA3ZHgArrAA8ogcXLywAsLuEpAy8ooq6inepv5BoYg2yfYeRM5xjt76ZGSo2JCC0KLioAC8ZvDBoMixDkQA3OVkBLXIWrrFRIaNzcEAdOaMLNJqPf2iQ26dkGNNKHiCsNP0s1bzi7C1qCsj65rjcXsWcwt9J6D852sboJCwAO758fO3B2sDyWsFeBUu4xw4P+M0sQJ6j1qiBUjFaTVRu0ywTCR0RoGYKi2O3GRMxAWxOQeQA)
Related issues [#30225](https://github.com/microsoft/TypeScript/issues/30225), [#38347](https://github.com/microsoft/TypeScript/issues/38347)
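The runtime behavior that the narrowed overloads above would describe is already guaranteed by the spec, so the narrowing is safe; a quick check:

```typescript
// Runtime check: Boolean#toString and Boolean#valueOf already behave exactly
// as the proposed overloads would declare.
const s1 = true.toString();
const s2 = false.toString();
const v = false.valueOf();
console.log(s1, s2, v); // "true" "false" false
```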
### Sample Code
```TypeScript
const bool = true as boolean;
let a: `${boolean}` = true.toString();
let b: `${boolean}` = false.toString();
let c: `${boolean}` = bool.toString();
let d: `${boolean}` = new Boolean().toString();
let e: `${boolean}` = Boolean().toString();
let f: true = true.valueOf();
let g: false = false.valueOf();
```
### Documentation Link
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Boolean/toString
https://tc39.es/ecma262/multipage/fundamental-objects.html#sec-boolean.prototype.tostring
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Boolean/valueOf
https://tc39.es/ecma262/multipage/fundamental-objects.html#sec-boolean.prototype.valueof | Suggestion,Awaiting More Feedback | low | Major |
2,664,183,859 | material-ui | [icons] Support converting icons to PNG | ### Steps to reproduce
[ Question doesn't need reproduction. ]
### Current behavior
I have also tried `ReactDOMServer.renderToString`, `ReactDOMServer.renderToStaticMarkup`, `ReactDOM.render` and friends to retrieve the SVG string for `sharp` to convert, but they return an invalid XML document, crashing `sharp`. Various issues opened over there.
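One plausible reason `sharp` rejects the markup is that React's serializers omit the `xmlns` attribute, so the string is not valid standalone XML. A hedged workaround sketch (assuming the icon renders a single root `<svg>` element):

```typescript
// Sketch: patch the xmlns attribute onto markup produced by
// ReactDOMServer.renderToStaticMarkup before handing it to sharp.
// (Assumption: the markup has a single root <svg> element.)
function ensureSvgNamespace(markup: string): string {
  if (markup.includes("xmlns=")) return markup;
  return markup.replace("<svg", '<svg xmlns="http://www.w3.org/2000/svg"');
}

const raw = '<svg viewBox="0 0 24 24"><path d="M7 10l5 5 5-5z"/></svg>';
console.log(ensureSvgNamespace(raw));
```

With the namespace present, the usual `sharp(Buffer.from(svg)).png().toBuffer()` path should accept the input.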
### Expected behavior
We need PNGs for things like email which do not render SVGs.
### Context
For context, we have the $600 paid X license.
Not relevant to this ticket, but 🤷🏻
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
System:
OS: Windows 10 10.0.19045
Binaries:
Node: 22.11.0 - C:\Program Files\nodejs\node.EXE
npm: 10.8.3 - C:\Program Files\nodejs\npm.CMD
pnpm: Not Found
Browsers:
Chrome: Not Found
Edge: Chromium (128.0.2739.54)
npmPackages:
@emotion/react: ^11.13.3 => 11.13.3
@emotion/styled: ^11.13.0 => 11.13.0
@mui/core-downloads-tracker: 6.1.6
@mui/icons-material: ^6.1.6 => 6.1.6
@mui/material: ^6.1.6 => 6.1.6
@mui/material-nextjs: ^6.1.6 => 6.1.6
@mui/private-theming: 6.1.6
@mui/styled-engine: 6.1.6
@mui/system: ^6.1.6 => 6.1.6
@mui/types: 7.2.19
@mui/utils: 6.1.6
@mui/x-data-grid: 7.22.2
@mui/x-data-grid-premium: ^7.22.2 => 7.22.2
@mui/x-data-grid-pro: 7.22.2
@mui/x-internals: 7.21.0
@mui/x-license: 7.21.0
@types/react: ^18.3.12 => 18.3.12
react: ^18.3.1 => 18.3.1
react-dom: ^18.3.1 => 18.3.1
typescript: ^5.6.3 => 5.6.3
```
</details>
**Search keywords**: icon, png, email, inline, embed, sharp, convert | new feature,waiting for 👍,package: icons | low | Critical |
2,664,193,529 | rust | Add mention that `ilog2` and `checked_ilog2` can be used to get the number of bits in an integer | ### Location
- [`{integer}::ilog2`](https://doc.rust-lang.org/nightly/core/primitive.i32.html#method.ilog2)
- [`{integer}::checked_ilog2`](https://doc.rust-lang.org/nightly/core/primitive.i32.html#method.checked_ilog2)
- [`NonZero::ilog2`](https://doc.rust-lang.org/nightly/core/num/struct.NonZero.html#method.ilog2)
### Summary
The standard library of some programming languages has a method to get the number of bits in an integer:
- [`bits.Len()`](https://pkg.go.dev/math/bits@go1.23.3#Len) in Go
- [`int.bit_length()`](https://docs.python.org/3.13/library/stdtypes.html#int.bit_length) in Python
- [`Integer.bit_length`](https://docs.ruby-lang.org/en/3.3/Integer.html#method-i-bit_length) in Ruby
The number of bits in binary representation of a positive integer $n$ is the integer part of $1 + \log_2 n$, i.e.
> $\lfloor \log_2 n \rfloor + 1$
Rust doesn't have a method like the above, but it does have [`ilog2`](https://doc.rust-lang.org/nightly/core/primitive.i32.html#method.ilog2) and [`checked_ilog2`](https://doc.rust-lang.org/nightly/core/primitive.i32.html#method.checked_ilog2). Therefore, we can get the number of bits in an integer by using `ilog2` or `checked_ilog2` as follows:
```rust
// n > 0
assert_eq!(42_u8.ilog2() + 1, 6);
// supports `n == 0`
assert_eq!(u8::MIN.checked_ilog2().map_or(0, |n| n + 1), 0);
// similar to Python's `int.bit_length()`
assert_eq!((-42_i8).unsigned_abs().ilog2() + 1, 6);
```
I don't think everyone knows that we can get the number of bits in an integer by this way.
Personally, I don't think there's any need to add a method like `bits.Len()` and `int.bit_length()` to integer types, since we can get the number of bits in an integer using $\lfloor \log_2 n \rfloor + 1$. However, since the standard library of some programming languages has a method to get the number of bits in an integer, I think this is a feature that is in some demand.
Therefore, I think it would be helpful to mention in the explanation of `ilog2` and `checked_ilog2` that the number of bits in an integer can be obtained by using $\lfloor \log_2 n \rfloor + 1$.
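For instance, a `bit_length`-style helper (a sketch of the formula above, covering zero via `checked_ilog2`) could be shown alongside those docs:

```rust
/// Number of bits needed to represent `n` in binary: floor(log2(n)) + 1,
/// with the convention that 0 needs 0 bits (matching Python's int.bit_length).
fn bit_length(n: u64) -> u32 {
    n.checked_ilog2().map_or(0, |log| log + 1)
}

fn main() {
    assert_eq!(bit_length(0), 0);
    assert_eq!(bit_length(1), 1);
    assert_eq!(bit_length(42), 6);
    assert_eq!(bit_length(u64::MAX), 64);
    println!("ok");
}
```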
See also: [Rust Internals](https://internals.rust-lang.org/t/add-methods-that-return-the-number-of-bits-necessary-to-represent-an-integer-in-binary-to-the-standard-library/21870) | A-docs,T-libs | low | Minor |
2,664,194,084 | PowerToys | Focus/Study mode | ### Description of the new feature / enhancement
This feature would stop you from using certain apps which could hamper productivity.
### Scenario when this would be used?
Useful for students or professionals looking to focus on their work/studies
### Supporting information
An app with similar functionality to Focus mode in Android phones
https://www.samsung.com/ae/support/mobile-devices/how-to-use-the-focus-mode-feature-in-android-q-with-one-ui-20/ | Needs-Triage | low | Minor |
2,664,265,010 | node | resolve hook is not run for require | ### Version
23.0.0
### Platform
```text
Linux hooks 6.8.0-47-generic #47~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Oct 2 16:16:55 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
```
### Subsystem
customization hooks
### What steps will reproduce the bug?
Set up a [module resolution hook](https://nodejs.org/docs/latest-v22.x/api/module.html#resolvespecifier-context-nextresolve) that replaces the core `fs` import with `fake.js`. Then try to import and require `fs`, respectively. When importing, `fs` is replaced as expected. When requiring, the hook is never run.
**For the setup:**
_register.js:_
```
import { register } from 'node:module';
register('./hook.js', import.meta.url);
```
_hook.js:_
```
import { fileURLToPath } from 'node:url';
import { dirname, join } from 'node:path';
export async function resolve(specifier, context, nextResolve) {
const path = fileURLToPath(import.meta.url);
const dir = dirname(path);
if (/fs/.test(specifier)) specifier = join(dir, 'fake.js');
return nextResolve(specifier, context);
};
```
_fake.js:_
```
export function readFileSync() { return 'foo'; }
```
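(As an aside — and not the cause of this bug — note that `/fs/.test(specifier)` in the hook is a substring match, so it would also rewrite unrelated specifiers:)

```javascript
// The hook's predicate matches any specifier merely containing "fs".
const shouldRewrite = (specifier) => /fs/.test(specifier);

console.log(shouldRewrite("fs"));       // true
console.log(shouldRewrite("node:fs"));  // true
console.log(shouldRewrite("fs-extra")); // true — probably unintended
console.log(shouldRewrite("./app.js")); // false
```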
**Now we're ready for the money part:**
_index.js_
```
import {readFileSync} from 'fs';
import { fileURLToPath } from 'node:url';
console.log(readFileSync(fileURLToPath(import.meta.url), 'utf8')); // 'foo'
```
_index.cjs:_
```
const readFileSync = require('fs').readFileSync;
console.log(readFileSync(__filename, 'utf8')); // Prints out the source file
```
NB: `type` is set to `module` in `package.json`.
### How often does it reproduce? Is there a required condition?
Can be reliably reproduced every time.
### What is the expected behavior? Why is that the expected behavior?
Resolve hook should be run even for `require`, and replace `fs` with `fake.js`. The output should be `foo`.
I am basing this expectation off of the documentation (emphasis mine):
[`module#resolve`](https://nodejs.org/docs/latest-v22.x/api/module.html#resolvespecifier-context-nextresolve)
> The resolve hook chain is responsible for telling Node.js where to find and how to cache a given import statement or expression, **or require call**.
[section "enabling"](https://nodejs.org/docs/latest-v22.x/api/module.html#enabling):
> my-app.js can also be CommonJS. Customization hooks will run for any modules that it references via import **(and optionally require)**.
### What do you see instead?
`node --import=./register.js index.js` produces `foo`, as expected.
`node --import=./register.js index.cjs` prints out the source file - bad. Annotating the resolve hook with a `console.log` statement shows it is never run.
### Additional information
I have asked about this on [Stack Overflow](https://stackoverflow.com/questions/79122415/resolve-hook-not-run-for-require) but not received any answers. | loaders | low | Critical |
2,664,274,720 | deno | thread 'main' panicked at ext/cache/sqlite.rs on cache creation | Version: Deno 2.0.6
Script:
```ts
await caches.open("example")
```
Command: `deno run -A main.ts`
Error:
```
============================================================
Deno has panicked. This is a bug in Deno. Please report this
at https://github.com/denoland/deno/issues/new.
If you can reliably reproduce this panic, include the
reproduction steps and re-run with the RUST_BACKTRACE=1 env
var set and include the backtrace in your report.
Platform: linux x86_64
Version: 2.0.6
Args: ["deno", "run", "main.ts"]
thread 'main' panicked at ext/cache/sqlite.rs:48:10:
failed to create cache dir: Os { code: 13, kind: PermissionDenied, message: "Permission denied" }
stack backtrace:
0: rust_begin_unwind
1: core::panicking::panic_fmt
2: core::result::unwrap_failed
3: deno_cache::sqlite::SqliteBackedCache::new
4: deno_runtime::worker::MainWorker::from_options::{{closure}}::{{closure}}
5: deno_cache::get_cache
6: deno_cache::op_cache_storage_open::op_cache_storage_open<CA>::call::{{closure}}
7: <deno_core::runtime::op_driver::futures_unordered_driver::FuturesUnorderedDriver<C> as deno_core::runtime::op_driver::OpDriver<C>>::submit_op_fallible
8: deno_cache::op_cache_storage_open::op_cache_storage_open<CA>::v8_fn_ptr
9: Builtins_CallApiCallbackGeneric
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
```
Here is the output of the `deno info` command:
```
DENO_DIR location: /home/pomdtr.me/.cache/deno
Remote modules cache: /home/pomdtr.me/.cache/deno/remote
npm modules cache: /home/pomdtr.me/.cache/deno/npm
Emitted modules cache: /home/pomdtr.me/.cache/deno/gen
Language server registries cache: /home/pomdtr.me/.cache/deno/registries
Origin storage: /home/pomdtr.me/.cache/deno/location_data
```
I'm not sure why deno is unable to create the cache dir. I did run:
```
chown -R pomdtr.me:pomdtr.me /home/pomdtr.me/.cache
```
And checked that the script was running under my pomdtr.me uid using `Deno.uid`.
I can create a directory in the cache folder just fine using `mkdir` | needs investigation | low | Critical |
2,664,275,828 | deno | make deno installable in stackblitz | Attempting to install Deno inside a Stackblitz Web Container fails immediately (note that currently `curl` in a Stackblitz Web Container environment doesn't support `-f` nor `-S`):
```console
❯ curl -sL https://deno.land/install.sh | sh
curl: socket hang up
```
Inspecting network traffic for the browser document via DevTools reveals that the first issue is a CORS issue:
> Access to fetch at 'https://deno.land/install.sh' from origin 'https://something.w-credentialless-staticblitz.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
I then tried pasting the contents of https://deno.land/install.sh into a file and running it directly:
```console
❯ sh install.sh
install.sh: line 7: unsupported shell syntax
```
For reference (and as the install.sh script might change with time), here is line 7:
```sh
if ! command -v unzip >/dev/null && ! command -v 7z >/dev/null; then
```
The Stackblitz Web Container shell (`/bin/jsh`/) appears to have limited functionality but as Stackblitz supports Node.js it seems to me that supporting Deno should also be feasible.
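If the blocker really is the negated `command` test on line 7, a more conservative formulation of the same check avoids `!` and `&&` entirely (a sketch; I have not verified jsh's exact grammar):

```shell
# Same availability check as install.sh line 7, written without `!` negation
# or `&&` chaining, which limited shells are more likely to accept.
have_extractor=no
if command -v unzip >/dev/null 2>&1; then
  have_extractor=yes
elif command -v 7z >/dev/null 2>&1; then
  have_extractor=yes
fi
echo "have_extractor=$have_extractor"
```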
It might require some special setup from Stackblitz though. I've also created https://github.com/stackblitz/webcontainer-core/issues/1608. | suggestion | low | Minor |
2,664,275,943 | pytorch | [Windows] pytorch >= 2.5 | ### 🐛 Describe the bug
I built PyTorch from GitHub, but when I try to use it I get this error:
```
Traceback (most recent call last):
File "C:\Users\Admin\Desktop\Python\0.LLMs\AutoAWQ\AutoAWQ\setup.py", line 2, in <module>
import torch
File "C:\Users\Admin\Desktop\Python\0.LLMs\AutoAWQ\venv\Lib\site-packages\torch\_init_.py", line 262, in <module>
_load_dll_libraries()
File "C:\Users\Admin\Desktop\Python\0.LLMs\AutoAWQ\venv\Lib\site-packages\torch\_init_.py", line 258, in _load_dll_libraries
raise err
OSError: [WinError 126] Não foi possível encontrar o módulo especificado. Error loading "C:\Users\Admin\Desktop\Python\0.LLMs\AutoAWQ\venv\Lib\site-packages\torch\lib\shm.dll" or one of its dependencies.
```
But the file is there.
### Versions
(venv) C:\Users\Admin\Desktop\Python\0.LLMs\AutoAWQ>python collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Pro (10.0.19045 64 bits)
GCC version: (MinGW-W64 x86_64-ucrt-posix-seh, built by Brecht Sanders, r2) 14.2.0
Clang version: 19.1.3
CMake version: version 3.30.4
Libc version: N/A
Python version: 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19045-SP0
Is CUDA available: N/A
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce MX330
Nvidia driver version: 566.14
cuDNN version: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\bin\cudnn_ops64_9.dll
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Name: Intel(R) Core(TM) i5-10210U CPU @ 1.60GHz
Manufacturer: GenuineIntel
Family: 205
Architecture: 9
ProcessorType: 3
DeviceID: CPU0
CurrentClockSpeed: 1609
MaxClockSpeed: 2112
L2CacheSize: 1024
L2CacheSpeed: None
Revision: None
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] torch==2.5.0a0+gita8d6afb
[pip3] triton==3.1.0
[conda] Could not collect
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex | module: build,module: windows,triaged | low | Critical |
2,664,279,543 | next.js | Query parameters incorrect when navigating from App Router to Pages Router | ### Link to the code that reproduces this issue
https://github.com/ledbetterljoshua/link-bug-reproduction
### To Reproduce
### Live Demo
https://link-bug-reproduction.vercel.app/
### Repository
https://github.com/ledbetterljoshua/link-bug-reproduction
### Steps to reproduce
1. Start on the home page (App Router)
2. Click any of the "pages route" links
3. Observe that regardless of which link you click, you always get the query parameter from the first link (`link=1`)
### Current vs. Expected behavior
### Expected behavior
Each Link component should navigate to the Pages Router page with its specified query parameter intact:
- Clicking "pages route link 1" should navigate to `/pages-route?link=1`
- Clicking "pages route link 2" should navigate to `/pages-route?link=2`
- Clicking "pages route link 3" should navigate to `/pages-route?link=3`
### Actual behavior
- First click works correctly (navigates to `/pages-route?link=1`)
- All subsequent clicks also navigate to `/pages-route?link=1`, ignoring their specified query parameters
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.0.0: Thu Jun 20 20:38:33 PDT 2024; root:xnu-11215.0.115.501.3~1/RELEASE_ARM64_T8112
Available memory (MB): 24576
Available CPU cores: 8
Binaries:
Node: 20.10.0
npm: 10.2.3
Yarn: 1.22.17
pnpm: N/A
Relevant Packages:
next: 15.0.3 // Latest available version is detected (15.0.3).
eslint-config-next: 15.0.3
react: 19.0.0-rc-66855b96-20241106
react-dom: 19.0.0-rc-66855b96-20241106
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Navigation, Pages Router
### Which stage(s) are affected? (Select all that apply)
Vercel (Deployed)
### Additional context
## Description
When navigating from a page in the App Router to a page in the Pages Router using Next.js Link components, query parameters are not being properly maintained. The first query parameter encountered becomes "sticky" and is used for all subsequent navigations to the Pages Router page, regardless of the actual query parameters specified in the Links and in the DOM.
### Additional observations
- I tested the v15.0.4-canary.13 version and got the same result.
- This issue does not occur in local development. It only occurs on vercel.
- This only occurs when navigating from App Router to Pages Router
- Links to App Router pages work correctly (query parameters are maintained)
- Opening links in new tabs works correctly (query parameters are maintained)
- This issue is blocking migration from Pages Router to App Router in our production application
## Environment
- Next.js version: 15.0.3
- React version: 19.0.0-rc-66855b96-20241106
- Node.js version: v20.10.0
## Notes
This appears to be a client-side navigation issue, as:
1. Server-side navigation (new tabs) works correctly
2. The DOM shows the correct href attributes on the links, but the navigation results are incorrect
Would appreciate guidance on whether this is a known issue or if there's a recommended workaround while we gradually migrate from Pages Router to App Router. | bug,Navigation,Pages Router | low | Critical |
2,664,318,665 | godot | Godot Randomly Crashes even when idle, Event Viewer says it's dxgi faulting | ### Tested versions
Reproducible in: 4.4 Dev, 4.3 stable, both in mono and default
### System information
Godot v4.4.dev4.mono - Windows 10.0.19045 - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1650 (NVIDIA; 32.0.15.6614) - 12th Gen Intel(R) Core(TM) i5-12400F (12 threads)
### Issue description
Godot will crash at random moments. At first I thought it was due to having Visual Studio open in the background, but this time I was just working on a blend tree and it crashed; then I opened the project again, and this time it crashed within seconds. I've tried everything: I've tried logging any possible crash by using the console and verbose output, but I get nothing. I tried working on 4.3 stable, thinking it was the dev version, but no — it was still crashing, even on a new project. The ONLY source I can get a reason for the crash from is the Event Viewer, which says the crash was due to a faulting dxgi.dll, which is pretty weird. I even tried updating DirectX and using Windows tools (for example CHKDSK, which the error recommends), and nothing seems to be able to fix it. This doesn't happen with any other app, and I've used both of my SSDs to run Godot just in case that was the fault. My last resort was coming here. Here are the 3 errors I get in Event Viewer:
1st one:
Application: Godot_v4.4-dev4_mono_win64.exe
CoreCLR Version: 8.0.1024.46610
.NET Version: 8.0.10
Description: The process was terminated due to an unhandled exception.
Exception Info: exception code c000001d, exception address 00007FF8113C18C3
2nd one:
Faulting application name: Godot_v4.4-dev4_mono_win64.exe, version: 4.4.0.0, time stamp: 0x672de8b1
Faulting module name: dxgi.dll, version: 10.0.19041.5072, time stamp: 0x9198c6b9
Exception code: 0xc000001d
Fault offset: 0x00000000000018c3
Faulting process id: 0x1d78
Faulting application start time: 0x01db382618b30bfe
Faulting application path: D:\Godot Engine\Godot_v4.4-dev4_mono_win64\Godot_v4.4-dev4_mono_win64.exe
Faulting module path: C:\WINDOWS\SYSTEM32\dxgi.dll
Report Id: 54870284-251c-45be-a8ea-970d1ec91df1
Faulting package full name:
Faulting package-relative application ID:
3rd one:
Windows cannot access the file for one of the following reasons: there is a problem with the network connection, the disk that the file is stored on, or the storage drivers installed on this computer; or the disk is missing. Windows closed the program Godot Engine because of this error.
Program: Godot Engine
File:
The error value is listed in the Additional Data section.
User Action
1. Open the file again. This situation might be a temporary problem that corrects itself when the program runs again.
2. If the file still cannot be accessed and
- It is on the network, your network administrator should verify that there is not a problem with the network and that the server can be contacted.
- It is on a removable disk, for example, a floppy disk or CD-ROM, verify that the disk is fully inserted into the computer.
3. Check and repair the file system by running CHKDSK. To run CHKDSK, click Start, click Run, type CMD, and then click OK. At the command prompt, type CHKDSK /F, and then press ENTER.
4. If the problem persists, restore the file from a backup copy.
5. Determine whether other files on the same disk can be opened. If not, the disk might be damaged. If it is a hard disk, contact your administrator or computer hardware vendor for further assistance.
Additional Data
Error value: 00000000
Disk type: 0
### Steps to reproduce
The two easiest reproductions I found, which worked most of the time: first, go change the rendering device — as soon as the dropdown appeared, it would crash; second, have a C# script open in the background in Visual Studio. It happened even when I was AFK.
### Minimal reproduction project (MRP)
pretty much any project | bug,topic:editor,needs testing,crash | low | Critical |
2,664,407,741 | PowerToys | Bug deactivation of Ctrl when keyboard manager turned on | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
I assigned a single shortcut to Play/Pause in order to pause and play music on Spotify without leaving my current window.
### ✔️ Expected Behavior
I was expecting the shortcut to add play/pause functionality for Spotify.
### ❌ Actual Behavior
It overrides other shortcuts from other apps while running. Sometimes it even deactivates system shortcuts like Ctrl+V and Ctrl+C.
### Other Software
With shortcuts from Steelseries GG app (version 75.0.0). | Issue-Bug,Needs-Triage | low | Critical |
2,664,422,625 | ui | [bug]: Installing Dependencies issue | ### Describe the bug
I'm using the latest version of Next.js and attempted to integrate ShadCN, but I keep encountering errors. I've tried several approaches, including adding this to my package.json:
"overrides": {
"react-is": "^19.0.0-rc-69d4b800-20241021"
}
I also tried manually installing dependencies and adjusting the React version, but the errors persist. Unfortunately, I haven't found any reliable solutions online, and the official documentation doesn't offer much help either.
### Affected component/components
installation
### How to reproduce
N/A
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
N/A
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,664,503,079 | vscode | Editor GPU: Render line numbers on the GPU | null | plan-item,performance,editor-gpu | low | Minor |
2,664,506,669 | vscode | Editor GPU: Render link underline decorations on GPU | Related: https://github.com/microsoft/vscode/issues/227094

Blocked on https://github.com/microsoft/vscode/issues/234473
Link underlines specifically are also tricky as they use multiple classes. It's unclear if the current system will handle that yet properly. | plan-item,editor-gpu | low | Minor |
2,664,507,388 | vscode | Editor GPU: Render deprecated strikethrough decorations on the GPU | 
Blocked on https://github.com/microsoft/vscode/issues/234473 | plan-item,editor-gpu | low | Minor |
2,664,508,735 | PowerToys | Remapped shortcuts in Keyboard Manager have no effect | ### Microsoft PowerToys version
0.86
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
My remapped shortcut in Keyboard Manager has no effect: I configured Alt+W to send Ctrl+W, but it does nothing.
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Product-Keyboard Shortcut Manager,Needs-Triage | low | Minor |
2,664,602,440 | rust | compiletest: cdb version detection locally seems funky | While investigating #133107, it seems that even if I have a suitably-versioned `cdb` e.g. `10.0.26100.2161` in `PATH`, compiletest cdb version detection seems to consider that I failed to satisfy
```
//@ min-cdb-version: 10.0.26100.2161
``` | A-debuginfo,C-enhancement,T-bootstrap,A-docs,A-compiletest | low | Critical |
2,664,644,777 | pytorch | fr_trace usability issues | Tried to analyze the output from https://github.com/pytorch/pytorch/issues/140563 and ran into a number of issues
1) Interestingly, I had an `fr_trace` in /usr/local/bin/fr_trace on my devserver. Not sure how it got there, but it confused me for a while because it looks like a modern version of the script. BUT it requires a 'mast job id' and I don't have one.
Then, I tried manually running python tools/flight_recorder/fr_trace.py from the tip of the repo:
2) read_dir complains `args.folder` is nonexistent. Indeed, args contains only `trace_dir`, not `folder`. Seems like a regression?
3) the dump format from the user in #140563 is impossible for the analyzer to parse. The filenames have both hostname and rank number in them, so there is no 'common prefix'. How did they get dumped in this format?
e.g.
...
nccl_trace_rank_container-node02_5
...
nccl_trace_rank_container-node07_40
...
Gave up at this point..
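For reference, a small plain-Python illustration (filenames copied from the dump above, helper logic hypothetical) of why a common-prefix assumption breaks on these names — the hostname varies between the fixed prefix and the rank, so stripping the common prefix leaves garbage instead of a rank:

```python
import os

# Two of the dumped filenames: both hostname and rank are embedded.
files = [
    "nccl_trace_rank_container-node02_5",
    "nccl_trace_rank_container-node07_40",
]

# The common prefix absorbs part of the hostname digits.
prefix = os.path.commonprefix(files)
print(prefix)  # "nccl_trace_rank_container-node0"

# What's left after the prefix is not a bare rank number.
suffixes = [f[len(prefix):] for f in files]
print(suffixes)  # ["2_5", "7_40"]

# Splitting on the last underscore instead would recover the ranks.
ranks = [int(f.rsplit("_", 1)[1]) for f in files]
print(ranks)  # [5, 40]
```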
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @d4l3k @c-p-i-o | oncall: distributed,triaged | low | Minor |
2,664,644,815 | rust | Debuginfo tests are highly sensitive to debugger versions and thus very fragile | AFAIK debugger visualization output usually don't guarantee output format stability, which as a corollary of #126092, this means that our debuginfo test suite is extremely fragile and prone to breakage from debugger version updates in CI. | A-testsuite,A-debuginfo,T-compiler,T-bootstrap,C-bug,A-compiletest | low | Critical |
2,664,645,517 | rust | Tracking issue for release notes of #46571: Tracking issue for `const_size_of_val` and `const_align_of_val` |
This issue tracks the release notes text for #46571.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Const Stabilized APIs
- [`mem::size_of_val`](https://doc.rust-lang.org/stable/std/mem/fn.size_of_val.html)
- [`mem::align_of_val`](https://doc.rust-lang.org/stable/std/mem/fn.align_of_val.html)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @Gankra -- origin issue/PR authors and assignees for starting to draft text
| T-lang,relnotes,A-const-eval,relnotes-tracking-issue | low | Minor |
2,664,646,606 | godot | `SpinBox` fires `ValueChanged` when focus changes to a `TreeItem` and it shouldn't | ### Tested versions
v4.3.stable.mono.official [77dcf97d8]
tested on Windows11 and Ubuntu
### System information
v4.3.stable.mono.official
### Issue description
***Update:*** If you checked this issue earlier, the MRP link was broken. I fixed it.
We have a `SpinBox` that edits a selected value. The value is changed when `SpinBox.ValueChanged` is called.
There are 2 ways of selecting a value: using `Buttons` or via a `Tree`
When selecting what value to edit, we update the `SpinBox` to that value with `SetValueNoSignal`.
So when the `SpinBox` is focused and selecting the value by clicking on a button, thus moving the focus to the button, everything works as expected (the `SpinBox` shows the selected value and no `ValueChanged` signal is fired).
But! When selecting the value from a `Tree`, the `SpinBox` fires a `ValueChanged` signal with the old value (after its value was changed to the new one). This is weird and should NOT happen. I think it happens when the focus changes to a `TreeItem`, but I could be wrong.
### Steps to reproduce
1. Run the attached project
2. Select the first or the second value to edit using buttons or using the tree
3. The spinbox updates with the selected value
4. Click on the spinbox to set the focus to it
5. if you select the value from the tree, the spinbox will fire a ValueChanged signal
Notice that if you select another value by buttons, no signal is fired by the SpinBox
### Minimal reproduction project (MRP)
Minimal project:
[spinbox_bug.zip](https://github.com/user-attachments/files/17790457/spinbox_bug.zip)
| bug,topic:gui | low | Critical |
2,664,650,316 | pytorch | Implement Cut Cross Entropy | ### 🚀 The feature, motivation and pitch
https://github.com/apple/ml-cross-entropy is a better version of cross entropy for large vocab sizes; it allows small models to have a dramatically higher batch size than otherwise. Implementing this conditionally would be trivial based on the input/output dimensions and the dtype of the input. There is a pure Triton implementation available that we could use in a compiler pass, or we could add a dynamic dispatch to the faster version for A100s.
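For context on where the memory win comes from: the loss only needs `logsumexp(logits) - logits[target]`, and the log-sum-exp can be streamed over vocab chunks so the full `[batch, vocab]` softmax tensor is never materialized. A minimal pure-Python sketch of that streaming reduction (function names are illustrative, not the library's API):

```python
import math

def chunked_logsumexp(logits, chunk_size):
    # Streaming log-sum-exp: walk the vocab dimension in chunks so the
    # full exp(logits) vector never has to exist at once.
    running_max = -math.inf
    running_sum = 0.0
    for start in range(0, len(logits), chunk_size):
        chunk = logits[start:start + chunk_size]
        new_max = max(running_max, max(chunk))
        # Rescale the running sum to the new max before adding this chunk.
        running_sum = running_sum * math.exp(running_max - new_max)
        running_sum += sum(math.exp(x - new_max) for x in chunk)
        running_max = new_max
    return running_max + math.log(running_sum)

def chunked_cross_entropy(logits, target, chunk_size=1024):
    # CE loss for one example: logsumexp over the vocab minus the target logit.
    return chunked_logsumexp(logits, chunk_size) - logits[target]
```

The real kernel additionally fuses this with the final projection matmul, but the chunked reduction is the reason peak memory drops for large vocab sizes.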
### Alternatives
_No response_
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | feature,module: nn,triaged,module: python frontend | low | Minor |
2,664,652,650 | go | cmd/compile: provide default PGO profiles that cover the runtime | Given no PGO coverage should have minimal impact and there seems to be limited ways to use the runtime codepaths.
It *might* be worth collecting average profiles for some go programs, filter for the runtime (or maybe std) and apply it if no profile at all were provided. This wouldn't make your code faster, but might for std and runtime.
Both seems benchmarkable claims, then we could discuss the process costs to collecting and maintaining such profiles.
similar to #70291 this idea came up during a brief PGO discussion at the Go contributor summit at golab.io | Performance,NeedsInvestigation,FeatureRequest,compiler/runtime | low | Minor |
2,664,725,965 | stable-diffusion-webui | [Bug]: OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>` | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [X] The issue has been reported before but has not been fixed yet
### What happened?
error when trying to load the pony diffusion v6xl model in automatic1111
### Steps to reproduce the problem
launch webui
trying to load pony diffusion v6xl
fail
### What should have happened?
webui should load the model successfully
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
{
"Platform": "Windows-10-10.0.22621-SP0",
"Python": "3.10.6",
"Version": "v1.10.1-amd-17-g745b20b7",
"Commit": "745b20b7c69a1fa10ee789b3981484389ac80aef",
"Git status": "On branch master\nYour branch is up to date with 'origin/master'.\n\nChanges not staged for commit:\n (use \"git add <file>...\" to update what will be committed)\n (use \"git restore <file>...\" to discard changes in working directory)\n\tmodified: modules/sd_disable_initialization.py\n\tmodified: webui-user.bat\n\nUntracked files:\n (use \"git add <file>...\" to include in what will be committed)\n\tgitattributes\n\nno changes added to commit (use \"git add\" and/or \"git commit -a\")",
"Script path": "D:\\sd\\stable-diffusion-webui-amdgpu",
"Data path": "D:\\sd\\stable-diffusion-webui-amdgpu",
"Extensions dir": "D:\\sd\\stable-diffusion-webui-amdgpu\\extensions",
"Checksum": "046d87d71c59007d22cde88971947ae9218414149ed5def2376c40ddd53d80aa",
"Commandline": [
"launch.py",
"--share",
"--use-directml",
"--opt-sub-quad-attention",
"--no-half",
"--disable-nan-check",
"--autolaunch"
],
"Torch env info": {
"torch_version": "2.4.1+cpu",
"is_debug_build": "False",
"cuda_compiled_version": null,
"gcc_version": null,
"clang_version": null,
"cmake_version": null,
"os": "Microsoft Windows 11 Pro",
"libc_version": "N/A",
"python_version": "3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] (64-bit runtime)",
"python_platform": "Windows-10-10.0.22621-SP0",
"is_cuda_available": "False",
"cuda_runtime_version": null,
"cuda_module_loading": "N/A",
"nvidia_driver_version": null,
"nvidia_gpu_models": null,
"cudnn_version": null,
"pip_version": "pip3",
"pip_packages": [
"numpy==1.26.2",
"onnx==1.16.2",
"onnxruntime==1.20.0",
"onnxruntime-directml==1.20.0",
"open-clip-torch==2.20.0",
"pytorch-lightning==1.9.4",
"torch==2.4.1",
"torch-directml==0.2.5.dev240914",
"torchdiffeq==0.2.3",
"torchmetrics==1.6.0",
"torchsde==0.2.6",
"torchvision==0.19.1"
],
"conda_packages": null,
"hip_compiled_version": "N/A",
"hip_runtime_version": "N/A",
"miopen_runtime_version": "N/A",
"caching_allocator_config": "",
"is_xnnpack_available": "True",
"cpu_info": [
"Architecture=9",
"CurrentClockSpeed=3300",
"DeviceID=CPU0",
"Family=206",
"L2CacheSize=5120",
"L2CacheSpeed=",
"Manufacturer=GenuineIntel",
"MaxClockSpeed=3300",
"Name=12th Gen Intel(R) Core(TM) i3-12100F",
"ProcessorType=3",
"Revision="
]
},
"Exceptions": [
{
"exception": "None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'\nIf this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`",
"traceback": [
[
"D:\\sd\\stable-diffusion-webui-amdgpu\\modules\\sd_models.py, line 831, load_model",
"sd_model = instantiate_from_config(sd_config.model, state_dict)"
],
[
"D:\\sd\\stable-diffusion-webui-amdgpu\\modules\\sd_models.py, line 775, instantiate_from_config",
"return constructor(**params)"
],
[
"D:\\sd\\stable-diffusion-webui-amdgpu\\repositories\\generative-models\\sgm\\models\\diffusion.py, line 61, __init__",
"self.conditioner = instantiate_from_config("
],
[
"D:\\sd\\stable-diffusion-webui-amdgpu\\repositories\\generative-models\\sgm\\util.py, line 175, instantiate_from_config",
"return get_obj_from_str(config[\"target\"])(**config.get(\"params\", dict()))"
],
[
"D:\\sd\\stable-diffusion-webui-amdgpu\\repositories\\generative-models\\sgm\\modules\\encoders\\modules.py, line 88, __init__",
"embedder = instantiate_from_config(embconfig)"
],
[
"D:\\sd\\stable-diffusion-webui-amdgpu\\repositories\\generative-models\\sgm\\util.py, line 175, instantiate_from_config",
"return get_obj_from_str(config[\"target\"])(**config.get(\"params\", dict()))"
],
[
"D:\\sd\\stable-diffusion-webui-amdgpu\\repositories\\generative-models\\sgm\\modules\\encoders\\modules.py, line 361, __init__",
"self.transformer = CLIPTextModel.from_pretrained(version)"
],
[
"D:\\sd\\stable-diffusion-webui-amdgpu\\modules\\sd_disable_initialization.py, line 68, CLIPTextModel_from_pretrained",
"res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)"
],
[
"D:\\sd\\stable-diffusion-webui-amdgpu\\venv\\lib\\site-packages\\transformers\\modeling_utils.py, line 3506, from_pretrained",
"resolved_config_file = cached_file("
],
[
"D:\\sd\\stable-diffusion-webui-amdgpu\\venv\\lib\\site-packages\\transformers\\utils\\hub.py, line 426, cached_file",
"raise EnvironmentError("
]
]
},
{
"exception": "None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'\nIf this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`",
"traceback": [
[
"D:\\sd\\stable-diffusion-webui-amdgpu\\modules\\sd_models.py, line 831, load_model",
"sd_model = instantiate_from_config(sd_config.model, state_dict)"
],
[
"D:\\sd\\stable-diffusion-webui-amdgpu\\modules\\sd_models.py, line 775, instantiate_from_config",
"return constructor(**params)"
],
[
"D:\\sd\\stable-diffusion-webui-amdgpu\\repositories\\stable-diffusion-stability-ai\\ldm\\models\\diffusion\\ddpm.py, line 563, __init__",
"self.instantiate_cond_stage(cond_stage_config)"
],
[
"D:\\sd\\stable-diffusion-webui-amdgpu\\repositories\\stable-diffusion-stability-ai\\ldm\\models\\diffusion\\ddpm.py, line 630, instantiate_cond_stage",
"model = instantiate_from_config(config)"
],
[
"D:\\sd\\stable-diffusion-webui-amdgpu\\repositories\\stable-diffusion-stability-ai\\ldm\\util.py, line 89, instantiate_from_config",
"return get_obj_from_str(config[\"target\"])(**config.get(\"params\", dict()))"
],
[
"D:\\sd\\stable-diffusion-webui-amdgpu\\repositories\\stable-diffusion-stability-ai\\ldm\\modules\\encoders\\modules.py, line 104, __init__",
"self.transformer = CLIPTextModel.from_pretrained(version)"
],
[
"D:\\sd\\stable-diffusion-webui-amdgpu\\modules\\sd_disable_initialization.py, line 68, CLIPTextModel_from_pretrained",
"res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)"
],
[
"D:\\sd\\stable-diffusion-webui-amdgpu\\venv\\lib\\site-packages\\transformers\\modeling_utils.py, line 3506, from_pretrained",
"resolved_config_file = cached_file("
],
[
"D:\\sd\\stable-diffusion-webui-amdgpu\\venv\\lib\\site-packages\\transformers\\utils\\hub.py, line 426, cached_file",
"raise EnvironmentError("
]
]
}
],
"CPU": {
"model": "Intel64 Family 6 Model 151 Stepping 5, GenuineIntel",
"count logical": 8,
"count physical": 4
},
"RAM": {
"total": "16GB",
"used": "12GB",
"free": "4GB"
},
"Extensions": [],
"Inactive extensions": [],
"Environment": {
"COMMANDLINE_ARGS": "--share --use-directml --opt-sub-quad-attention --no-half --disable-nan-check --autolaunch",
"GRADIO_ANALYTICS_ENABLED": "False"
},
"Config": {
"ldsr_steps": 100,
"ldsr_cached": false,
"SCUNET_tile": 256,
"SCUNET_tile_overlap": 8,
"SWIN_tile": 192,
"SWIN_tile_overlap": 8,
"SWIN_torch_compile": false,
"hypertile_enable_unet": false,
"hypertile_enable_unet_secondpass": false,
"hypertile_max_depth_unet": 3,
"hypertile_max_tile_unet": 256,
"hypertile_swap_size_unet": 3,
"hypertile_enable_vae": false,
"hypertile_max_depth_vae": 3,
"hypertile_max_tile_vae": 128,
"hypertile_swap_size_vae": 3,
"sd_model_checkpoint": "ponyDiffusionV6XL_v6StartWithThisOne.safetensors [67ab2fd8ec]",
"sd_checkpoint_hash": "6ce0161689b3853acaa03779ec93eafe75a02f4ced659bee03f50797806fa2fa",
"outdir_samples": "",
"outdir_txt2img_samples": "outputs\\txt2img-images",
"outdir_img2img_samples": "outputs\\img2img-images",
"outdir_extras_samples": "outputs\\extras-images",
"outdir_grids": "",
"outdir_txt2img_grids": "outputs\\txt2img-grids",
"outdir_img2img_grids": "outputs\\img2img-grids",
"outdir_save": "log\\images",
"outdir_init_images": "outputs\\init-images",
"onnx_cached_models_path": "D:\\sd\\stable-diffusion-webui-amdgpu\\models\\ONNX\\cache",
"onnx_temp_dir": "D:\\sd\\stable-diffusion-webui-amdgpu\\models\\ONNX\\temp",
"samples_save": true,
"samples_format": "png",
"samples_filename_pattern": "",
"save_images_add_number": true,
"save_images_replace_action": "Replace",
"grid_save": true,
"grid_format": "png",
"grid_extended_filename": false,
"grid_only_if_multiple": true,
"grid_prevent_empty_spots": false,
"grid_zip_filename_pattern": "",
"n_rows": -1,
"font": "",
"grid_text_active_color": "#000000",
"grid_text_inactive_color": "#999999",
"grid_background_color": "#ffffff",
"save_images_before_face_restoration": false,
"save_images_before_highres_fix": false,
"save_images_before_color_correction": false,
"save_mask": false,
"save_mask_composite": false,
"jpeg_quality": 80,
"webp_lossless": false,
"export_for_4chan": true,
"img_downscale_threshold": 4.0,
"target_side_length": 4000.0,
"img_max_size_mp": 200.0,
"use_original_name_batch": true,
"use_upscaler_name_as_suffix": false,
"save_selected_only": true,
"save_write_log_csv": true,
"save_init_img": false,
"temp_dir": "",
"clean_temp_dir_at_start": false,
"save_incomplete_images": false,
"notification_audio": true,
"notification_volume": 100,
"save_to_dirs": true,
"grid_save_to_dirs": true,
"use_save_to_dirs_for_ui": false,
"directories_filename_pattern": "[date]",
"directories_max_prompt_words": 8,
"auto_backcompat": true,
"use_old_emphasis_implementation": false,
"use_old_karras_scheduler_sigmas": false,
"no_dpmpp_sde_batch_determinism": false,
"use_old_hires_fix_width_height": false,
"hires_fix_use_firstpass_conds": false,
"use_old_scheduling": false,
"use_downcasted_alpha_bar": false,
"refiner_switch_by_sample_steps": false,
"lora_functional": false,
"extra_networks_show_hidden_directories": true,
"extra_networks_dir_button_function": false,
"extra_networks_hidden_models": "When searched",
"extra_networks_default_multiplier": 1,
"extra_networks_card_width": 0.0,
"extra_networks_card_height": 0.0,
"extra_networks_card_text_scale": 1,
"extra_networks_card_show_desc": true,
"extra_networks_card_description_is_html": false,
"extra_networks_card_order_field": "Path",
"extra_networks_card_order": "Ascending",
"extra_networks_tree_view_style": "Dirs",
"extra_networks_tree_view_default_enabled": true,
"extra_networks_tree_view_default_width": 180.0,
"extra_networks_add_text_separator": " ",
"ui_extra_networks_tab_reorder": "",
"textual_inversion_print_at_load": false,
"textual_inversion_add_hashes_to_infotext": true,
"sd_hypernetwork": "None",
"sd_lora": "None",
"lora_preferred_name": "Alias from file",
"lora_add_hashes_to_infotext": true,
"lora_bundled_ti_to_infotext": true,
"lora_show_all": false,
"lora_hide_unknown_for_versions": [],
"lora_in_memory_limit": 0,
"lora_not_found_warning_console": false,
"lora_not_found_gradio_warning": false,
"onnx_enable": false,
"diffusers_pipeline": "ONNX Stable Diffusion",
"diffusers_vae_upcast": "default",
"onnx_execution_provider": "DmlExecutionProvider",
"onnx_cache_converted": true,
"olive_enable": false,
"olive_submodels": [],
"olive_float16": true,
"olive_vae_encoder_float32": false,
"olive_static_dims": true,
"olive_cache_optimized": true,
"cross_attention_optimization": "Automatic",
"s_min_uncond": 0,
"s_min_uncond_all": false,
"token_merging_ratio": 0,
"token_merging_ratio_img2img": 0,
"token_merging_ratio_hr": 0,
"pad_cond_uncond": false,
"pad_cond_uncond_v0": false,
"persistent_cond_cache": true,
"batch_cond_uncond": true,
"fp8_storage": "Disable",
"cache_fp16_weight": false,
"directml_memory_provider": "Performance Counter",
"hide_samplers": [],
"eta_ddim": 0,
"eta_ancestral": 1,
"ddim_discretize": "uniform",
"s_churn": 0,
"s_tmin": 0,
"s_tmax": 0,
"s_noise": 1,
"sigma_min": 0.0,
"sigma_max": 0.0,
"rho": 0.0,
"eta_noise_seed_delta": 0,
"always_discard_next_to_last_sigma": false,
"sgm_noise_multiplier": false,
"uni_pc_variant": "bh1",
"uni_pc_skip_type": "time_uniform",
"uni_pc_order": 3,
"uni_pc_lower_order_final": true,
"sd_noise_schedule": "Default",
"skip_early_cond": 0,
"beta_dist_alpha": 0.6,
"beta_dist_beta": 0.6,
"sd_checkpoints_limit": 1,
"sd_checkpoints_keep_in_cpu": true,
"sd_checkpoint_cache": 0,
"sd_unet": "Automatic",
"enable_quantization": false,
"emphasis": "Original",
"enable_batch_seeds": true,
"comma_padding_backtrack": 20,
"sdxl_clip_l_skip": false,
"CLIP_stop_at_last_layers": 1,
"upcast_attn": false,
"randn_source": "GPU",
"tiling": false,
"hires_fix_refiner_pass": "second pass",
"enable_prompt_comments": true,
"sd3_enable_t5": false,
"sdxl_crop_top": 0.0,
"sdxl_crop_left": 0.0,
"sdxl_refiner_low_aesthetic_score": 2.5,
"sdxl_refiner_high_aesthetic_score": 6.0,
"sd_vae_checkpoint_cache": 0,
"sd_vae": "sdxl_vae.safetensors",
"sd_vae_overrides_per_model_preferences": true,
"auto_vae_precision_bfloat16": false,
"auto_vae_precision": true,
"sd_vae_encode_method": "Full",
"sd_vae_decode_method": "Full",
"inpainting_mask_weight": 1,
"initial_noise_multiplier": 1,
"img2img_extra_noise": 0,
"img2img_color_correction": false,
"img2img_fix_steps": false,
"img2img_background_color": "#ffffff",
"img2img_editor_height": 720,
"img2img_sketch_default_brush_color": "#ffffff",
"img2img_inpaint_mask_brush_color": "#ffffff",
"img2img_inpaint_sketch_default_brush_color": "#ffffff",
"return_mask": false,
"return_mask_composite": false,
"img2img_batch_show_results_limit": 32,
"overlay_inpaint": true,
"return_grid": true,
"do_not_show_images": false,
"js_modal_lightbox": true,
"js_modal_lightbox_initially_zoomed": true,
"js_modal_lightbox_gamepad": false,
"js_modal_lightbox_gamepad_repeat": 250.0,
"sd_webui_modal_lightbox_icon_opacity": 1,
"sd_webui_modal_lightbox_toolbar_opacity": 0.9,
"gallery_height": "",
"open_dir_button_choice": "Subdirectory",
"enable_pnginfo": true,
"save_txt": false,
"add_model_name_to_info": true,
"add_model_hash_to_info": true,
"add_vae_name_to_info": true,
"add_vae_hash_to_info": true,
"add_user_name_to_info": false,
"add_version_to_infotext": true,
"disable_weights_auto_swap": true,
"infotext_skip_pasting": [],
"infotext_styles": "Apply if any",
"show_progressbar": true,
"live_previews_enable": true,
"live_previews_image_format": "png",
"show_progress_grid": true,
"show_progress_every_n_steps": 10,
"show_progress_type": "Approx NN",
"live_preview_allow_lowvram_full": false,
"live_preview_content": "Prompt",
"live_preview_refresh_period": 1000.0,
"live_preview_fast_interrupt": false,
"js_live_preview_in_modal_lightbox": false,
"prevent_screen_sleep_during_generation": true,
"keyedit_precision_attention": 0.1,
"keyedit_precision_extra": 0.05,
"keyedit_delimiters": ".,\\/!?%^*;:{}=`~() ",
"keyedit_delimiters_whitespace": [
"Tab",
"Carriage Return",
"Line Feed"
],
"keyedit_move": true,
"disable_token_counters": false,
"include_styles_into_token_counters": true,
"extra_options_txt2img": [],
"extra_options_img2img": [],
"extra_options_cols": 1,
"extra_options_accordion": false,
"compact_prompt_box": false,
"samplers_in_dropdown": true,
"dimensions_and_batch_together": true,
"sd_checkpoint_dropdown_use_short": false,
"hires_fix_show_sampler": false,
"hires_fix_show_prompts": false,
"txt2img_settings_accordion": false,
"img2img_settings_accordion": false,
"interrupt_after_current": true,
"localization": "None",
"quicksettings_list": [
"sd_model_checkpoint",
"sd_vae",
"CLIP_stop_at_last_layers"
],
"ui_tab_order": [],
"hidden_tabs": [],
"ui_reorder_list": [],
"gradio_theme": "Default",
"gradio_themes_cache": true,
"show_progress_in_title": true,
"send_seed": true,
"send_size": true,
"enable_reloading_ui_scripts": false,
"api_enable_requests": true,
"api_forbid_local_requests": true,
"api_useragent": "",
"prioritized_callbacks_app_started": [],
"prioritized_callbacks_model_loaded": [],
"prioritized_callbacks_ui_settings": [],
"prioritized_callbacks_infotext_pasted": [],
"prioritized_callbacks_script_unloaded": [],
"prioritized_callbacks_before_ui": [],
"prioritized_callbacks_list_optimizers": [],
"prioritized_callbacks_before_token_counter": [],
"prioritized_callbacks_script_before_process": [],
"prioritized_callbacks_script_process": [],
"prioritized_callbacks_script_post_sample": [],
"prioritized_callbacks_script_on_mask_blend": [],
"prioritized_callbacks_script_postprocess_maskoverlay": [],
"profiling_enable": false,
"profiling_activities": [
"CPU"
],
"profiling_record_shapes": true,
"profiling_profile_memory": true,
"profiling_with_stack": true,
"profiling_filename": "trace.json",
"auto_launch_browser": "Local",
"enable_console_prompts": false,
"show_warnings": false,
"show_gradio_deprecation_warnings": true,
"memmon_poll_rate": 8,
"samples_log_stdout": false,
"multiple_tqdm": true,
"enable_upscale_progressbar": true,
"print_hypernet_extra": false,
"list_hidden_files": true,
"disable_mmap_load_safetensors": false,
"hide_ldm_prints": true,
"dump_stacks_on_signal": false,
"face_restoration": false,
"face_restoration_model": "CodeFormer",
"code_former_weight": 0.5,
"face_restoration_unload": false,
"postprocessing_enable_in_main_ui": [],
"postprocessing_disable_in_extras": [],
"postprocessing_operation_order": [],
"upscaling_max_images_in_cache": 5,
"postprocessing_existing_caption_action": "Ignore",
"ESRGAN_tile": 192,
"ESRGAN_tile_overlap": 8,
"realesrgan_enabled_models": [
"R-ESRGAN 4x+",
"R-ESRGAN 4x+ Anime6B"
],
"dat_enabled_models": [
"DAT x2",
"DAT x3",
"DAT x4"
],
"DAT_tile": 192,
"DAT_tile_overlap": 8,
"set_scale_by_when_changing_upscaler": false,
"unload_models_when_training": false,
"pin_memory": false,
"save_optimizer_state": false,
"save_training_settings_to_txt": true,
"dataset_filename_word_regex": "",
"dataset_filename_join_string": " ",
"training_image_repeats_per_epoch": 1,
"training_write_csv_every": 500.0,
"training_xattention_optimizations": false,
"training_enable_tensorboard": false,
"training_tensorboard_save_images": false,
"training_tensorboard_flush_every": 120.0,
"canvas_hotkey_zoom": "Alt",
"canvas_hotkey_adjust": "Ctrl",
"canvas_hotkey_shrink_brush": "Q",
"canvas_hotkey_grow_brush": "W",
"canvas_hotkey_move": "F",
"canvas_hotkey_fullscreen": "S",
"canvas_hotkey_reset": "R",
"canvas_hotkey_overlap": "O",
"canvas_show_tooltip": true,
"canvas_auto_expand": true,
"canvas_blur_prompt": false,
"canvas_disabled_functions": [
"Overlap"
],
"interrogate_keep_models_in_memory": false,
"interrogate_return_ranks": false,
"interrogate_clip_num_beams": 1,
"interrogate_clip_min_length": 24,
"interrogate_clip_max_length": 48,
"interrogate_clip_dict_limit": 1500.0,
"interrogate_clip_skip_categories": [],
"interrogate_deepbooru_score_threshold": 0.5,
"deepbooru_sort_alpha": true,
"deepbooru_use_spaces": true,
"deepbooru_escape": true,
"deepbooru_filter_tags": ""
},
"Startup": {
"total": 16.544913053512573,
"records": {
"initial startup": 0.021877288818359375,
"prepare environment/checks": 0.019692659378051758,
"prepare environment/git version info": 0.09234809875488281,
"prepare environment/clone repositores": 0.1839299201965332,
"prepare environment/run extensions installers": 0.0,
"prepare environment": 14.426397800445557,
"launcher": 0.0019998550415039062,
"import torch": 0.0,
"import gradio": 0.0,
"setup paths": 0.0009992122650146484,
"import ldm": 0.0021588802337646484,
"import sgm": 0.0,
"initialize shared": 1.1551156044006348,
"other imports": 0.043064117431640625,
"opts onchange": 0.0,
"setup SD model": 0.0010004043579101562,
"setup codeformer": 0.0015113353729248047,
"setup gfpgan": 0.01451253890991211,
"set samplers": 0.0009946823120117188,
"list extensions": 0.0045125484466552734,
"restore config state file": 0.0,
"list SD models": 0.0312657356262207,
"list localizations": 0.0010027885437011719,
"load scripts/custom_code.py": 0.0030031204223632812,
"load scripts/img2imgalt.py": 0.0009925365447998047,
"load scripts/loopback.py": 0.0,
"load scripts/outpainting_mk_2.py": 0.0010085105895996094,
"load scripts/poor_mans_outpainting.py": 0.0,
"load scripts/postprocessing_codeformer.py": 0.0009922981262207031,
"load scripts/postprocessing_gfpgan.py": 0.0005042552947998047,
"load scripts/postprocessing_upscale.py": 0.0,
"load scripts/prompt_matrix.py": 0.0,
"load scripts/prompts_from_file.py": 0.0010037422180175781,
"load scripts/sd_upscale.py": 0.0,
"load scripts/xyz_grid.py": 0.0009996891021728516,
"load scripts/ldsr_model.py": 0.22868919372558594,
"load scripts/lora_script.py": 0.19754981994628906,
"load scripts/scunet_model.py": 0.029579639434814453,
"load scripts/swinir_model.py": 0.029569149017333984,
"load scripts/hotkey_config.py": 0.0009942054748535156,
"load scripts/extra_options_section.py": 0.001001119613647461,
"load scripts/hypertile_script.py": 0.05925774574279785,
"load scripts/postprocessing_autosized_crop.py": 0.0009982585906982422,
"load scripts/postprocessing_caption.py": 0.0,
"load scripts/postprocessing_create_flipped_copies.py": 0.001260519027709961,
"load scripts/postprocessing_focal_crop.py": 0.0019989013671875,
"load scripts/postprocessing_split_oversized.py": 0.0,
"load scripts/soft_inpainting.py": 0.0,
"load scripts/comments.py": 0.02957892417907715,
"load scripts/refiner.py": 0.0,
"load scripts/sampler.py": 0.0010001659393310547,
"load scripts/seed.py": 0.0,
"load scripts": 0.5899817943572998,
"load upscalers": 0.0029916763305664062,
"refresh VAE": 0.0020041465759277344,
"refresh textual inversion templates": 0.0,
"scripts list_optimizers": 0.0020203590393066406,
"scripts list_unets": 0.0,
"reload hypernetworks": 0.0010018348693847656,
"initialize extra networks": 0.01405644416809082,
"scripts before_ui_callback": 0.001999378204345703,
"create ui": 0.3370859622955322,
"gradio launch": 6.960802316665649,
"add APIs": 0.007516384124755859,
"app_started_callback/lora_script.py": 0.0010023117065429688,
"app_started_callback": 0.0010023117065429688
}
},
"Packages": [
"accelerate==0.21.0",
"aenum==3.1.15",
"aiofiles==23.2.1",
"aiohappyeyeballs==2.4.3",
"aiohttp==3.11.2",
"aiosignal==1.3.1",
"alembic==1.14.0",
"altair==5.4.1",
"antlr4-python3-runtime==4.9.3",
"anyio==3.7.1",
"async-timeout==5.0.1",
"attrs==24.2.0",
"blendmodes==2022",
"certifi==2024.8.30",
"charset-normalizer==3.4.0",
"clean-fid==0.1.35",
"click==8.1.7",
"clip @ https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip#sha256=b5842c25da441d6c581b53a5c60e0c2127ebafe0f746f8e15561a006c6c3be6a",
"colorama==0.4.6",
"coloredlogs==15.0.1",
"colorlog==6.9.0",
"contourpy==1.3.1",
"cycler==0.12.1",
"datasets==2.14.4",
"deprecation==2.1.0",
"diffusers==0.30.2",
"dill==0.3.7",
"diskcache==5.6.3",
"einops==0.4.1",
"exceptiongroup==1.2.2",
"facexlib==0.3.0",
"fastapi==0.94.0",
"ffmpy==0.4.0",
"filelock==3.16.1",
"filterpy==1.4.5",
"flatbuffers==24.3.25",
"fonttools==4.55.0",
"frozenlist==1.5.0",
"fsspec==2024.10.0",
"ftfy==6.3.1",
"gitdb==4.0.11",
"GitPython==3.1.32",
"gradio==3.41.2",
"gradio_client==0.5.0",
"greenlet==3.1.1",
"h11==0.12.0",
"httpcore==0.15.0",
"httpx==0.24.1",
"huggingface-hub==0.26.2",
"humanfriendly==10.0",
"idna==3.10",
"imageio==2.36.0",
"importlib_metadata==8.5.0",
"importlib_resources==6.4.5",
"inflection==0.5.1",
"Jinja2==3.1.4",
"jsonmerge==1.8.0",
"jsonschema==4.23.0",
"jsonschema-specifications==2024.10.1",
"kiwisolver==1.4.7",
"kornia==0.6.7",
"lark==1.1.2",
"lazy_loader==0.4",
"lightning-utilities==0.11.8",
"llvmlite==0.43.0",
"Mako==1.3.6",
"MarkupSafe==2.1.5",
"matplotlib==3.9.2",
"mpmath==1.3.0",
"multidict==6.1.0",
"multiprocess==0.70.15",
"narwhals==1.13.5",
"networkx==3.4.2",
"numba==0.60.0",
"numpy==1.26.2",
"olive-ai==0.7.1.1",
"omegaconf==2.2.3",
"onnx==1.16.2",
"onnxruntime==1.20.0",
"onnxruntime-directml==1.20.0",
"open-clip-torch==2.20.0",
"opencv-python==4.10.0.84",
"optimum==1.23.3",
"optuna==4.1.0",
"orjson==3.10.11",
"packaging==24.2",
"pandas==2.2.3",
"piexif==1.1.3",
"Pillow==9.5.0",
"pillow-avif-plugin==1.4.3",
"pip==24.3.1",
"propcache==0.2.0",
"protobuf==3.20.2",
"psutil==5.9.5",
"pyarrow==18.0.0",
"pydantic==1.10.19",
"pydub==0.25.1",
"pyparsing==3.2.0",
"pyreadline3==3.5.4",
"python-dateutil==2.9.0.post0",
"python-multipart==0.0.17",
"pytorch-lightning==1.9.4",
"pytz==2024.2",
"PyWavelets==1.7.0",
"PyYAML==6.0.2",
"referencing==0.35.1",
"regex==2024.11.6",
"requests==2.32.3",
"resize-right==0.0.2",
"rpds-py==0.21.0",
"safetensors==0.4.2",
"scikit-image==0.21.0",
"scipy==1.14.1",
"semantic-version==2.10.0",
"sentencepiece==0.2.0",
"setuptools==69.5.1",
"six==1.16.0",
"smmap==5.0.1",
"sniffio==1.3.1",
"spandrel==0.3.4",
"spandrel_extra_arches==0.1.1",
"SQLAlchemy==2.0.36",
"starlette==0.26.1",
"sympy==1.13.3",
"tifffile==2024.9.20",
"timm==1.0.11",
"tokenizers==0.20.3",
"tomesd==0.1.3",
"torch==2.4.1",
"torch-directml==0.2.5.dev240914",
"torchdiffeq==0.2.3",
"torchmetrics==1.6.0",
"torchsde==0.2.6",
"torchvision==0.19.1",
"tqdm==4.67.0",
"trampoline==0.1.2",
"transformers==4.46.2",
"typing_extensions==4.12.2",
"tzdata==2024.2",
"urllib3==2.2.3",
"uvicorn==0.32.0",
"wcwidth==0.2.13",
"websockets==11.0.3",
"xxhash==3.5.0",
"yarl==1.17.1",
"zipp==3.21.0"
]
}
### Console logs
```Shell
Applying attention optimization: sub-quadratic... done.
Model loaded in 11.4s (load weights from disk: 0.2s, create model: 1.3s, apply weights to model: 4.6s, load VAE: 2.0s, move model to device: 1.5s, calculate empty prompt: 1.7s).
Reusing loaded model v1-5-pruned-emaonly.safetensors [6ce0161689] to load ponyDiffusionV6XL_v6StartWithThisOne.safetensors [67ab2fd8ec]
Loading weights [67ab2fd8ec] from D:\sd\stable-diffusion-webui-amdgpu\models\Stable-diffusion\ponyDiffusionV6XL_v6StartWithThisOne.safetensors
Creating model from config: D:\sd\stable-diffusion-webui-amdgpu\repositories\generative-models\configs\inference\sd_xl_base.yaml
creating model quickly: OSError
Traceback (most recent call last):
File "D:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 406, in hf_raise_for_status
response.raise_for_status()
File "D:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\requests\models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/None/resolve/main/config.json
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 403, in cached_file
resolved_file = hf_hub_download(
File "D:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "D:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 862, in hf_hub_download
return _hf_hub_download_to_cache_dir(
File "D:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 969, in _hf_hub_download_to_cache_dir
_raise_on_head_call_error(head_call_error, force_download, local_files_only)
File "D:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1484, in _raise_on_head_call_error
raise head_call_error
File "D:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1376, in _get_metadata_or_catch_error
metadata = get_hf_file_metadata(
File "D:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
File "D:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 1296, in get_hf_file_metadata
r = _request_wrapper(
File "D:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 277, in _request_wrapper
response = _request_wrapper(
File "D:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\file_download.py", line 301, in _request_wrapper
hf_raise_for_status(response)
File "D:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\huggingface_hub\utils\_http.py", line 454, in hf_raise_for_status
raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-6738d6cc-66dde5c8221cc356047be236;cae3f0eb-7b89-415b-b7a3-04add98d999e)
Repository Not Found for url: https://huggingface.co/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\tertag hadj ali\AppData\Local\Programs\Python\Python310\lib\threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "C:\Users\tertag hadj ali\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "D:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "D:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "D:\sd\stable-diffusion-webui-amdgpu\modules\ui_settings.py", line 316, in <lambda>
fn=lambda value, k=k: self.run_settings_single(value, key=k),
File "D:\sd\stable-diffusion-webui-amdgpu\modules\ui_settings.py", line 95, in run_settings_single
if value is None or not opts.set(key, value):
File "D:\sd\stable-diffusion-webui-amdgpu\modules\options.py", line 165, in set
option.onchange()
File "D:\sd\stable-diffusion-webui-amdgpu\modules\call_queue.py", line 14, in f
res = func(*args, **kwargs)
File "D:\sd\stable-diffusion-webui-amdgpu\modules\initialize_util.py", line 181, in <lambda>
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
File "D:\sd\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 992, in reload_model_weights
load_model(checkpoint_info, already_loaded_state_dict=state_dict)
File "D:\sd\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 831, in load_model
sd_model = instantiate_from_config(sd_config.model, state_dict)
File "D:\sd\stable-diffusion-webui-amdgpu\modules\sd_models.py", line 775, in instantiate_from_config
return constructor(**params)
File "D:\sd\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\models\diffusion.py", line 61, in __init__
self.conditioner = instantiate_from_config(
File "D:\sd\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "D:\sd\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 88, in __init__
embedder = instantiate_from_config(embconfig)
File "D:\sd\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\util.py", line 175, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "D:\sd\stable-diffusion-webui-amdgpu\repositories\generative-models\sgm\modules\encoders\modules.py", line 361, in __init__
self.transformer = CLIPTextModel.from_pretrained(version)
File "D:\sd\stable-diffusion-webui-amdgpu\modules\sd_disable_initialization.py", line 68, in CLIPTextModel_from_pretrained
res = self.CLIPTextModel_from_pretrained(None, *model_args, config=pretrained_model_name_or_path, state_dict={}, **kwargs)
File "D:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\modeling_utils.py", line 3506, in from_pretrained
resolved_config_file = cached_file(
File "D:\sd\stable-diffusion-webui-amdgpu\venv\lib\site-packages\transformers\utils\hub.py", line 426, in cached_file
raise EnvironmentError(
OSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`
Failed to create model quickly; will retry using slow method.
```
### Additional information
_No response_ | bug-report | low | Critical |
2,664,726,300 | deno | `deno test --doc` "All imports in import declaration are unused" TS6192 error. | I'm receiving the error:
```
TS6192 [ERROR]: All imports in import declaration are unused.
```
This occurs across a couple of files and I can't seem to figure out why it is happening. Looking at the doc examples in the referenced locations/files, all appears correct to me. I'm assuming this is some sort of Deno issue?
Example:
https://github.com/erictaylor/csp/actions/runs/11872117400/job/33085438634#step:8:18
Files referenced:
- https://github.com/erictaylor/csp/blob/fb7f7cd9a47ec20bd4e63168ec06526eaf0f4785/src/policy.ts#L11
- https://github.com/erictaylor/csp/blob/fb7f7cd9a47ec20bd4e63168ec06526eaf0f4785/src/value.ts#L111
Version: Deno 2.0.6
| needs investigation,testing | low | Critical |
2,664,740,256 | godot | Skeleton3D.set_bone_pose() does not set the real pose transform | ### Tested versions
Reproducible in 4.3.stable
### System information
Godot v4.3.stable (77dcf97d8) - NixOS #1-NixOS SMP PREEMPT_DYNAMIC Thu Sep 12 09:13:13 UTC 2024 - X11 - Vulkan (Mobile) - integrated Intel(R) Graphics (ADL GT2) - 12th Gen Intel(R) Core(TM) i5-1240P (16 Threads)
### Issue description
This issue affects my implementation of XR Hand tracking in the addon https://github.com/Godot-Dojo/Godot-XR-AH
Because people's hands are different sizes, the OpenXR hand tracking interface outputs the XYZ coordinates of the joints of the fingers wherever they are, and does not assume a particular skeleton size.
Since chains of Skeleton3D bones (eg in the fingers from the metacarpals on up) are usually aligned along the Y-axis, I decided I should be able to apply a scaling in the Y-axis of each bone so it would fit the model's joint positions to the joint positions provided by the OpenXR interface.
Consider the index finger. Suppose the proximal (first bone above your knuckle) bone length of the model is `a1` and the distal bone length of the model is `a2`, and suppose the corresponding bone lengths of your real hand are `b1` and `b2`.
Suppose the rotation between these two bones is `rot`.
To make the skeleton fit your real hand properly:
* proximal bone pose_transform `t1.basis = Scale(1,b1/a1,1)`
* distal bone pose_transform `t2.basis = Scale(1,a1/b1,1) * rot * Scale(1,b2/a2,1)`
This left-multiplication of a Scale transform is necessary so that the scale of `t1 * t2` is `Scale(1,b2/a2,1)` to stretch the next bone only along its Y-axis.
The function implementation of [Skeleton3D::set_bone_pose](https://github.com/godotengine/godot/blob/master/scene/3d/skeleton_3d.cpp#L841) is not going to properly extract this pre-scaling and preserve the transform that was set (see Steps to Reproduce)
```C++
void Skeleton3D::set_bone_pose(int p_bone, const Transform3D &p_pose) {
<snip>
bones[p_bone].pose_position = p_pose.origin;
bones[p_bone].pose_rotation = p_pose.basis.get_rotation_quaternion();
bones[p_bone].pose_scale = p_pose.basis.get_scale();
  <snip>
}
```
To put this another way, the `set_bone_pose()` function only allows scaled-conformal transforms, which could be argued to be a reasonable limitation if nobody wants sheared transforms applied to their meshes. Unfortunately, it is the `global_bone_pose` transforms that are applied to the mesh, and since the product of two scaled-conformal transforms is not always scaled-conformal, you cannot avoid a sheared transform: avoiding one would require non-scaled-conformal transforms in the `bone_pose` values.
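The non-closure claim above can be checked numerically. As an illustration (plain standard-library Python, not Godot code), composing the same pre-scale and rotation as in the report's "Prescale comparison" shows the resulting basis carries shear, i.e. its columns are no longer mutually orthogonal:

```python
import math

def mat_mul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rot_x(theta):
    """Rotation about the X axis, matching Basis(Vector3(1,0,0), theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

scale_b = [[1, 0, 0], [0, 2, 0], [0, 0, 1]]  # Basis.from_scale(Vector3(1,2,1))

# Pre-scaled basis: Scale(1,2,1) * Rot_x(10 deg)
basis = mat_mul(scale_b, rot_x(math.radians(10)))

# A rotation-times-diagonal-scale basis has mutually orthogonal columns.
# A nonzero dot product between columns means the basis carries shear,
# which a (position, rotation, scale) decomposition cannot represent.
col1 = [basis[i][1] for i in range(3)]
col2 = [basis[i][2] for i in range(3)]
dot = sum(col1[i] * col2[i] for i in range(3))
print(abs(dot) > 1e-6)  # True: the pre-scaled basis is sheared
```

Since `set_bone_pose()` stores only position, rotation quaternion, and scale, the sheared component of such a basis is silently dropped, which matches the mismatch the report's "Prescale comparison" prints.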
I don't know how to resolve this issue.
Presently, there is no possible way to use the non-uniform `bone_pose_scale` feature on skeleton bones and get a reasonable result. The [Bone Struct](https://github.com/godotengine/godot/blob/master/scene/3d/skeleton_3d.h#L95) has a `Transform3D pose_cache` attribute, but it is tightly coupled to the three components `pose_position`, `pose_rotation`, and `pose_scale`.
Even if I could set that underlying transform directly, we could forget about animating it, since animations only apply to those three components.
Alternatively, we could add another attribute called `pose_prescale` that allows us to undo the scale from the previous transform, which would add extra computations on all Skeleton3Ds that no one wants right now.
At the very least, this is a serious gotcha, not unlike using [Transform3D.inverse()](https://docs.godotengine.org/en/stable/classes/class_transform3d.html#class-transform3d-method-inverse) when you should use an `affine_inverse()`, except in this case there is no function to do what is expected.
### Steps to reproduce
I can replicate it with this simple bit of EditorScript code on a new Skeleton3D
The main issue is that if I call `set_bone_pose()` with a valid transform, the value of `get_bone_pose()` can be different. This issue caused me two days of debugging of my own transform code before I suspected this fault.
```GDScript
@tool
extends EditorScript
func _run():
var s = Skeleton3D.new()
s.add_bone("one")
var rot = Basis(Vector3(1,0,0), deg_to_rad(10))
var scaA = Basis.from_scale(Vector3(1,0.5,1))
var scaB = Basis.from_scale(Vector3(1,2,1))
print("Prescale comparison")
var b = scaB*rot
var t = Transform3D(b, Vector3(0,0,0))
s.set_bone_pose(0, t)
var r = s.get_bone_pose(0)
print(r)
print(t)
print("Postscale comparison")
var b1 = rot*scaB
var t1 = Transform3D(b1, Vector3(0,0,0))
s.set_bone_pose(0, t1)
var r1 = s.get_bone_pose(0)
print(r1)
print(t1)
```
outputs:
```
Prescale comparison
[X: (1, 0, 0), Y: (0, 1.969615, 0.173648), Z: (0, -0.091709, 1.040217), O: (0, 0, 0)]
[X: (1, 0, 0), Y: (0, 1.969615, 0.173648), Z: (0, -0.347296, 0.984808), O: (0, 0, 0)]
Postscale comparison
[X: (1, 0, 0), Y: (0, 1.969615, 0.347296), Z: (0, -0.173648, 0.984808), O: (0, 0, 0)]
[X: (1, 0, 0), Y: (0, 1.969615, 0.347296), Z: (0, -0.173648, 0.984808), O: (0, 0, 0)]
```
### Minimal reproduction project (MRP)
see above | bug,topic:xr,topic:3d | low | Critical |
2,664,794,748 | ollama | Clarify JSONL as the Returned Format for Streaming JSON Objects | **Current Documentation**:
[API documentation](https://github.com/ollama/ollama/blob/4759d879f2376ffb9b82f296e442ec8ef137f27b/docs/api.md?plain=1#L79) states:
> A stream of JSON objects is returned.
**Proposal**:
Specify the format explicitly as:
> A stream of JSON objects in [JSON Lines (JSONL)] format is returned.
**Reasoning**:
By constraining the format to JSONL (i.e., each JSON object is serialized on a separate line as per [jsonlines.org](https://jsonlines.org/)), parsing implementations become simpler. Instead of relying on complex JSON parsing to determine object boundaries—especially when JSON content may include brackets—developers can leverage line-buffered iterators to process incoming chunks more efficiently.
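To make the parsing benefit concrete, here is a minimal Python sketch; the payload is illustrative only, not an actual Ollama response:

```python
import io
import json

# Illustrative stream: three JSON objects in JSON Lines form, one per line.
stream = io.StringIO(
    '{"response": "Hello", "done": false}\n'
    '{"response": " world", "done": false}\n'
    '{"response": "", "done": true}\n'
)

# With JSONL guaranteed, object boundaries are just newlines: a line-buffered
# iterator suffices, with no bracket-depth or quote tracking needed.
chunks = [json.loads(line) for line in stream if line.strip()]
print("".join(c["response"] for c in chunks))  # Hello world
```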
This clarification aligns the documentation with common parsing practices for streaming JSON data and reduces potential ambiguity. | documentation | low | Minor |
2,664,798,836 | godot | Joystick Motion not triggering Pressed signal on Button when assigned via Shortcut | ### Tested versions
4.3.stable.mono (official download)
### System information
Godot v4.3.stable.mono - Windows 10.0.19045 - Vulkan (Mobile) - dedicated NVIDIA GeForce RTX 3090 (NVIDIA; 32.0.15.6109) - Intel(R) Core(TM) i9-10900K CPU @ 3.70GHz (20 Threads)
### Issue description
When you attach an InputEventAction shortcut to a Button, the button will not trigger its pressed signal in response to that shortcut if the shortcut is triggered by the gamepad's joystick. At the same time, if you log `Input.IsActionPressed` for the same InputEventAction in the `_Process` function, it correctly detects the action as pressed from the joystick motion event.
I also tried using InputEventJoypadMotion in the Shortcut (instead of InputEventAction), but this also did not trigger the Shortcut.
### Steps to reproduce
See the attached project. There's a single button with the `ui_up` action assigned via a Shortcut. The pressed signal for the button logs "UP", and in `_Process` the same action is checked; if it is detected as pressed, the "UP in _Process" line is printed every frame.
Upon launching, if you press up arrow on keyboard or up arrow on the gamepad DPAD, you will see in logs:
```
"UP"
"UP in _Process"
"UP in _Process"
"UP in _Process"
... (repeats the same as long as you keep the action pressed)
```
If you instead use the joystick and move it upwards, the logs will only show:
```
"UP in _Process"
"UP in _Process"
"UP in _Process"
... (repeats the same as long as you keep the action pressed)
```
The button does not respond to the action being pressed, even though the action is clearly marked as pressed from the other code path.
### Minimal reproduction project (MRP)
[joystickinputrepro.zip](https://github.com/user-attachments/files/17787663/joystickinputrepro.zip)
| bug,topic:input,topic:gui | low | Major |
2,664,838,360 | pytorch | `torch.compile` tries to invoke a torch op on numpy.ndarray in nightly release | ### 🐛 Describe the bug
I'm not sure if this is the intended behavior or a bug.
Using the nightly release (`torch-2.6.0.dev20241115+cu124`), when I run the following code (lifted out of [dynamo/test_misc.py#test_numpy_gt()](https://github.com/pytorch/pytorch/blob/main/test/dynamo/test_misc.py#L2530), but run with the default device set to `"cuda"` instead of `"cpu"`):
```python
import torch
import numpy as np
with torch.device("cuda"):
x = np.arange(10)
def fn(y):
return y >= 3
r = torch.compile(fn)(x)
print(r)
```
I get the following error:
```
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/google/home/kiuk/.pyenv/versions/venv311/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 2674, in __call__
method_callable = getattr(obj, self.method)
^^^^^^^^^^^^^^^^^^^^^^^^^
torch._dynamo.exc.TorchRuntimeError: Failed running call_function <Wrapped method <original ge>>(*(FakeTensor(..., device='cuda:0', size=(10,), dtype=torch.int64), 3), **{}):
'ndarray' object has no attribute 'ge'
```
This seems to indicate that the compiled fn is trying to invoke `torch.ge()` on the input `x` as an `np.ndarray` without first converting `x` into a `torch.Tensor`.
The code above runs fine on `torch-2.5.1`.
I also noticed that the same type of error happens for `test_numpy_as_global` and `test_numpy_min` in `dynamo/test_misc.py` if these tests are run with `"cuda"` as the default device.
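The missing-attribute half of the failure can be confirmed without the compiler. A minimal sketch, assuming only that `numpy` is installed:

```python
import numpy as np

x = np.arange(10)

# Tensors expose comparison methods such as .ge(); plain ndarrays do not,
# which matches the AttributeError raised inside the compiled function.
print(hasattr(x, "ge"))  # False

# The eager-mode result the compiled function should have produced:
print((x >= 3).tolist())
```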
### Versions
Collecting environment information...
PyTorch version: 2.6.0.dev20241115+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux rodete (x86_64)
GCC version: (Debian 13.2.0-13) 13.2.0
Clang version: 16.0.6 (26)
CMake version: version 3.29.6
Libc version: glibc-2.38
Python version: 3.11.4 (main, Mar 27 2024, 15:05:58) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.9.10-1rodete5-amd64-x86_64-with-glibc2.38
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A5000
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 3945WX 12-Cores
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 84%
CPU max MHz: 4425.7808
CPU min MHz: 2200.0000
BogoMIPS: 7984.65
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 6 MiB (12 instances)
L3 cache: 64 MiB (4 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] mypy==1.10.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.0
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241115+cu124
[pip3] torchsummary==1.5.1
[pip3] triton==3.1.0
[conda] magma-cuda110 2.5.2 1 pytorch
[conda] mkl-include 2024.1.0 intel_691 intel
[conda] mkl-static 2024.1.0 intel_691 intel
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @ezyang | triaged,oncall: pt2,module: dynamo | low | Critical |
2,664,842,864 | ui | [feat]: List ListItem | ### Feature description
I'm migrating from Vue/Quasar to React/ShadUI, I can't find a List and ListItem component.
Something like https://quasar.dev/vue-components/list-and-list-items
or https://mui.com/material-ui/react-list/
### Affected component/components
_No response_
### Additional Context
I'm aware `<ul>` and `<ol>` exist, but those are not that great if you're trying to display a tappable list with items that lead somewhere.
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,664,846,370 | rust | Tracking Issue for `async_fn_in_dyn_trait` | This is a tracking issue for dyn-compatible traits with async functions in them, called "async functions in dyn trait".
The feature gate for the issue is `#![feature(async_fn_in_dyn_trait)]`.
### About tracking issues
Tracking issues are used to record the overall progress of implementation. They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions. A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature. Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
### Steps
- [x] Approve as lang experiment.
- [ ] Implement in nightly.
- https://github.com/rust-lang/rust/pull/133122
- [ ] Accept an RFC.
- [ ] Add documentation to the [dev guide][].
- See the [instructions][doc-guide].
- [ ] Add documentation to the [reference][].
- See the [instructions][reference-instructions].
- [ ] Add formatting for new syntax to the [style guide][].
- See the [nightly style procedure][].
- [ ] Stabilize.
- See the [instructions][stabilization-instructions].
[dev guide]: https://github.com/rust-lang/rustc-dev-guide
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
[edition guide]: https://github.com/rust-lang/edition-guide
[nightly style procedure]: https://github.com/rust-lang/style-team/blob/master/nightly-style-procedure.md
[reference]: https://github.com/rust-lang/reference
[reference-instructions]: https://github.com/rust-lang/reference/blob/master/CONTRIBUTING.md
[stabilization-instructions]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[style guide]: https://github.com/rust-lang/rust/tree/master/src/doc/style-guide
### Unresolved Questions
TODO.
### Related
TODO.
cc @rust-lang/lang | T-lang,C-tracking-issue,B-experimental | low | Critical |
2,664,847,717 | godot | Incorrect "Parse Error" for cyclic dependency in Resource | ### Tested versions
- Reproducible in: v4.4.dev4.official [36e6207bb]
### System information
macOS 15.2 Beta (24C5057p)
### Issue description
Opening a project with a cyclic dependency between a scene and a resource prints an incorrect error twice in the output log:
```
scene/resources/resource_format_text.cpp:41 - res://some_my_res.tres:6 - Parse Error:
Failed loading resource: res://some_my_res.tres. Make sure resources have been imported by opening the project in the editor at least once.
scene/resources/resource_format_text.cpp:41 - res://some_my_res.tres:6 - Parse Error:
Failed loading resource: res://some_my_res.tres. Make sure resources have been imported by opening the project in the editor at least once.
```
The resource is preloaded in the main scene, and the main scene's class name is used as a function parameter type in the resource.
### Steps to reproduce
Open attached project.
See output panel.
You will see errors.
Then remove the parameter type "MyScene" from the "do" function in my_res.gd
Reload project
No errors in output window
### Minimal reproduction project (MRP)
[testgodotwrongerror.zip](https://github.com/user-attachments/files/17787728/testgodotwrongerror.zip)
| bug,topic:core | low | Critical |
2,664,849,482 | godot | Improper navigation mesh merging in some cases | ### Tested versions
- Reproducible in 4.2.stable, 4.3.stable and 4.4.dev-4
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4070 Ti SUPER (NVIDIA; 32.0.15.6603) - AMD Ryzen 7 7800X3D 8-Core Processor (16 Threads)
### Issue description
The navigation polygon baking doesn't seem to be working properly in some situations as described in the "steps to reproduce" section.
### Steps to reproduce
To reproduce, create a new scene with a Node2D and attach the following script:
```gdscript
extends Node2D
var navigation_region = NavigationRegion2D.new()
func _ready() -> void:
navigation_region.navigation_polygon = NavigationPolygon.new()
var agent_radius = 0
navigation_region.navigation_polygon.baking_rect = Rect2(
0,
0,
500,
500,
)
navigation_region.navigation_polygon.agent_radius = 0
var source_geometry_data = NavigationMeshSourceGeometryData2D.new()
source_geometry_data.add_traversable_outline([
Vector2(0, 0),
Vector2(500, 0),
Vector2(500, 500),
Vector2(0, 500),
])
for j in range(1, 5):
for i in range(1, 5):
var top_left = Vector2(i * 100 - 50, j * 100 - 50)
var bottom_right = Vector2(i * 100 + 50, j * 100 + 50)
var outline = [
top_left,
Vector2(bottom_right.x, top_left.y),
bottom_right,
Vector2(top_left.x, bottom_right.y),
top_left
]
if (
(i == 1 and j == 1)
or (i == 2 and j == 2)
):
source_geometry_data.add_obstruction_outline(outline)
add_child(navigation_region)
NavigationServer2D.bake_from_source_geometry_data(
navigation_region.navigation_polygon,
source_geometry_data
)
```
Enable "Visible Navigation" from the debug menu and run the scene. You should see the following:

This is supposed to show 2 squares without navigable areas. Instead, the second square has a triangle drawn inside it.
This issue doesn't happen if `navigation_region.navigation_polygon.agent_radius` is above 0.
I also reproduced the same issue using traversable outlines only, instead of adding obstructions.
Also with the 2D scene editor using polygons:
```
[gd_scene load_steps=3 format=3 uid="uid://b2yb1u0la5ykh"]
[sub_resource type="NavigationPolygon" id="NavigationPolygon_41b13"]
vertices = PackedVector2Array(0, 0, 0, -40, 100, -100, 100, 100, -40, -40, -40, -80, -100, 100, -40, 0, -100, -100, -80, -80, -80, -40)
polygons = Array[PackedInt32Array]([PackedInt32Array(0, 1, 2, 3), PackedInt32Array(2, 1, 4, 5), PackedInt32Array(0, 3, 6, 7), PackedInt32Array(8, 2, 5, 9), PackedInt32Array(6, 8, 9, 10), PackedInt32Array(7, 6, 10, 4), PackedInt32Array(7, 4, 1), PackedInt32Array(7, 1, 4)])
outlines = Array[PackedVector2Array]([PackedVector2Array(-100, -100, 100, -100, 100, 100, -100, 100)])
agent_radius = 0.0
baking_rect = Rect2(-100, -100, 200, 200)
[node name="Node2D" type="Node2D"]
[node name="NavigationRegion2D" type="NavigationRegion2D" parent="."]
navigation_polygon = SubResource("NavigationPolygon_41b13")
[node name="Polygon2D" type="Polygon2D" parent="NavigationRegion2D"]
z_index = -1
polygon = PackedVector2Array(-80, -80, -40, -80, -40, -40, -80, -40)
[node name="Polygon2D2" type="Polygon2D" parent="NavigationRegion2D"]
z_index = -1
position = Vector2(40, 40)
polygon = PackedVector2Array(-80, -80, -40, -80, -40, -40, -80, -40)
```
This gives the following error:
```
E 0:00:00:0602 sync: Navigation map synchronization error. Attempted to merge a navigation mesh polygon edge with another already-merged edge. This is usually caused by crossing edges, overlapping polygons, or a mismatch of the NavigationMesh / NavigationPolygon baked 'cell_size' and navigation map 'cell_size'. If you're certain none of above is the case, change 'navigation/3d/merge_rasterizer_cell_scale' to 0.001.
<C++ Source> modules/navigation/nav_map.cpp:990 @ sync()
```
### Minimal reproduction project (MRP)
[navtest.zip](https://github.com/user-attachments/files/17787734/navtest.zip)
| bug,needs testing,topic:navigation | low | Critical |
2,664,858,025 | godot | The property string "Physics Layer %d" in the TileSet editor is not translated | ### Tested versions
- since 4.0
- available in 4.4.dev4
- tested only for windows
### System information
Windows
### Issue description
https://github.com/godotengine/godot/blob/master/editor/plugins/tiles/tile_set_atlas_source_editor.cpp#L742
For some reason, the string "Physics Layer %d" is not translated when the Editor is compiled. I tested in both Turkish and Spanish; both languages have had a translation for it for over a year.

### Steps to reproduce
- open the editor in a language other than English, start a new empty project
- add a new TileMap or TileMapLayer
- set a new TileSet
- add a new Physics Layer
- under the TileSet editor, add a tile sheet and let it automatically create an atlas
- click on "Select", then select a tile
- expand the Physics property to see that this string is not translated
### Minimal reproduction project (MRP)
same as above | bug,topic:editor | low | Minor |
2,664,868,259 | next.js | PNPM and output standalone issue running pnpm run dev after pnpm run build | ### Link to the code that reproduces this issue
https://github.com/Mr-Vipi/test-pnpm
### To Reproduce
1. clone the project I provided; it was created with `pnpm create next-app@14.2.18`. There I changed only `output: standalone` in the Next.js config.
2. run `pnpm run build`. It should terminate successfully.
3. run `pnpm run dev` once the build is finished and you'll get the error.
### Current vs. Expected behavior
I get this error: `Error: Cannot find module 'next/dist/pages/_app'` instead of being able to run the local server.
I have to delete node_modules and install dependencies again before I can run the local server, but then after every build I have to repeat the process of deleting node_modules.
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 10 Home
Available memory (MB): 16280
Available CPU cores: 8
Binaries:
Node: 22.11.0
npm: N/A
Yarn: N/A
pnpm: 9.12.3
Relevant Packages:
next: 14.2.18 // An outdated version detected (latest is 15.0.3), upgrade is highly recommended!
eslint-config-next: 14.2.18
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.3
Next.js Config:
output: standalone
```
### Which area(s) are affected? (Select all that apply)
Developer Experience, Output (export/standalone)
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local)
### Additional context
I tested as well with ▲ Next.js 15.0.4-canary.13, and the error persists
I tested with npm and got no error, so it seems related to pnpm.
2,664,869,119 | godot | 'as' operator breaks expectation for Packed Array types being copied by reference | ### Tested versions
reproducible in 4.3 stable
### System information
Godot v4.3.stable unknown - Arch Linux #1 SMP PREEMPT_DYNAMIC Tue, 22 Oct 2024 18:31:38 +0000 - X11 - Vulkan (Forward+) - dedicated AMD Radeon RX 7900 XTX (RADV NAVI31) - AMD Ryzen 9 3900X 12-Core Processor (24 Threads)
### Issue description
The 'as' operator does not follow the convention described for packed arrays in
https://docs.godotengine.org/en/stable/classes/class_packedbytearray.html
"Note: Packed arrays are always passed by reference. To get a copy of an array that can be modified independently of the original array, use [duplicate](https://docs.godotengine.org/en/stable/classes/class_packedbytearray.html#class-packedbytearray-method-duplicate). This is not the case for built-in properties and methods. The returned packed arrays of these are copies, and changing them will not affect the original value. To update a built-in property you need to modify the returned array, and then assign it to the property again."
In addition, there is no note (that I could find) of an exception on the static typing docs page.
https://docs.godotengine.org/en/stable/tutorials/scripting/gdscript/static_typing.html
The usage of 'as' should not copy packed arrays by value; or, if that's technically infeasible, the docs need to be updated to reflect this strange interaction.
### Steps to reproduce
```gdscript
func _ready() -> void:
    var array = PackedByteArray()
    array.resize(4)

    var array_2 = array
    var array_3 = (array as PackedByteArray)

    array.set(0, 32)
    array_2.set(1, 64)
    array_3.set(2, 128)

    print(array)
    print(array_2)
    print(array_3)
```
outputs
```
[32, 64, 0, 0]
[32, 64, 0, 0]
[0, 0, 128, 0]
```
expectation
```
[32, 64, 128, 0]
[32, 64, 128, 0]
[32, 64, 128, 0]
```
Tried this test with both `PackedByteArray` and `PackedInt32Array`.
### Minimal reproduction project (MRP)
N/A | topic:gdscript,documentation | low | Minor |
2,664,869,399 | TypeScript | Non-null assertion operator whitespace allowances can create confusing conditional statements | ### 🔎 Search Terms
You can write a non-null assertion operator with whitespace after a variable name and no whitespace before certain keywords such as `in` and `instanceof`, and the type checker doesn't fail; instead it strips the operator, creating the opposite of the condition as written.
To someone not familiar with TypeScript/JavaScript, `key !in obj` reads like `if key not in obj`. To them this is as if tsc treated `key !== 'a'` as `key === 'a'`.
### 🕗 Version & Regression Information
- This changed between versions N/A and N/A
- This changed in commit or PR N/A
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about Common "Bugs" That Aren't Bugs
- I was unable to test this on prior versions because N/A
### ⏯ Playground Link
https://www.typescriptlang.org/play/?ts=5.8.0-dev.20241116#code/MYewdgzgLgBAZiEAuGBvAhgfhdATgSzAHMBfGAXjXRQHIAjdXGkgbgChRJYBrAUwE8KMGuhox0EGJ2js2+ODAAUfQQEJC8RAEo0bGFPAQQAG14A6YyCKKEIANoqAulvYk2chTcQx1XdGGBeEAUAeToAK15gKB1UPQNIE3NLayh+AAcghVsXNhIgA
### 💻 Code
```ts
const foo: {a?: string} = {a: 'bar'};
const key = 'a' as const;

if (key !in foo) {
  // non-null operator stripped and key exists in obj
  console.log(foo[key]);
}

if (foo !instanceof Object) {
  // non-null operator stripped and obj is Object
  console.log(typeof foo);
}
```
### 🙁 Actual behavior
tsc stripped the non-null assertion; the operator should not be treated as a non-null assertion at all in this situation.
### 🙂 Expected behavior
I would expect a syntax error in this situation.
### Additional information about the issue
This is a real problem, not just hypothetical: I noticed a discussion started by a person learning TypeScript who complained that their code evaluated to true regardless of the "negation operator" being used before the `in` keyword.
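For comparison, the unambiguous way to express the intended check is to parenthesize the negation. A plain-JavaScript sketch (the parsing pitfall itself is TypeScript-only, since `!` as a non-null assertion does not exist in JS):

```javascript
const foo = { a: 'bar' };
const key = 'a';

// `!(key in foo)` negates the membership test, which is what `key !in foo`
// visually suggests. In TypeScript, `key !in foo` instead parses as
// `(key!) in foo`, so the test is never negated.
if (!(key in foo)) {
  console.log('key is missing');
} else {
  console.log(foo[key]); // prints "bar"
}
```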

| Suggestion,Help Wanted,Experience Enhancement | low | Critical |
2,664,870,993 | rust | debuginfo: do the debugger annotations support revisions? | Noticed while working on #133115. | A-debuginfo,T-compiler,T-bootstrap,C-bug,A-compiletest | low | Critical |
2,664,873,547 | rust | add a note if a type implements a trait with the same name as the required trait | ### Code
```Rust
trait Trait {}

mod m {
    pub trait Trait {}
    pub struct St;
    impl Trait for St {}
}

fn func<T: Trait>(_: T) {}

fn main() {
    func(m::St);
}
```
### Current output
```Shell
Compiling playground v0.0.1 (/playground)
error[E0277]: the trait bound `St: Trait` is not satisfied
--> src/main.rs:12:10
|
12 | func(m::St);
| ---- ^^^^^ the trait `Trait` is not implemented for `St`
| |
| required by a bound introduced by this call
|
help: this trait has no implementations, consider adding one
--> src/main.rs:1:1
|
1 | trait Trait {}
| ^^^^^^^^^^^
note: required by a bound in `func`
--> src/main.rs:9:12
|
9 | fn func<T: Trait>(_: T) {}
| ^^^^^ required by this bound in `func`
```
### Desired output
```Shell
Compiling playground v0.0.1 (/playground)
error[E0277]: the trait bound `St: Trait` is not satisfied
--> src/main.rs:12:10
|
12 | func(m::St);
| ---- ^^^^^ the trait `Trait` is not implemented for `St`
| |
| required by a bound introduced by this call
|
note: `St` implements similarly named `crate::m::Trait`, but not `crate::Trait`
help: this trait has no implementations, consider adding one
--> src/main.rs:1:1
|
1 | trait Trait {}
| ^^^^^^^^^^^
note: required by a bound in `func`
--> src/main.rs:9:12
|
9 | fn func<T: Trait>(_: T) {}
| ^^^^^ required by this bound in `func`
```
### Rationale and extra context
This can easily happen if you accidentally have two different versions of the same crate in your dependency tree (see [this URLO thread](https://users.rust-lang.org/t/rust-lacks-a-comprehensive-crypto-library-for-common-algorithms/121209/10)), or have two dependencies that use different traits for the same thing (e.g. `tokio::io::AsyncRead` and `futures_io::AsyncRead`).
### Other cases
Should also probably trigger if the implemented trait has a small enough edit distance between it and the desired trait, similar to the help messages for misspelled methods and types.
### Rust Version
```Shell
1.82.0
```
### Anything else?
_No response_ | A-diagnostics,A-trait-system,T-compiler | low | Critical |
2,664,933,749 | neovim | Lua: clone object/table (like js structuredClone) | ### Problem
(Not fully formed yet, this is just a tracking/reference issue.)
JS has `structuredClone` (https://developer.mozilla.org/en-US/docs/Web/API/Window/structuredClone) and the related concepts of serializability and equality.
### Expected behavior
The Nvim Lua stdlib has `vim.deepcopy` but no way to override its "hydrator" algorithm.
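For reference, the JS semantics being pointed at: `structuredClone` (global in Node ≥ 17) yields a deep copy that is independent of the original, unlike plain assignment, which only aliases it:

```javascript
// Plain assignment aliases the object; structuredClone deep-copies it.
const original = { nested: { n: 1 } };
const alias = original;                   // shares state with original
const clone = structuredClone(original);  // independent deep copy

clone.nested.n = 99;   // does not affect original
alias.nested.n = 2;    // does affect original
console.log(original.nested.n); // prints 2
```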
- do we need a "transferable objects" concept (as opposed to "cloned objects"). cf. js docs: https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Structured_clone_algorithm
- detailed documentation of how the clone is performed, what is/isn't serializable
- do we need a table hashing util function (for equality/comparison)? | enhancement,lua | low | Minor |
2,665,043,049 | pytorch | Replace reduce(operator.mul) with math.prod for computing product of dimensions | The current implementation within the autograd _functions utils.py [file](https://github.com/pytorch/pytorch/blob/main/torch/autograd/_functions/utils.py) relies on reduce and operator.mul to compute the product of a list of dimensions, which can be replaced with the more modern and readable math.prod.
cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | module: onnx,triaged | low | Minor |
2,665,043,220 | node | Some client http2 streams are closed with error code NGHTTP2_INTERNAL_ERROR after receiving GOAWAY frame | ### Version
v23.2.0 (also reproduces on v18.20.4, v20.17.0)
### Platform
```text
Darwin UA-KJP26G976P 23.4.0 Darwin Kernel Version 23.4.0: Wed Feb 21 21:44:54 PST 2024; root:xnu-10063.101.15~2/RELEASE_ARM64_T6030 arm64
```
### Subsystem
http2
### What steps will reproduce the bug?
Start a simple server that imitates a graceful shutdown by sending GOAWAY every N requests.
```js
const http2 = require('http2');
const server = http2.createServer({
  peerMaxConcurrentStreams: 100000,
  settings: {
    maxConcurrentStreams: 100000,
  },
  maxSessionMemory: 200,
});

const N = 97;
let count = 0;
const maxLastStreamId = 2 ** 31 - 1;

server.on('stream', (stream) => {
  count++;
  stream.respond({
    'content-type': 'text/html',
    ':status': 200,
  });
  stream.end('Hello World ' + count);
  if (count % N === 0) {
    console.log('sending goaway frame at count:', count);
    stream.session.goaway(0, maxLastStreamId);
  }
  stream.on('error', (err) => {
    console.log('error:', err);
  });
});
server.listen(8000);
```
then a client:
```js
const http2 = require('http2');
const payload = Buffer.alloc(1024 * 10, 'a');
let client;
function connect() {
  client = http2.connect('http://localhost:8000', {
    maxSessionMemory: 200,
    peerMaxConcurrentStreams: 100000,
    settings: {
      maxConcurrentStreams: 100000,
    },
  });
  client.on('goaway', () => {
    connect();
  });
  client.on('error', (err) => {
    console.log('client received error:', err);
    connect();
  });
}

const errorStats = new Map();

function addError(err) {
  const key = err.toString();
  const count = errorStats.get(key) || 0;
  errorStats.set(key, count + 1);
}

function dumpErrorStatistics() {
  for (const [key, value] of errorStats) {
    console.log('error:', key, 'count:', value);
  }
  errorStats.clear();
}

const MAX_IN_FLIGHT = 67;

function sendRequests() {
  let inFlight = 0;
  while (inFlight < MAX_IN_FLIGHT) {
    try {
      const stream = client.request({
        ':path': '/',
        ':method': 'POST',
        'content-type': 'text/html',
      });
      inFlight++;
      stream.on('error', (err) => {
        addError(err);
      });
      stream.on('data', () => {});
      stream.on('close', () => {});
      stream.write(payload);
      stream.end();
    } catch (err) {
      addError(err);
    }
  }
  dumpErrorStatistics();
}
connect();
setInterval(sendRequests, 7);
```
### How often does it reproduce? Is there a required condition?
Reproduces every few seconds on my machine. The specific values of parameters (the number of requests between GOAWAY frames, the request batch size, interval between batches) may require adjustment on another machine.
### What is the expected behavior? Why is that the expected behavior?
The server is sending GOAWAY with last stream id = 2^31 - 1, which should allow completing the existing requests.
This [comment](https://github.com/nodejs/node/blob/669b6927e498a0cfcc1e57afc53d8c5e53ba190f/lib/internal/http2/core.js#L1198) states that the existing streams should complete successfully, while the pending streams should be cancelled.
I would expect to never see `NGHTTP2_INTERNAL_ERROR` on the client in this situation.
Also note that the client immediately reconnects after getting a `goaway` event, so it seems that the client doesn't break the `http2` module contract.
### What do you see instead?
Some client streams are closed with code `NGHTTP2_INTERNAL_ERROR`, as seen in the `client.js` output:
```
error: Error [ERR_HTTP2_STREAM_ERROR]: Stream closed with error code NGHTTP2_INTERNAL_ERROR count: 67
error: Error [ERR_HTTP2_STREAM_ERROR]: Stream closed with error code NGHTTP2_INTERNAL_ERROR count: 67
error: Error [ERR_HTTP2_STREAM_ERROR]: Stream closed with error code NGHTTP2_INTERNAL_ERROR count: 67
```
If we run the client with `NODE_DEBUG=http2`, we can see an attempt to send the headers of a new stream after the GOAWAY was already processed:
```
HTTP2 1538: Http2Session client: goaway 0 received [last stream id: 2147483647]
HTTP2 1538: Http2Session client: created
HTTP2 1538: Http2Session client: marking session closed
HTTP2 1538: Http2Session client: submitting goaway
...
HTTP2 1538: Http2Session client: error sending frame type 1 on stream 403, code: 2
HTTP2 1538: Http2Stream 403 [Http2Session client]: closed with code 7, closed true, readable true
HTTP2 1538: Http2Stream 403 [Http2Session client]: destroying stream
HTTP2 1538: Http2Session client: error sending frame type 1 on stream 405, code: 2
```
The main issue is the client code receiving `internal error` instead of a `cancelled` error, making it impossible to retry the request in the general case.
Another artifact I'm seeing is that the client attempts to send a `RST_STREAM` with code 7 (REFUSED_STREAM) to the server, although the stream is initiated on the client and the server has never seen this stream. We verified this by looking at the traffic dump: the client is actually sending `RST_STREAM` for a new(!) client stream to the server after getting a `GOAWAY` frame.
### Additional information
This is a synthetic minimal reproduction of a real bug we're seeing in production when the gRPC client cannot handle a graceful shutdown of a connection to Envoy proxy.
According to `grpc-node` maintainer, the error code returned by node is too general for gRPC client to handle it gracefully: https://github.com/grpc/grpc-node/issues/2625#issuecomment-2181216770 | http2 | low | Critical |