| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,494,453,636 | yt-dlp | Add Smule | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
Austria
### Example URLs
- Song: https://www.smule.com/recording/billie-happier-than-ever-acoustic/33197115_4356089929
- Song: https://www.smule.com/p/33197115_4356089929
### Provide a description that is worded well enough to be understood
I tried to download video/audio with the newest nightly build, 2024.08.26.232811.
It says the site is unsupported. I think it triggers the wrong thing on the Twitter card; it should follow the video tag with the mp4 URL instead.
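To illustrate the suggested fix, here is a minimal sketch (hypothetical — not yt-dlp's actual extractor API, and the tag/attribute layout is an assumption, not verified against Smule's markup) of pulling the mp4 URL straight from the page's `<video>` tag instead of following the twitter:player iframe:

```python
import re
from typing import Optional


def extract_mp4_url(webpage: str) -> Optional[str]:
    # Hypothetical helper: look for a <video> tag whose src points at an mp4,
    # rather than recursing into the twitter:player iframe.
    m = re.search(r'<video[^>]+src="(?P<url>[^"]+\.mp4)"', webpage)
    return m.group('url') if m else None
```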
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp -vU https://www.smule.com/p/33197115_4356089929
[debug] Command-line config: ['-vU', 'https://www.smule.com/p/33197115_4356089929']
[debug] Portable config "C:\Users\xxxxxx\scoop\apps\yt-dlp-nightly\current\yt-dlp.conf": []
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.08.26.232811 from yt-dlp/yt-dlp-nightly-builds [41be32e78] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg N-116785-ge758b24396-20240828 (setts), ffprobe N-116785-ge758b24396-20240828
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-13.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1831 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.08.26.232811 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.08.26.232811 from yt-dlp/yt-dlp-nightly-builds)
[generic] Extracting URL: https://www.smule.com/p/33197115_4356089929
[generic] 33197115_4356089929: Downloading webpage
[redirect] Following redirect to https://www.smule.com/recording/billie-happier-than-ever-acoustic/33197115_4356089929
[generic] Extracting URL: https://www.smule.com/recording/billie-happier-than-ever-acoustic/33197115_4356089929
[generic] 33197115_4356089929: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] 33197115_4356089929: Extracting information
[debug] Looking for embeds
[debug] Identified a twitter:player iframe
[generic] Extracting URL: https://www.smule.com/recording/billie-happier-than-ever-acoustic/33197115_4356089929/twitter?utm_campaign=share&utm_medium=web&utm_source=twitter_card
[generic] twitter?utm_campaign=share&utm_medium=web&utm_source=twitter_card: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] twitter?utm_campaign=share&utm_medium=web&utm_source=twitter_card: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://www.smule.com/recording/billie-happier-than-ever-acoustic/33197115_4356089929/twitter?utm_campaign=share&utm_medium=web&utm_source=twitter_card
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1626, in wrapper
File "yt_dlp\YoutubeDL.py", line 1761, in __extract_info
File "yt_dlp\extractor\common.py", line 740, in extract
File "yt_dlp\extractor\generic.py", line 2526, in _real_extract
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.smule.com/recording/billie-happier-than-ever-acoustic/33197115_4356089929/twitter?utm_campaign=share&utm_medium=web&utm_source=twitter_card
```
| site-request | low | Critical |
2,494,499,893 | tauri | [bug] tauri cli rebuilds dependencies every time I save a file | ### Describe the bug
Every time I save a file, `cargo check` is invoked by rust-analyzer; tauri then triggers a reload and recompiles.
However, it recompiles a lot of crates on every save, which takes a long time. It may be a conflict with rust-analyzer.
This hurts development; it's hard to work when every file save triggers a minute-long recompile.
### Reproduction
`bunx tauri dev` in v2 project in vscode in macos with rust analyzer.
### Expected behavior
If I didn't change anything, it shouldn't rebuild anything.
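A common cause of this symptom is rust-analyzer and the `cargo` invocation behind `tauri dev` sharing the same `target/` directory, so each tool invalidates the other's incremental cache. One frequently suggested workaround — a sketch, assuming VS Code's `settings.json`; the target directory name is arbitrary — is to give rust-analyzer its own target directory:

```json
{
  // Keep rust-analyzer's build artifacts out of the directory used by
  // `tauri dev`, so the two stop invalidating each other's caches.
  "rust-analyzer.server.extraEnv": {
    "CARGO_TARGET_DIR": "target/analyzer"
  }
}
```

The first `cargo check` after this change will be a full build, but subsequent saves should reuse each tool's own cache.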
### Full `tauri info` output
```text
bunx tauri info
[✔] Environment
- OS: Mac OS 14.5.0 X64
✔ Xcode Command Line Tools: installed
✔ rustc: 1.79.0 (129f3b996 2024-06-10)
✔ cargo: 1.79.0 (ffa9cf99a 2024-06-03)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 20.15.1
- npm: 10.7.0
- bun: 1.1.18
[-] Packages
- tauri [RUST]: 2.0.0-rc.8
- tauri-build [RUST]: 2.0.0-rc.7
- wry [RUST]: 0.42.0
- tao [RUST]: 0.29.1
- @tauri-apps/api [NPM]: 2.0.0-rc.3
- @tauri-apps/cli [NPM]: 2.0.0-beta.23
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
- bundler: Vite
```
### Stack trace
```text
Comment too long to paste here
https://gist.github.com/thewh1teagle/cc78a89afe1d6496e41899bb6aab79b9
```
### Additional context
_No response_ | type: bug,status: needs triage | medium | Critical |
2,494,634,754 | flutter | [web] ImageFiltered rendering differs on web | ### Steps to reproduce
Run below code on web and Android or iOS.
### Expected results
Rendering should be similar.
### Actual results
Web rendering is significantly paler.
### Code sample
<details open><summary>Color filter:</summary>
```dart
import 'package:flutter/material.dart';
const ColorFilter grayScaleFilter = ColorFilter.matrix(<double>[
0.2126,
0.7152,
0.0722,
0,
0,
0.2126,
0.7152,
0.0722,
0,
0,
0.2126,
0.7152,
0.0722,
0,
0,
0,
0,
0,
0.25,
0,
]);
```
<summary>Usage:</summary>
```dart
ImageFiltered(
imageFilter: grayScaleFilter,
child: Image(...),
);
```
</details>
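For reference, each of the first three rows of the matrix above maps the input RGBA to the Rec. 709 luma (0.2126 R + 0.7152 G + 0.0722 B), and the fourth row scales alpha by 0.25. A quick numeric check in plain Python (independent of Flutter's renderer) shows what any backend should compute per pixel:

```python
# Apply the 4x5 color matrix from the issue to a single RGBA pixel
# (channel values in 0..1; the bias column is 0 here, so it has no effect).
MATRIX = [
    [0.2126, 0.7152, 0.0722, 0, 0],  # R out = Rec. 709 luma
    [0.2126, 0.7152, 0.0722, 0, 0],  # G out = same luma -> grayscale
    [0.2126, 0.7152, 0.0722, 0, 0],  # B out
    [0, 0, 0, 0.25, 0],              # A out = 0.25 * A
]

def apply_color_matrix(rgba):
    r, g, b, a = rgba
    return tuple(
        row[0] * r + row[1] * g + row[2] * b + row[3] * a + row[4]
        for row in MATRIX
    )

print(apply_color_matrix((1.0, 1.0, 1.0, 1.0)))  # ~ (1.0, 1.0, 1.0, 0.25)
```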
### Screenshots or Video


### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.1, on Ubuntu 24.04 LTS 6.8.0-105041-tuxedo, locale en_US.UTF-8)
• Flutter version 3.24.1 on channel stable at /home/geoffrey/snap/flutter/common/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 5874a72aa4 (9 days ago), 2024-08-20 16:46:00 -0500
• Engine revision c9b9d5780d
• Dart version 3.5.1
• DevTools version 2.37.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /home/geoffrey/Android/Sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /snap/android-studio/161/jbr/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
• All Android licenses accepted.
[✓] Chrome - develop for the web
• Chrome at google-chrome
[✓] Linux toolchain - develop for Linux desktop
• clang version 10.0.0-4ubuntu1
• cmake version 3.16.3
• ninja version 1.10.0
• pkg-config version 0.29.1
[✓] Android Studio (version 2024.1)
• Android Studio at /snap/android-studio/161
• Flutter plugin version 81.0.2
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
[✓] VS Code (version 1.91.1)
• VS Code at /usr/share/code
• Flutter extension can be installed from:
🔨 https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[✓] Connected device (3 available)
• ONEPLUS A6013 (mobile) • 0fabd948 • android-arm64 • Android 11 (API 30)
• Linux (desktop) • linux • linux-x64 • Ubuntu 24.04 LTS 6.8.0-105041-tuxedo
• Chrome (web) • chrome • web-javascript • Google Chrome 127.0.6533.99
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| engine,platform-web,a: images,e: web_canvaskit,has reproducible steps,P2,team-web,triaged-web,found in release: 3.24,found in release: 3.25 | low | Minor |
2,494,753,481 | pytorch | Negative, `int`, and `bool` values for the `label_smoothing` argument of `nn.CrossEntropyLoss()` are accepted, contradicting the error messages | ### 🐛 Describe the bug
Setting the value `3.` for the `label_smoothing` argument of [nn.CrossEntropyLoss()](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html) produces the error message shown below:
```python
import torch
from torch import nn
tensor1 = torch.tensor([[0.4, 0.8, 0.6], [0.3, 0.0, 0.5]])
tensor2 = torch.tensor([[0.2, 0.9, 0.4], [0.1, 0.8, 0.5]])
cel = nn.CrossEntropyLoss(label_smoothing=3.)
cel(input=tensor1, target=tensor2) # Error
```
> RuntimeError: label_smoothing must be between 0.0 and 1.0. Got: 3
But setting the negative value `-3.` for the `label_smoothing` argument works, contradicting the error message above, as shown below (`-3.` doesn't affect the result):
```python
import torch
from torch import nn
tensor1 = torch.tensor([[0.4, 0.8, 0.6], [0.3, 0.0, 0.5]])
tensor2 = torch.tensor([[0.2, 0.9, 0.4], [0.1, 0.8, 0.5]])
cel = nn.CrossEntropyLoss(label_smoothing=-3.)
cel(input=tensor1, target=tensor2)
# tensor(1.5941)
```
And setting `1.+0.j` (`complex`) for the `label_smoothing` argument produces the error message shown below:
```python
import torch
from torch import nn
tensor1 = torch.tensor([[0.4, 0.8, 0.6], [0.3, 0.0, 0.5]])
tensor2 = torch.tensor([[0.2, 0.9, 0.4], [0.1, 0.8, 0.5]])
cel = nn.CrossEntropyLoss(label_smoothing=1.+0.j)
cel(input=tensor1, target=tensor2) # Error
```
> TypeError: cross_entropy_loss(): argument 'label_smoothing' (position 6) must be float, not complex
But setting `1` (`int`) or `True` (`bool`) for the `label_smoothing` argument works, contradicting the error message above, as shown below:
```python
import torch
from torch import nn
tensor1 = torch.tensor([[0.4, 0.8, 0.6], [0.3, 0.0, 0.5]])
tensor2 = torch.tensor([[0.2, 0.9, 0.4], [0.1, 0.8, 0.5]])
cel = nn.CrossEntropyLoss(label_smoothing=1)
cel(input=tensor1, target=tensor2)
# tensor(1.1156)
cel = nn.CrossEntropyLoss(label_smoothing=True)
cel(input=tensor1, target=tensor2)
# tensor(1.1156)
```
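A check that actually enforces what the two error messages promise might look like the following — a hypothetical helper, not part of PyTorch, written for one strict reading of the messages (reject non-real and `bool` values as the `TypeError` claims, coerce `int`, and enforce the documented `[0.0, 1.0]` range, which the current check misses for negative values):

```python
def validate_label_smoothing(value):
    """Validate label_smoothing the way the error messages describe it."""
    # `bool` is a subclass of `int` in Python, so reject it explicitly;
    # complex and other non-real types fail the isinstance check.
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        raise TypeError(
            "label_smoothing must be float, not " + type(value).__name__)
    value = float(value)
    if not 0.0 <= value <= 1.0:
        raise RuntimeError(
            f"label_smoothing must be between 0.0 and 1.0. Got: {value:g}")
    return value
```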
### Versions
```python
import torch
torch.__version__ # 2.4.0+cu121
``` | module: loss,triaged | low | Critical |
2,494,913,830 | flutter | Support WasmGC on Safari | ### Steps to reproduce
WasmGC is now officially supported on Safari Technology Preview 202: https://developer.apple.com/documentation/safari-technology-preview-release-notes/stp-release-202#Web-Assembly
1. Use Safari Technology Preview 202
2. Go to https://flutterweb-wasm.web.app
3. Open the dev tools and see "main.dart.js" being loaded
### Expected results
main.dart.wasm being loaded
### Actual results
main.dart.js was loaded
### Code sample
N/A
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
<img width="862" alt="Screenshot 2024-08-29 at 10 42 12 AM" src="https://github.com/user-attachments/assets/a9d16fae-bb95-4942-beac-2c59815bf58e">
</details>
### Logs
N/A
### Flutter Doctor output
N/A | engine,platform-web,P2,e: wasm,e: web_skwasm,team-web,triaged-web | low | Major |
2,494,943,364 | pytorch | A `target` tensor with values outside `[0, 1]` passed to `nn.CrossEntropyLoss()` as class probabilities is accepted, contradicting the doc | ### 🐛 Describe the bug
[The doc](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html) of `nn.CrossEntropyLoss()` says that a `target` argument containing class probabilities should have values in `[0, 1]`:
> `Target`: ... If containing class probabilities, same shape as the input and each value should be between `[0,1]`.
But passing a tensor with values outside `[0, 1]` as the `target` argument with class probabilities works, contradicting the doc, as shown below:
```python
import torch
from torch import nn
input = torch.tensor([[0.4, 0.8, 0.6], [0.3, 0.0, 0.5]])
target = torch.tensor([[6.2, -7.9, 3.4], [2.1, 8.6, -12.5]]) # Here
cel = nn.CrossEntropyLoss()
cel(input=input, target=target) # tensor(3.9178)
```
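A likely reason no error is raised is that the soft-target cross-entropy formula, `-(target * log_softmax(input)).sum(dim=-1)` averaged over the batch, is numerically well-defined for any real-valued target; the `[0, 1]` range is documented but not checked. A plain-Python reimplementation (a sketch of the formula, not PyTorch's actual kernel) reproduces the reported number:

```python
import math

def soft_target_cross_entropy(inputs, targets):
    """Mean over the batch of -(target . log_softmax(input)).
    Nothing in the formula requires targets to lie in [0, 1]."""
    total = 0.0
    for logits, probs in zip(inputs, targets):
        log_z = math.log(sum(math.exp(x) for x in logits))
        log_softmax = [x - log_z for x in logits]
        total += -sum(p * ls for p, ls in zip(probs, log_softmax))
    return total / len(inputs)

loss = soft_target_cross_entropy(
    [[0.4, 0.8, 0.6], [0.3, 0.0, 0.5]],
    [[6.2, -7.9, 3.4], [2.1, 8.6, -12.5]],
)
print(round(loss, 4))  # matches the reported tensor(3.9178)
```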
### Versions
```python
import torch
torch.__version__ # 2.4.0+cu121
```
cc @svekars @brycebortree @tstatler | module: docs,module: loss,triaged | low | Critical |
2,494,994,451 | vscode | Add jsdoc to all classes extending ViewPart giving a basic explanation of what they are | While reviewing view parts, I found it hard differentiating some of them like all the different decoration types. It would be nice to have a description that explains them and maybe points to a common example in the case of decorations
- [ ] blockDecorations @hediet
- [x] editorScrollbar
- [x] lines
- [ ] marginDecorations @hediet
- [x] rulers
- [x] viewZones
- [x] contentWidgets
- [ ] glyphMargin @hediet
- [ ] linesDecorations @hediet
- [x] minimap
- [ ] scrollDecoration @ulugbekna @alexdima
- [x] whitespace
- [x] currentLineHighlight
- [x] indentGuides
- [x] linesGpu
- [x] overlayWidgets
- [x] selections
- [ ] decorations @hediet
- [x] lineNumbers
- [x] margin
- [x] overviewRuler
- [x] viewCursors | debt | low | Major |
2,494,994,595 | kubernetes | add to kubectl explain config APIs | ### What would you like to be added?
At the moment `kubectl explain` only allows explaining the resources returned by `kubectl api-resources`.
I'd like the configuration APIs to be supported by `kubectl explain` as well.
### Why is this needed?
When creating kubelet configuration files or other external resources like kubeadm config files or pod-security files, it would be easier to use `kubectl explain` than to go to the Kubernetes docs and read the documentation provided there: https://kubernetes.io/docs/reference/config-api/ | kind/feature,sig/cli,lifecycle/rotten,needs-triage | low | Major |
2,495,024,096 | vscode | Editor GPU: Review all editor.* settings and ensure they are tracked | null | plan-item,editor-gpu | low | Minor |
2,495,025,848 | vscode | Editor GPU: Reduce memory use in the atlas glyph storage buffers | The buffer that stores all the texture atlas information is fixed to `FloatsPerEntry*5000`. This should be more dynamically sized to reduce memory usage.
https://github.com/microsoft/vscode/blob/bd21f3c8f268b741edac137fc109d75a555c0541/src/vs/editor/browser/viewParts/linesGpu/viewLinesGpu.ts#L291-L292 | plan-item,perf,editor-gpu | low | Minor |
2,495,029,248 | vscode | Editor GPU: Support sub-pixel AA | We draw to a clear canvas right now:
https://github.com/microsoft/vscode/blob/bd21f3c8f268b741edac137fc109d75a555c0541/src/vs/editor/browser/gpu/raster/glyphRasterizer.ts#L75
xterm.js works by drawing to a solid background, which then gives us 0xff alpha values with subpixel AA working. The complication is that our shader then needs to intelligently merge these values, otherwise rendering will look strange when glyphs overlap.
2,495,034,719 | vscode | Editor GPU: Make first glyph in first page 0x0 without wasting space in the texture | There's a hack currently to ensure that zeroed out cells in the cell buffer do not get rendered using an actual glyph. This currently wastes the entire first slab since it's reserved for 0x0 glyphs. We should add a special glyph without touching the texture that just points at a 0 width, 0 height rectangle.
https://github.com/microsoft/vscode/blob/bd21f3c8f268b741edac137fc109d75a555c0541/src/vs/editor/browser/gpu/atlas/textureAtlas.ts#L69-L72 | bug,editor-gpu | low | Minor |
2,495,043,623 | go | runtime: mapassign_fast64ptr crashes in malloc | This stack `cysh5A` was [reported by telemetry](https://storage.googleapis.com/prod-telemetry-merged/2024-08-28.json):
```
crash/crash
runtime.systemstack_switch:+4
runtime.persistentalloc:+2
runtime.newBucket:+11
runtime.stkbucket:+53
runtime.mProf_Malloc:+13
runtime.profilealloc:+6
runtime.mallocgc:+270
runtime.newarray:+8
runtime.makeBucketArray:+18
runtime.hashGrow:+10
runtime.mapassign_fast64ptr:+64
go/types.(*Checker).recordSelection:+4
go/types.(*Checker).selector:+232
go/types.(*Checker).exprInternal:+308
go/types.(*Checker).rawExpr:+10
go/types.(*Checker).exprOrType:+1
```
```
golang.org/x/tools/gopls@v0.16.1 go1.23.0 windows/amd64 neovim (1)
```
Issue created by golang.org/x/tools/gopls/internal/telemetry/cmd/stacks.
| NeedsInvestigation,gopls,Tools,compiler/runtime,gopls/telemetry-wins | low | Critical |
2,495,053,014 | vscode | Editor GPU: Add device pixel support to ElementSizeObserver | We have a `ElementSizeObserver` helper already, we should merge the capabilities of `observeDevicePixelDimensions` (from xterm.js) into it.
https://github.com/microsoft/vscode/blob/bd21f3c8f268b741edac137fc109d75a555c0541/src/vs/editor/browser/gpu/gpuUtils.ts#L24-L25 | debt,editor-gpu | low | Minor |
2,495,053,090 | vscode | Editor GPU: Consider changing the strategy for allocating glyphs into slabs | Slabs right now are reserved for glyphs of the exact width and height of the slab. This could be wasteful and will likely just be more so as the device pixel ratio increases. We should consider using a different strategy as called out here:
https://github.com/microsoft/vscode/blob/bd21f3c8f268b741edac137fc109d75a555c0541/src/vs/editor/browser/gpu/atlas/textureAtlasSlabAllocator.ts#L98-L120 | plan-item,perf,editor-gpu | low | Minor |
2,495,053,158 | vscode | Editor GPU: Review scaling of open region allocator code | Does this perform ok when there are many slabs?
https://github.com/microsoft/vscode/blob/bd21f3c8f268b741edac137fc109d75a555c0541/src/vs/editor/browser/gpu/atlas/textureAtlasSlabAllocator.ts#L136-L201 | plan-item,perf,editor-gpu | low | Minor |
2,495,053,255 | vscode | Editor GPU: Reconsider using measureText for the bounding box calculation | Earlier I did some profiling and found that measureText was slower than expected. We should verify this, but still probably go that route over the manual bounding box calculation, as we can then ensure the glyph will be drawn within the bounds of the glyph rasterizer canvas (which is currently sized to 3x the font size).
https://github.com/microsoft/vscode/blob/bd21f3c8f268b741edac137fc109d75a555c0541/src/vs/editor/browser/gpu/raster/glyphRasterizer.ts#L96-L147 | plan-item,perf,editor-gpu | low | Major |
2,495,053,367 | vscode | Editor GPU: Support custom glyphs | The terminal has a feature that allows box drawing and block characters to be rendered pixel perfect instead of being pulled from the font. The biggest win here is that boxes in the editor will connect perfectly, even if line height or letter spacing changes.
Current DOM-based using font 12px Hack:

Compare with terminal's rendering:

| feature-request,editor-gpu | low | Minor |
2,495,053,420 | vscode | Editor GPU: Support ligatures (editor.fontLigatures) | Ligatures aren't supported out of the box when using canvas. [The approach xterm.js uses](https://github.com/xtermjs/xterm.js/tree/master/addons/addon-ligatures) (which is not yet in vscode due to some bundling problems) is to read the font file to get the ligatures (requires a accepting permissions on web) and render those together using the concept of a "character joiner". | feature-request,editor-gpu | low | Minor |
2,495,053,489 | vscode | Editor GPU: Support editor.fontVariations | Is this just a matter of passing it to the rasterizer canvas context? | plan-item,editor-gpu | low | Minor |
2,495,053,562 | vscode | Editor GPU: Support editor.fontWeight | Test case: Make sure Cascadia code with font weight 350 works. See https://github.com/microsoft/vscode/issues/221145#issuecomment-2587698553 | plan-item,editor-gpu | low | Minor |
2,495,061,988 | vscode | Editor GPU: Support editor.roundedSelection | null | plan-item,editor-gpu | low | Minor |
2,495,069,523 | kubernetes | Cannot mount the same PVC multiple times on one Pod | ### What happened?
When I mount the same PVC multiple times in one Pod, the Pod gets stuck in the `ContainerCreating` state.
The same issue is described here: https://stackoverflow.com/questions/65931457/why-cant-i-mount-the-same-pvc-twice-with-different-subpaths-to-single-pod
### What did you expect to happen?
The mount of volumes should succeed, and the Pod should be created successfully.
Or, if Kubernetes treats this as a mis-configuration, a clear error message should be returned.
### How can we reproduce it (as minimally and precisely as possible)?
I am testing on a GKE cluster. The same PVC is used by one Pod twice.
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: standard
---
apiVersion: v1
kind: Pod
metadata:
name: my-pod
spec:
containers:
- name: my-container
image: nginx
volumeMounts:
- name: my-pvc
mountPath: /data/path1
- name: my-pvc-2
mountPath: /data/path2
volumes:
- name: my-pvc
persistentVolumeClaim:
claimName: my-claim
- name: my-pvc-2
persistentVolumeClaim:
claimName: my-claim
```
### Anything else we need to know?
We have another similar issue documented: https://github.com/GoogleCloudPlatform/gcs-fuse-csi-driver/issues/48
According to the kubelet code: https://github.com/kubernetes/kubernetes/blob/8f15859afc9cfaeb05d4915ffa204d84da512094/pkg/kubelet/volumemanager/cache/desired_state_of_world.go#L296-L298
> For non-attachable and non-device-mountable volumes, generate a unique name based on the pod namespace and name and the name of the volume within the pod.
Since the user specifies the same PVC twice in the Pod spec, the two volume entries are bound to the same PV, meaning the `volumeHandle` is identical for both. Therefore, the two volumes are treated as the same volume. After kubelet mounts one of them, the other is considered already mounted. As a result, the Pod gets stuck in the volume mount stage.
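For reference, a workaround consistent with this kubelet behavior is to declare the claim as a single volume and mount that one volume name at both paths (a sketch adapted from the repro manifest above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: my-container
      image: nginx
      volumeMounts:
        # Mount the *same* volume entry twice instead of declaring two
        # volume entries backed by the same claim.
        - name: my-pvc
          mountPath: /data/path1
        - name: my-pvc
          mountPath: /data/path2
  volumes:
    - name: my-pvc
      persistentVolumeClaim:
        claimName: my-claim
```

This avoids the duplicate-volume-name collision because kubelet sees only one volume to mount.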
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.29.7
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.6-gke.1326000
```
Note that this issue is not limited to any specific k8s versions.
</details>
### Cloud provider
<details>
None
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/storage,triage/accepted | low | Critical |
2,495,139,050 | neovim | treesitter: undoing confirmed substitution does not trigger reparse | ### Problem
Given the following lua file:
```lua
local foo = {
attribute1 = false,
attribute2 = false,
}
```
If you do `%s/attribute/attr/gc`, confirm both changes and then press `u`ndo, the tree sitter parsing gets out of sync.

cc @gpanders and @clason
### Steps to reproduce
1. `nvim --clean`
2. `:hi @boolean guifg=red`
3. Paste in these file contents:
```lua
local foo = {
attribute1 = false,
attribute2 = false,
}
```
4. `:%s/attribute/attr/gc`
5. `y` `y`
6. `u`
### Expected behavior
`false` should be formatted in red and `attribute2` formatted in grey.
### Neovim version (nvim -v)
0.10.1
### Vim (not Nvim) behaves the same?
n/a
### Operating system/version
macos 14.6
### Terminal name/version
ghostty fcb8b040
### $TERM environment variable
xterm-ghostty
### Installation
homebrew | bug,highlight,treesitter | low | Minor |
2,495,139,071 | vscode | Cell diff has 'A' as its label; it should be styled differently since it is a modification SCM status | Type: <b>Bug</b>
Cell Diff has 'A' as label along with cell number.


VS Code version: Code - Insiders 1.93.0-insider (ae45c9d4b0f71d53151edc6d18be09107903c229, 2024-08-29T05:04:03.909Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-1065G7 CPU @ 1.30GHz (8 x 1498)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.60GB (16.67GB free)|
|Process Argv|--log info --log ms-python.python=info --crash-reporter-id 4fb1ebc1-cf4c-4880-a88a-47738ec3768d|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (22)</summary>
Extension|Author (truncated)|Version
---|---|---
tsl-problem-matcher|amo|0.6.2
ruff|cha|2024.42.0
vscode-eslint|dba|3.0.10
gitlens|eam|15.3.1
EditorConfig|Edi|0.16.4
prettier-vscode|esb|11.0.0
copilot|Git|1.223.1074
copilot-chat|Git|0.20.2024082901
vscode-github-actions|git|0.26.3
vscode-pull-request-github|Git|0.95.2024082904
copilot-for-security|Mic|0.3.1
vscode-language-pack-de|MS-|1.93.2024082809
debugpy|ms-|2024.10.0
python|ms-|2024.13.2024082701
vscode-pylance|ms-|2024.8.2
jupyter|ms-|2024.8.2024081201
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.19
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-wsl|ms-|0.88.2
code-spell-checker|str|3.0.1
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vsc_aacf:30263846
vscod805cf:30301675
vsaa593cf:30376535
py29gd2263:31024238
c4g48928:30535728
vscrpc:30624061
a9j8j154:30646983
962ge761:30841072
pythongtdpath:30726887
welcomedialog:30812478
pythonnoceb:30776497
asynctok:30898717
dsvsc014:30777825
dsvsc015:30821418
pythonregdiag2:30926734
pythonmypyd1:30859725
h48ei257:31000450
pythontbext0:30879054
accentitlementst:30870582
dsvsc016:30879898
dsvsc017:30880771
dsvsc018:30880772
cppperfnew:30980852
pythonait:30973460
0ee40948:31013168
a69g1124:31018687
dvdeprecation:31040973
dwnewjupytercf:31046870
newcmakeconfigv2:31071590
impr_priority:31057980
nativerepl1:31104042
refactort:31084545
pythonrstrctxt:31093868
flighttreat:31119334
wkspc-onlycs-c:31111717
nativeloc1:31118317
wkspc-ranged-c:31125598
jh802675:31111929
notreesitter:31116712
e80f6927:31120813
ei213698:31121563
aajjf12562:31125793
```
</details>
<!-- generated by issue reporter --> | feature-request,multi-diff-editor | low | Critical |
2,495,180,792 | vscode | Notebook scrolls to reveal partially hidden cell when uncommenting cells | 1. Select several cells in a row
2. Comment them with Ctrl+/
3. Scroll down so that the first cell is partially outside viewport
4. Uncomment them with Ctrl+/
5. :bug: notebook scrolls to top | bug,notebook-layout,notebook-cell-editor | low | Critical |
2,495,254,025 | terminal | Incorrect french translation | ### Windows Terminal version
1.20.11781.0
### Windows build number
10.0.19045.4780
### Other Software
_No response_
### Steps to reproduce
Right-click when creating a new tab on a terminal on a Windows in French version
### Expected Behavior
The displayed text is correct French: `En tant qu'administrateur`
### Actual Behavior
The displayed text contains the incorrect "En temps qu'administrateur"
https://github.com/microsoft/terminal/blob/93d592bb4156e85f66f34a57ee7f707cc7ba0f15/src/cascadia/TerminalApp/Resources/fr-FR/Resources.resw#L858
It should be instead:
```
<value>Exécuter en tant qu'administrateur (restreint)</value>
``` | Issue-Bug,Product-Terminal,Area-Localization | low | Minor |
2,495,315,339 | vscode | Support for experimental settings | We currently have a very unstructured approach to settings related to experimental features. A few examples:
- `editor.experimental.treeSitterTelemetry`
- `editor.experimentalWhitespaceRendering`
- `accessibility.signalOptions.experimental.delays.warningAtPosition`
- `C_Cpp.experimentalFeatures`
- `timeline.pageOnScroll`, its description starts with `Experimental.`
This is problematic in a few key ways:
- There's no deterministic way to discover all experimental settings
- There's no clear way of knowing whether a specific setting is experimental
- There's no easy way to mark a setting as non-experimental; one must migrate
We should:
1. Support 1st party labeling of settings to be experimental
2. Surface this labeling in the settings UI
3. Support an easy `@experimental` query in the settings UI
Optionally:
1. Migrate all existing experimental settings to the new infrastructure
2. Add eslint rules preventing from using `*experiment*` in the setting name | feature-request,config,settings-editor | low | Major |
2,495,343,497 | rust | Invalid suggestion when using `serde` `serialize_with` | ### Code
```Rust
use serde::{Serialize, Serializer};
fn f<T, S: Serializer>(_: &(), _: S) -> Result<<S as Serializer>::Ok, <S as Serializer>::Error> {
todo!()
}
#[derive(Serialize)]
pub struct Matrix {
#[serde(serialize_with = "f")]
matrix: (),
}
```
### Current output
```Shell
error[E0282]: type annotations needed
--> src/lib.rs:9:30
|
9 | #[serde(serialize_with = "f")]
| ^^^ cannot infer type of the type parameter `T` declared on the function `f`
|
help: consider specifying the generic arguments
|
9 | #[serde(serialize_with = "f"::<_, __S>)]
| ++++++++++
```
### Desired output
```Shell
error[E0282]: type annotations needed
--> src/lib.rs:9:30
|
9 | #[serde(serialize_with = "f")]
| ^^^ cannot infer type of the type parameter `T` declared on the function `f`
```
### Rationale and extra context
Inspired by #120922. The given suggestion is invalid syntax.
### Other cases
_No response_
### Rust Version
```Shell
rustc 1.82.0 playground
```
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Critical |
2,495,345,419 | electron | [Bug]: Enabling logging on Windows writes log files to numbered files in the current directory | ### Preflight Checklist
- [X] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [X] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [X] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
30.4.0
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 11 Home version 24H2
### What arch are you using?
x64
### Last Known Working Electron version
29.0.0
### Expected Behavior
Enabling logging should only write to the specified log file. In our Electron-based app we set `--enable-logging=file` and `--log-file=<resolved_file_path>` from the main process (as in the attached Fiddle). For versions of Electron prior to v30 this only wrote to the log file which was specified.
### Actual Behavior
Starting with Electron v30 we noticed that using the same configuration as above resulted in log files being written to the current directory from which Electron was run. For example, if we run Electron from `C:\Users\username\repo` then we would get numbered files in that directory with names like `1234`, `1111`, etc. Omitting the `--log-file` option also reproduces the issue. When the app is run from a desktop shortcut the files will be written to the desktop.
### Testcase Gist URL
https://gist.github.com/derekcicerone/69a9eae69fd731c786ec157389bcf5f6
### Additional Information
We were able to track this issue back to https://github.com/chromium/chromium/commit/9d2b795f72832aa7216d7c84bdbf5d2358346548. Reverting that commit fixes the issue for Electron v30 but in v31+ there are merge conflicts.
When running the Electron Fiddle the extra log files will be written to the Fiddle temp directory. In this example the file name was `2756`:
 | platform/windows,bug :beetle:,has-repro-gist,30-x-y | low | Critical |
2,495,368,050 | godot | Screen not clearing in transparent background | ### Tested versions
v4.3.beta3.official [82cedc83c]
### System information
Godot v4.3.beta3 - Windows 10.0.22631 - GLES3 (Compatibility) - NVIDIA GeForce RTX 2060 SUPER (NVIDIA; 31.0.15.5176) - Intel(R) Core(TM) i5-10400 CPU @ 2.90GHz (12 Threads)
### Issue description
With a transparent background, the screen doesn't clear

### Steps to reproduce
gl_compatibility
WorldEnvironment:background_mode: canvas, glow_enabled = true
rendering/viewport/transparent_background = true
rendering/viewport/hdr_2d = true
### Minimal reproduction project (MRP)
[background_test.zip](https://github.com/user-attachments/files/16804856/background_test.zip) | bug,topic:rendering,confirmed | low | Minor |
2,495,376,011 | go | cmd/compile: type parameter involving constraint with channels seems like it should be inferrable | ### Go version
go1.23 and go tip
### Output of `go env` in your module/workspace:
```shell
n/a, using go.dev/play
```
### What did you do?
https://go.dev/play/p/-psvliJDE_j
```go
package main
import "fmt"
// Constraint to allow function to operate on a channel,
// regardless of the channel's directionality.
type channelConstraint[T any] interface {
chan T | <-chan T | chan<- T
}
func main() {
var sendOnlyCh chan<- string = make(chan string)
printCap(sendOnlyCh)
// Needs to be:
printCap[string](sendOnlyCh)
var receiveOnlyCh <-chan int = make(chan int, 1)
printCap(receiveOnlyCh)
// Needs to be:
printCap[int](receiveOnlyCh)
bidiCh := make(chan any)
printCap(bidiCh)
// Needs to be:
printCap[any](bidiCh)
}
// printCap prints the capacity of ch,
// regardless of ch's directionality.
func printCap[T any, C channelConstraint[T]](ch C) {
fmt.Println(cap(ch))
}
```
I want to write a small utility that takes a specific action that is determined by a channel's length and/or capacity. In this codebase, there is a general mix of directional and bidirectional channels. I thought I would be able to write a constraint such that my helper can accept a channel of any directionality. And I can indeed write that constraint, but when I call a function with that constraint, I have to provide the channel's element type as a type parameter in order to get the program to compile.
I am surprised that I have to provide the channel's element type, but maybe I am missing something about constraints or generics.
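For comparison, inference does succeed when the element type appears directly in the parameter type. A hedged workaround sketch (my own, not an official recommendation): give up the single constraint-based helper and write one helper per directionality, so `T` is inferable at every call site.

```go
package main

import "fmt"

// capRecv and capSend abandon the single channelConstraint helper and
// mention T directly in the parameter type, so T can be inferred.
func capRecv[T any](ch <-chan T) int { return cap(ch) }
func capSend[T any](ch chan<- T) int { return cap(ch) }

func main() {
	var sendOnly chan<- string = make(chan string, 2)
	var recvOnly <-chan int = make(chan int, 1)
	// T is inferred in both calls; no explicit [string] / [int] needed.
	fmt.Println(capSend(sendOnly), capRecv(recvOnly)) // prints "2 1"
}
```

The cost is duplication (and a third helper for bidirectional channels if you want to be explicit), which is exactly what the constraint was meant to avoid.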
### What did you see happen?
> `./prog.go:13:10: in call to printCap, cannot infer T (prog.go:28:15)`
### What did you expect to see?
I expected that `printCap(ch)` would compile; that I would not have to write `printCap[int](ch)`.
I tried searching the issues for variations of "constraints", "channels", and "direction" or "directionality" but I was not able to find any issues that looked similar. | TypeInference,compiler/runtime | low | Major |
2,495,438,616 | yt-dlp | Cannot download Twitch VOD with initialization fragments after media fragments | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
United States
### Provide a description that is worded well enough to be understood
When downloading https://www.twitch.tv/videos/2223719525 in any format other than 1440p60 (Source) or Audio_Only, yt-dlp fails to extract with `ERROR: Initialization fragment found after media fragments, unable to download`.
Formats table:
```
[info] Available formats for v2223719525:
ID EXT RESOLUTION FPS │ FILESIZE TBR PROTO │ VCODEC ACODEC ABR MORE INFO
────────────────────────────────────────────────────────────────────────────────────────────────
Audio_Only mp4 audio only │ ~192.42MiB 201k m3u8 │ audio only mp4a.40.2 201k
480p mp4 854x480 30 │ ~ 1.13GiB 1206k m3u8 │ avc1.64001F mp4a.40.2
720p60 mp4 1280x720 60 │ ~ 3.49GiB 3729k m3u8 │ avc1.640020 mp4a.40.2
1080p60 mp4 1920x1080 60 │ ~ 5.83GiB 6231k m3u8 │ hvc1.1.2.L123 mp4a.40.2
1440p60 mp4 2560x1440 60 │ ~ 10.52GiB 11240k m3u8 │ av01.0.12M.08 mp4a.40.2 Source
```
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
-------------------
Attempt 1 - 480p30
-------------------
[debug] Command-line config: ['https://www.twitch.tv/videos/2223719525', '-f', 'worst', '-vU']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.08.26.232811 from yt-dlp/yt-dlp-nightly-builds [41be32e78] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-13.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1831 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.08.26.232811 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.08.26.232811 from yt-dlp/yt-dlp-nightly-builds)
[twitch:vod] Extracting URL: https://www.twitch.tv/videos/2223719525
[twitch:vod] 2223719525: Downloading stream metadata GraphQL
[twitch:vod] 2223719525: Downloading video access token GraphQL
[twitch:vod] 2223719525: Downloading m3u8 information
[twitch:vod] 2223719525: Downloading storyboard metadata JSON
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[info] v2223719525: Downloading 1 format(s): 480p
[debug] Invoking hlsnative downloader on "https://dgeft87wbj63p.cloudfront.net/09b88c14ef8300ea9fc4_r0dn3y_42737372968_1723598775/480p30/index-dvr.m3u8"
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 805
[download] Destination: [QSV AV1 1440p] Minecraft + Sable [v2223719525].mp4
[debug] File locking is not supported. Proceeding without locking
ERROR: Initialization fragment found after media fragments, unable to download
File "yt_dlp\__main__.py", line 17, in <module>
File "yt_dlp\__init__.py", line 1081, in main
File "yt_dlp\__init__.py", line 1071, in _real_main
File "yt_dlp\YoutubeDL.py", line 3607, in download
File "yt_dlp\YoutubeDL.py", line 3582, in wrapper
File "yt_dlp\YoutubeDL.py", line 1615, in extract_info
File "yt_dlp\YoutubeDL.py", line 1626, in wrapper
File "yt_dlp\YoutubeDL.py", line 1782, in __extract_info
File "yt_dlp\YoutubeDL.py", line 1841, in process_ie_result
File "yt_dlp\YoutubeDL.py", line 3015, in process_video_result
File "yt_dlp\YoutubeDL.py", line 179, in wrapper
File "yt_dlp\YoutubeDL.py", line 3483, in process_info
File "yt_dlp\YoutubeDL.py", line 3203, in dl
File "yt_dlp\downloader\common.py", line 466, in download
File "yt_dlp\downloader\hls.py", line 211, in real_download
File "yt_dlp\YoutubeDL.py", line 1092, in report_error
File "yt_dlp\YoutubeDL.py", line 1020, in trouble
-------------------
Attempt 2 - 1080p60
-------------------
[debug] Command-line config: ['https://www.twitch.tv/videos/2223719525', '-f', '1080p60', '-vU']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.08.26.232811 from yt-dlp/yt-dlp-nightly-builds [41be32e78] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-13.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1831 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.08.26.232811 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.08.26.232811 from yt-dlp/yt-dlp-nightly-builds)
[twitch:vod] Extracting URL: https://www.twitch.tv/videos/2223719525
[twitch:vod] 2223719525: Downloading stream metadata GraphQL
[twitch:vod] 2223719525: Downloading video access token GraphQL
[twitch:vod] 2223719525: Downloading m3u8 information
[twitch:vod] 2223719525: Downloading storyboard metadata JSON
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[info] v2223719525: Downloading 1 format(s): 1080p60
[debug] Invoking hlsnative downloader on "https://dgeft87wbj63p.cloudfront.net/09b88c14ef8300ea9fc4_r0dn3y_42737372968_1723598775/1080p60/index-dvr.m3u8"
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 805
[download] Destination: [QSV AV1 1440p] Minecraft + Sable [v2223719525].mp4
[debug] File locking is not supported. Proceeding without locking
ERROR: Initialization fragment found after media fragments, unable to download
File "yt_dlp\__main__.py", line 17, in <module>
File "yt_dlp\__init__.py", line 1081, in main
File "yt_dlp\__init__.py", line 1071, in _real_main
File "yt_dlp\YoutubeDL.py", line 3607, in download
File "yt_dlp\YoutubeDL.py", line 3582, in wrapper
File "yt_dlp\YoutubeDL.py", line 1615, in extract_info
File "yt_dlp\YoutubeDL.py", line 1626, in wrapper
File "yt_dlp\YoutubeDL.py", line 1782, in __extract_info
File "yt_dlp\YoutubeDL.py", line 1841, in process_ie_result
File "yt_dlp\YoutubeDL.py", line 3015, in process_video_result
File "yt_dlp\YoutubeDL.py", line 179, in wrapper
File "yt_dlp\YoutubeDL.py", line 3483, in process_info
File "yt_dlp\YoutubeDL.py", line 3203, in dl
File "yt_dlp\downloader\common.py", line 466, in download
File "yt_dlp\downloader\hls.py", line 211, in real_download
File "yt_dlp\YoutubeDL.py", line 1092, in report_error
File "yt_dlp\YoutubeDL.py", line 1020, in trouble
```
| site-bug,triage | low | Critical |
2,495,510,407 | godot | [Dotnet]Exporting a project in c# after switching aot can be problematic | ### Tested versions
v4.4.dev1.mono.official [28a72fa43]
### System information
Godot v4.4.dev1.mono - Arch Linux #1 ZEN SMP PREEMPT_DYNAMIC Tue, 20 Aug 2024 15:33:58 +0000 - Wayland - GLES3 (Compatibility) - Mesa Intel(R) UHD Graphics 630 (CFL GT2) - Intel(R) Core(TM) i5-9300H CPU @ 2.40GHz (8 Threads)
### Issue description
If AOT is not enabled for the first export of a C# project but is enabled for the second export, the C# data from the two exports will be mixed together. Under Linux this results in the second exported game's runtime actually using the C# assembly from the first export, not the AOT binary from the second export. (Windows seems to have a worse problem with this.)
### Steps to reproduce
1. Export the project and run it, the screen should display `Hello CSharp!`.
2. Modify `PublishAot` in `TestAot.csproj` to `true`.
3. Use the same export path as the first time, re-export and run. Ideally it should display `Hello CSharp Aot!`. Under Linux, it will actually display `Hello CSharp!`. I have not tested it under Windows.
(When the export option of `embed_build_outputs` is enabled, this problem will still occur even if the two export paths are inconsistent)
### Minimal reproduction project (MRP)
[test-aot.zip](https://github.com/user-attachments/files/16805517/test-aot.zip)
| enhancement,topic:dotnet,topic:export | low | Minor |
2,495,512,575 | TypeScript | Find all references in TS is off by 1 line in this specific case | Apologies for the slow reaction in https://github.com/microsoft/TypeScript/issues/57888. Creating a new issue because I can't reopen the existing issue.
I can still reproduce this in latest vscode Insiders (tried 2 times and reproduced 2 times).
Steps:
* clone the vscode repository
* `git checkout c2d75edf8a7`
* open `src/vs/editor/common/viewModel/viewModelImpl.ts`
* reload window
* while the window is loading do in the terminal `git checkout 3f0f51bde08 `
* find all references on `ViewModel` (line 45)
* observe that the references in `codeEditorWidget.ts` are off by one line
https://github.com/user-attachments/assets/6b247ff2-1a28-4cd4-8f85-35cffef861b1
```
Version: 1.93.0-insider
Commit: ae45c9d4b0f71d53151edc6d18be09107903c229
Date: 2024-08-29T05:04:03.909Z
Electron: 30.4.0
ElectronBuildId: 10073054
Chromium: 124.0.6367.243
Node.js: 20.15.1
V8: 12.4.254.20-electron.0
OS: Darwin arm64 23.6.0
```

| Bug,Help Wanted | low | Major |
2,495,516,610 | godot | New Vulkan errors in Godot 4.3 when rendering Billboard enabled AnimatedSprite3D on specific Android device | ### Tested versions
Reproducible in: 4.3.stable, **4.3.rc1**
Not reproducible in: **4.3.beta3**, 4.3.beta2, 4.3.beta1, 4.2.stable
### System information
Device: Moto G84, OS: Android 14, Renderer: Vulkan (Mobile Renderer), GPU: Adreno 619
### Issue description
After updating from Godot 4.2.stable to 4.3.stable, one of our developers experienced a significant performance drop on their device (Moto G84), resulting in a very low frame rate. This issue was not present in version 4.2.stable. Upon investigation, we found the following error messages repeatedly appearing in the debugger:
```
E 0:00:08:0275 render_pipeline_create: vkCreateGraphicsPipelines failed with error -13.
<C++ Error> Condition "err" is true. Returning: PipelineID()
<C++ Source> drivers/vulkan/rendering_device_driver_vulkan.cpp:4596 @ render_pipeline_create()
E 0:00:08:0275 render_pipeline_create: Condition "!pipeline.driver_id" is true. Returning: RID()
<C++ Source> servers/rendering/rendering_device.cpp:3317 @ render_pipeline_create()
E 0:00:08:0275 _generate_version: Condition "pipeline.is_null()" is true. Returning: RID()
<C++ Source> servers/rendering/renderer_rd/pipeline_cache_rd.cpp:61 @ _generate_version()
E 0:00:08:0275 draw_list_bind_render_pipeline: Parameter "pipeline" is null.
<C++ Source> servers/rendering/rendering_device.cpp:3840 @ draw_list_bind_render_pipeline()
E 0:00:08:0275 draw_list_draw: The vertex format used to create the pipeline does not match the vertex format bound.
<C++ Error> Condition "dl->validation.pipeline_vertex_format != dl->validation.vertex_format" is true.
<C++ Source> servers/rendering/rendering_device.cpp:4058 @ draw_list_draw()`
```
By testing previous versions of Godot, we found that this error occurs starting from version 4.3.rc1 but not in earlier versions, such as 4.3.beta3. We also detected that the issue specifically arises when rendering AnimatedSprite3D objects with Billboard enabled in our game.
Given the Moto G84's specifications, it should be able to handle our project without issues. So this could potentially be related to Vulkan compatibility with the device's GPU (Adreno 619).
### Steps to reproduce
Please note that this issue has only been observed on one specific device (Moto G84), and we are not entirely confident about the reproduction steps' general applicability. But it's been occurring consistently in our tries, even when building the project on different machines and project settings. Nevertheless, here are the steps to reproduce the issue:
1. Create a new scene in Godot.
2. Set this new scene as the Main Scene.
3. Add an AnimatedSprite3D node.
4. Set the AnimatedSprite3D's flag Billboard to "enabled"
5. Assign a new SpriteFrames resource to the AnimatedSprite3D. (it needs something to be rendered)
6. Add the Godot Icon as one of the frames in the SpriteFrames.
7. Add a Camera3D node to the scene.
8. Position the Camera3D so that the AnimatedSprite3D is visible.
9. Configure the project for Android export.
10. Run the project with Remote Debugging enabled and check for the error messages in the Debugger.
### Minimal reproduction project (MRP)
[bug_reproduction.zip](https://github.com/user-attachments/files/16805603/bug_reproduction.zip) | bug,platform:android,topic:rendering,regression,topic:3d | low | Critical |
2,495,520,208 | vscode | Flaky `Ternary Search Tree` test | https://dev.azure.com/vscode/VSCode/_build/results?buildId=130125&view=logs&j=f3dd96f0-7bf9-51c6-1e87-22c4cfa2ad69&t=cf063406-446c-522c-4fbd-8e958d7b189e
```
8800 passing (3m)
74 pending
1 failing
1) Ternary Search Tree
TernarySearchTree: Cannot read property '1' of undefined #138284 (random):
RangeError: Invalid array length
at Array.push (<anonymous>)
at TernarySearchTree._delete (http://localhost:33887/079953d7545e7ca308c82fce64738984/out/vs/base/common/ternarySearchTree.js:427:27)
at TernarySearchTree._delete (http://localhost:33887/079953d7545e7ca308c82fce64738984/out/vs/base/common/ternarySearchTree.js:466:30)
at TernarySearchTree.delete (http://localhost:33887/079953d7545e7ca308c82fce64738984/out/vs/base/common/ternarySearchTree.js:415:25)
at Context.<anonymous> (http://localhost:33887/079953d7545e7ca308c82fce64738984/out/vs/base/test/common/ternarySearchtree.test.js:732:31)
``` | bug,unit-test-failure | low | Critical |
2,495,524,856 | pytorch | Dynamo x autograd.Function x functorch transform: intermediate sliced tensor doesn't require grad, causing incorrect result | ### 🐛 Describe the bug
Let's use this small repro to demonstrate this issue:
```
import torch
import inspect
@torch.compile(backend="eager", fullgraph=True)
def fn(x):
x = torch.vmap(gn)(x)
return x + 1
def gn(x: torch.Tensor) -> torch.Tensor:
if x.requires_grad:
return x + 1
else:
return x - 1
x = torch.ones(2, 3, requires_grad=True)
print(fn(x))
```
The output:
```
tensor([[1., 1., 1.],
[1., 1., 1.]], grad_fn=<AddBackward0>)
```
This means the ```gn``` function goes with the ```else``` branch, because the intermediate sliced tensor doesn't require grad. However, if ```gn``` is an autograd function, then we would check whether the input tensor requires grad and decide how to proceed.
https://github.com/pytorch/pytorch/blob/b977abd5de0ea8c37f6100e7fc170830a07ff47b/torch/_dynamo/variables/misc.py#L652-L656
We need to revisit the behavior of how to compile autograd function regarding to ```requires_grad=True/False```.
### Versions
N/A
cc @ezyang @chauhang @penguinwu @zou3519 @Chillee @samdow @kshitij12345 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @rec | triaged,oncall: pt2,module: functorch,module: dynamo,dynamo-autograd-function,dynamo-functorch | low | Critical |
2,495,532,046 | ollama | Inconsistent API Behavior | ### What is the issue?
I'm calling the generate API as follows:
```
url = 'http://localhost:11434/api/generate'
data = {
"model": model_name,
"stream": False,
"options": {
"temperature": 0.2,
"top_p": 0.8,
"seed": 42,
"num_predict": 300,
},
"system": set_role()
}
response = requests.post(url, json=data).json()
```
Although I set the stream flag to false, sometimes I don't get the whole response in one package (done = False in the first received package).
The other problem is that sometimes, even when the package is final (done=true), I don't see all the expected additional information there, for example, prompt_eval_count is missing.
This problem also persists with the Python library. I’ve carefully checked the documentation, and I believe it might be some sort of bug.
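Until the root cause is found, one client-side mitigation is to treat every reply as potentially partial and merge chunks until one arrives with `done` set. This is only a sketch; the chunk shapes below are assumptions modeled on the fields mentioned above (`response`, `done`, `prompt_eval_count`), not the authoritative API contract:

```python
def accumulate_chunks(chunks):
    """Merge a sequence of /api/generate response dicts into one.

    Workaround sketch for responses that arrive split even with
    "stream": False: concatenate the partial "response" fields and
    keep the metadata from the final (done=True) chunk.
    """
    merged = {"response": "", "done": False}
    for chunk in chunks:
        merged["response"] += chunk.get("response", "")
        if chunk.get("done"):
            # the final chunk carries stats such as prompt_eval_count
            merged.update({k: v for k, v in chunk.items() if k != "response"})
    return merged

# Example with mocked chunks (field names assumed, not guaranteed):
parts = [
    {"response": "Hello, ", "done": False},
    {"response": "world!", "done": True, "prompt_eval_count": 12},
]
print(accumulate_chunks(parts))
```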
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.5 | bug | low | Critical |
2,495,560,402 | neovim | grepprg settings have unexpected behavior since 0.10 | ### Problem
Since the default grep program was switched to rg, it unfortunately appears that any `grepprg` values that explicitly call grep no longer work properly. I noticed this originally because I had the following line in one of my local .lvimrc files:
```
setlocal grepprg=grep\ -nIH\ --exclude=tags\ --exclude=tags.vim\ --exclude='*.o*'\ --exclude='Module.symvers'\ --exclude='.*.mod.cmd'\ --exclude='*.mod'
```
From what I can tell, the problem appears to be that when ripgrep is installed on the system (this is a machine I just set up; I wasn't even aware I had installed it), Neovim doesn't just use it as the default grep program: it also changes the default value of `grepformat`. As a result, it doesn't process the text output from `grep` correctly, leading to some confusing behavior where it suddenly seems as if the location list has stopped working. Some example screenshots:
(Before)

(After)

### Steps to reproduce
nvim --clean
:setlocal grepprg=grep\ -nIH
:lgrep [some arguments…]
### Expected behavior
I would expect that, between upgrades and unless documented otherwise, the behavior of `grepprg` should remain the same. Or at the very least, it should remain the same when the grep binary is explicitly mentioned in the command line…
Considering this bug has probably been around for a while, of course I don't think it'd be reasonable for me to ask for the behavior to change! But perhaps it should be documented in https://neovim.io/doc/user/news-0.10.html that `grepformat` also changes, and that this might change the behavior of custom `grepprg` settings? The page there does mention that the default grep program is changed, but I don't think this change in behavior is particularly obvious in a situation like mine, where it's been years since I last touched `grepprg`.
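In the meantime, a possible per-project mitigation is to pin `grepformat` alongside any grep-specific `grepprg`. This is only a sketch; the `grepformat` value below is Vim's traditional default for plain grep output, and should be verified against `:help 'grepformat'`:

```vim
" Pair a grep-style grepprg with grep-style output parsing; the value
" below is Vim's traditional default 'grepformat'.
setlocal grepprg=grep\ -nIH
setlocal grepformat=%f:%l:%m,%f:%l%m,%f\ \ %l%m
```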
### Neovim version (nvim -v)
0.10
### Vim (not Nvim) behaves the same?
no
### Operating system/version
Fedora Linux 40
### Terminal name/version
neovim-gtk (first screenshot)/tilix (second screenshot)
### $TERM environment variable
xterm-256color
### Installation
dnf (package manager) | documentation,defaults | low | Critical |
2,495,561,676 | PowerToys | Additional File Explorer add-on to selectively hide folders that are system folders | ### Description of the new feature / enhancement
I would love an add-on that lets me collapse or hide specific folders that I do not care to see. I would still want to know they are hidden: for example, you could take all these folders, selectively hide them, and group them into a new hidden virtual folder, which is not a real folder but just a collapsed representation of your hidden folders.
You could possibly even set it to wildcard folder names, for example all ".*" folders or folders starting with backup*
So this

Could be this

### Scenario when this would be used?
This would just save me eye strain and distance scrolling around windows and folders
Set it to wildcard all ".*" folder names
Folders starting with "backup-*"
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,495,601,030 | flutter | integration_test deprecation warning: 'windows' is deprecated: first deprecated in iOS 15.0 - Use UIWindowScene.windows on a relevant window scene instead | When targeting iOS 15 or later, integration_test has an analysis warning:
```
integration_test/IntegrationTestPlugin.m:76:55 'windows' is deprecated: first deprecated in iOS 15.0 - Use UIWindowScene.windows on a relevant window scene instead
```
https://github.com/flutter/flutter/blob/77a17873beaa12a3b7a8611dff1090edce3a707a/packages/integration_test/ios/integration_test/Sources/integration_test/IntegrationTestPlugin.m#L76-L77
To reproduce, open the [example app](https://github.com/flutter/flutter/tree/77a17873beaa12a3b7a8611dff1090edce3a707a/packages/integration_test/example) iOS project and set the iOS minimum target to iOS 15. | f: integration_test,P3,team-ios,triaged-ios | low | Minor |
2,495,623,518 | angular | docs: fix the broken links on the API pages. | ### Describe the problem that you experienced
The angular.dev toolchain generates warnings whenever it encounters broken links inside the API entries.
For example:
```
INFO: From Action packages/common/common_docs_html:
WARNING: ***@link Router*** is invalid, Router is unknown in this context
WARNING: ***@link * Router*** is invalid, * is unknown in this context
WARNING: ***@link Component*** is invalid, Component is unknown in this context
WARNING: ***@link Component*** is invalid, Component is unknown in this context
```
We'll make this an umbrella issue to keep track that there is work to do on this side, and we'll welcome doc contributions around this!
To run the docs locally, run `yarn docs`, and check the console for the remaining warnings | help wanted,good first issue,P3,area: docs-infra | low | Critical |
2,495,675,272 | vscode | Add debug window context menu contribution points | Forking https://github.com/microsoft/vscode/issues/200880 into this issue for context menus in the debug view.
https://github.com/microsoft/vscode/pull/212501 | feature-request,debug | low | Critical |
2,495,695,910 | PowerToys | Abysmal performance from SVG Thumbnail Provider | ### Microsoft PowerToys version
0.83.0
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
File Explorer: Thumbnail preview
### Steps to reproduce
- Clear your cached thumbnails using Windows' Disk Cleanup
- Open a folder with a large amount of SVG files (>100)
- Observe the speed of thumbnail creation
My bug report log: [PowerToysReport_2024-08-29-18-54-31.zip](https://github.com/user-attachments/files/16806487/PowerToysReport_2024-08-29-18-54-31.zip)
### ✔️ Expected Behavior
Thumbnails should be rendered in less than 500 milliseconds each.
In the video bellow, [SvgSee](https://github.com/tibold/svg-explorer-extension) is used to make the thumbnails. My computer uses a Intel Core i5-3330; NVIDIA GeForce GTX 650 Ti; 16 GB DDR3-1333 SDRAM; and Windows 10 Enterprise v10.0.19045.4780 installed on a SATA SSD.
https://github.com/user-attachments/assets/74a4690a-bc5a-470c-9c96-e65f08cb2326
### ❌ Actual Behavior
- Thumbnails take ~2 seconds each to render.
- A `msedgewebview2.exe` process is created for every file.
- Many `PowerToys.SvgThumbnailProvider.exe` processes are created.
- There is unnecessary network activity: `msedgewebview2.exe` calls home every time a thumbnail is created. (Turning the network off doesn't seem to have any effect.)
- Windows Defender is used, but I am uncertain of its impact.
In this video, PowerToys is used to render the thumbnails. Note the time it takes to render all the files.
https://github.com/user-attachments/assets/f6221f7b-dd6f-47fd-9eeb-0d0765d859a9
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,495,697,011 | deno | `fmt` Breaks GitHub Markdown's Callout Syntax | **Describe the bug**
When running `deno fmt` against Markdown files it will move the start of the callout to the first line and then spread `>` block quotes over multiple lines if they are of a certain length.
``` example.md
> [!NOTE] "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do
> eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim > veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea..."
```
Which renders on GitHub as:
> [!NOTE] "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do
> eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim > veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea..."
**Steps to Reproduce**
1. Create a Markdown file.
2. Add a callout using [correct syntax](https://github.com/orgs/community/discussions/16925).
3. Run `deno fmt file.md`.
4. See incorrect formatting.
**Expected behavior**
What should happen in order for the syntax to match up to GFM and correctly render the callout on GitHub:
``` correct.md
> [!NOTE]
> "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea..."
```
Which renders on GitHub as:
> [!NOTE]
> "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea..."
**Environment**
- OS: macOS Version 15.0 Beta (24A5309e)
- deno version: deno 1.46.1
- std version: whichever is bundled with `deno`
| needs info,deno fmt,needs triage | low | Critical |
2,495,749,314 | rust | internal compiler error: TyKind::Error constructed but no error reported, internal compiler error: TyKind::Error constructed but no error reported |
[rustc-ice-2024-08-29T22_49_11-231717.txt](https://github.com/user-attachments/files/16808116/rustc-ice-2024-08-29T22_49_11-231717.txt)
### Code
```Rust
//src/test_handling/test_runner.rs
use crate::serial_println;
use crate::quemu_handler::{
exit_qemu,
QemuExitCode
};
#[cfg(test)]
pub fn test_runner(tests: &[&dyn Fn()]) {
serial_println!("Running {} tests", tests.len());
for test in tests {
test();
}
exit_qemu(QemuExitCode::Success);
}
#[test_case]
use crate::serial_print;
fn trivial_assertion() {
serial_print!("trivial assertion... ");
assert_eq!(1, 1);
serial_println!("[ok]");
}
```
```Rust
//src/serial/mod.rs
use uart_16550::SerialPort;
use spin::Mutex;
use lazy_static::lazy_static;
lazy_static! {
pub static ref SERIAL1: Mutex<SerialPort> = {
let mut serial_port = unsafe { SerialPort::new(0x3F8) };
serial_port.init();
Mutex::new(serial_port)
};
}
#[doc(hidden)]
pub fn _print(args: ::core::fmt::Arguments) {
use core::fmt::Write;
SERIAL1.lock().write_fmt(args).expect("Printing to serial failed");
}
/// Prints to the host through the serial interface.
#[macro_export]
macro_rules! serial_print {
($($arg:tt)*) => {
$crate::serial::_print(format_args!($($arg)*));
};
}
/// Prints to the host through the serial interface, appending a newline.
#[macro_export]
macro_rules! serial_println {
() => ($crate::serial_print!("\n"));
($fmt:expr) => ($crate::serial_print!(concat!($fmt, "\n")));
($fmt:expr, $($arg:tt)*) => ($crate::serial_print!(
concat!($fmt, "\n"), $($arg)*));
}
```
```Rust
//src/main.rs
#![no_std]
#![no_main]
#![reexport_test_harness_main = "test_main"]
#![feature(custom_test_frameworks)]
#![test_runner(test_handling::test_runner)]
mod vga_buffer;
mod test_handling;
mod quemu_handler;
mod serial;
use core::panic::PanicInfo;
#[no_mangle]
pub extern "C" fn _start() -> ! {
println!("Hello World{}", "!");
#[cfg(test)]
test_main();
loop {}
}
#[panic_handler]
fn panic(info: &PanicInfo) -> ! {
println!("{}", info);
loop {}
}
```
```Rust
//src/quemu_handler/exit_handling.rs
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
#[repr(u32)]
pub enum QemuExitCode {
Success = 0x10,
Failed = 0x11,
}
pub fn exit_qemu(exit_code: QemuExitCode) {
use x86_64::instructions::port::Port;
unsafe {
let mut port = Port::new(0xf4);
port.write(exit_code as u32);
}
}
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.82.0-nightly (100fde524 2024-08-28)
binary: rustc
commit-hash: 100fde5246bf56f22fb5cc85374dd841296fce0e
commit-date: 2024-08-28
host: x86_64-unknown-linux-gnu
release: 1.82.0-nightly
LLVM version: 19.1.0
```
### Error output
```
note: no errors encountered even though delayed bugs were created
note: those delayed bugs will now be shown as internal compiler errors
error: internal compiler error: `Res::Err` but no error emitted
--> src/test_handling/test_runner.rs:19:1
|
19 | use crate::serial_print;
| ^^^^^^^^^^^^^^^^^^^^^^^^
|
note: delayed at compiler/rustc_hir_typeck/src/expr.rs:506:32 - disabled backtrace
--> src/test_handling/test_runner.rs:19:1
|
19 | use crate::serial_print;
| ^^^^^^^^^^^^^^^^^^^^^^^^
error: internal compiler error: TyKind::Error constructed but no error reported
|
= note: delayed at compiler/rustc_hir_typeck/src/expr.rs:429:43 - disabled backtrace
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: please attach the file at `/home/usia/Documents/osaka_os/rustc-ice-2024-08-29T22_47_13-229684.txt` to your bug report
note: compiler flags: -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED] -Z unstable-options
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
end of query stack
warning: `osaka_os` (bin "osaka_os" test) generated 3 warnings
error: could not compile `osaka_os` (bin "osaka_os" test); 3 warnings emitted
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Backtrace</strong></summary>
<p>
```
error: internal compiler error: TyKind::Error constructed but no error reported
|
= note: delayed at compiler/rustc_hir_typeck/src/expr.rs:429:43
0: <rustc_errors::DiagCtxtInner>::emit_diagnostic
1: <rustc_errors::DiagCtxtHandle>::emit_diagnostic
2: <rustc_span::ErrorGuaranteed as rustc_errors::diagnostic::EmissionGuarantee>::emit_producing_guarantee
3: <rustc_errors::DiagCtxtHandle>::span_delayed_bug::<rustc_span::span_encoding::Span, &str>
4: <rustc_middle::ty::Ty>::new_misc_error
5: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
6: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
7: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
8: <rustc_hir_typeck::fn_ctxt::FnCtxt>::confirm_builtin_call
9: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
10: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_block_with_expected
11: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
12: rustc_hir_typeck::check::check_fn
13: rustc_hir_typeck::typeck
14: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 8]>>
15: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::VecCache<rustc_span::def_id::LocalDefId, rustc_middle::query::erase::Erased<[u8; 8]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, true>
16: rustc_query_impl::query_impl::typeck::get_query_incr::__rust_end_short_backtrace
17: <rustc_middle::hir::map::Map>::par_body_owners::<rustc_hir_analysis::check_crate::{closure#4}>::{closure#0}
18: rustc_hir_analysis::check_crate
19: rustc_interface::passes::run_required_analyses
20: rustc_interface::passes::analysis
21: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 1]>>
22: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::SingleCache<rustc_middle::query::erase::Erased<[u8; 1]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, true>
23: rustc_query_impl::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
24: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}
25: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>
26: <<std::thread::Builder>::spawn_unchecked_<rustc_interface::util::run_in_thread_with_globals<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
27: std::sys::pal::unix::thread::Thread::new::thread_start
28: start_thread
29: __GI___clone3
error: internal compiler error: TyKind::Error constructed but no error reported
|
= note: delayed at compiler/rustc_hir_typeck/src/expr.rs:429:43
0: <rustc_errors::DiagCtxtInner>::emit_diagnostic
1: <rustc_errors::DiagCtxtHandle>::emit_diagnostic
2: <rustc_span::ErrorGuaranteed as rustc_errors::diagnostic::EmissionGuarantee>::emit_producing_guarantee
3: <rustc_errors::DiagCtxtHandle>::span_delayed_bug::<rustc_span::span_encoding::Span, &str>
4: <rustc_middle::ty::Ty>::new_misc_error
5: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
6: <rustc_hir_typeck::fn_ctxt::FnCtxt>::confirm_builtin_call
7: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
8: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_block_with_expected
9: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
10: rustc_hir_typeck::check::check_fn
11: rustc_hir_typeck::typeck
12: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 8]>>
13: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::VecCache<rustc_span::def_id::LocalDefId, rustc_middle::query::erase::Erased<[u8; 8]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, true>
14: rustc_query_impl::query_impl::typeck::get_query_incr::__rust_end_short_backtrace
15: <rustc_middle::hir::map::Map>::par_body_owners::<rustc_hir_analysis::check_crate::{closure#4}>::{closure#0}
16: rustc_hir_analysis::check_crate
17: rustc_interface::passes::run_required_analyses
18: rustc_interface::passes::analysis
19: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 1]>>
20: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::SingleCache<rustc_middle::query::erase::Erased<[u8; 1]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, true>
21: rustc_query_impl::query_impl::analysis::get_query_incr::__rust_end_short_backtrace
22: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}
23: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>
24: <<std::thread::Builder>::spawn_unchecked_<rustc_interface::util::run_in_thread_with_globals<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#0}::{closure#0}, core::result::Result<(), rustc_span::ErrorGuaranteed>>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
25: std::sys::pal::unix::thread::Thread::new::thread_start
26: start_thread
27: __GI___clone3
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: please attach the file at `/home/usia/Documents/osaka_os/rustc-ice-2024-08-29T22_49_11-231717.txt` to your bug report
note: compiler flags: -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED] -Z unstable-options
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
end of query stack
warning: `osaka_os` (bin "osaka_os" test) generated 3 warnings
error: could not compile `osaka_os` (bin "osaka_os" test); 3 warnings emitted
```
</p>
</details>
| I-ICE,T-compiler,C-bug,T-types | low | Critical |
2,495,801,375 | pytorch | enhanced debugging prints in torch.distributed | ### 🚀 The feature, motivation and pitch
It's hard to debug NCCL timeout issues at the all-to-all (a2a) op, but enhanced debugging prints can make life a bit better.
In `distributed/distributed_c10d.py`
```
work = group.alltoall_base(
output, input, output_split_sizes, input_split_sizes, opts
)
```
I can definitely insert my own ad hoc print here to debug the timeout, and maybe more. I wonder if the PT-D team could initiate a centralized design for enhanced debug prints.
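A centralized version of that ad hoc print could look like the sketch below. This is a minimal, hedged example, not PT-D's actual API: the `log_collective` decorator and the logged fields are hypothetical.

```python
import functools
import logging
import time

logger = logging.getLogger("c10d.debug")

def log_collective(fn):
    """Log entry, exit, and wall time around a collective call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        logger.debug("entering %s", fn.__name__)
        start = time.monotonic()
        try:
            return fn(*args, **kwargs)
        finally:
            logger.debug(
                "leaving %s after %.3fs", fn.__name__, time.monotonic() - start
            )
    return wrapper

# Hypothetical call site, mirroring the snippet above:
# work = log_collective(group.alltoall_base)(
#     output, input, output_split_sizes, input_split_sizes, opts
# )
```

A design like this would let every collective in `distributed_c10d.py` share one switchable debug path instead of scattered ad hoc prints.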
### Alternatives
_No response_
### Additional context
_No response_
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged | low | Critical |
2,495,810,087 | pytorch | FlopsProfiler double counts sdpa flops | ### 🐛 Describe the bug
We found that the flop counter reports an incorrect FLOP count for sdpa operations.
This issue is not present in the torch 2.4+cu121 release.
Repro code:
```
from torch.utils.flop_counter import sdpa_flop, mm_flop
import torch.nn.functional as F
from torch.utils.flop_counter import FlopCounterMode
import torch
batch = 2
headdim = 4
nheads = 5
seqlen = 9
causal = False
### Expected Behavior
q = torch.randn(seqlen, headdim).cuda().bfloat16()
k = torch.randn(headdim, seqlen).cuda().bfloat16()
target_flops = mm_flop(q.shape, k.shape)
flops_counter = FlopCounterMode(display=False)
flops_counter.__enter__()
q @ k
flops_counter.__exit__(None, None, None)
assert(flops_counter.get_total_flops() == target_flops), f"{flops_counter.get_total_flops()} != {target_flops}"
### SDPA double counts
q = torch.randn(batch, nheads, seqlen, headdim).cuda().bfloat16()
k = torch.randn(batch, nheads, seqlen, headdim).cuda().bfloat16()
v = torch.randn(batch, nheads, seqlen, headdim).cuda().bfloat16()
target_flops = sdpa_flop(q.shape, k.shape, v.shape)
flops_counter = FlopCounterMode(display=False)
flops_counter.__enter__()
F.scaled_dot_product_attention(q, k, v)
flops_counter.__exit__(None, None, None)
assert(flops_counter.get_total_flops() == target_flops),f"{flops_counter.get_total_flops()} != {target_flops}"
```
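For reference, the value `sdpa_flop` should return can be derived by hand: the sdpa forward pass is two batched matmuls (`q @ k^T` and `attn @ v`), each costing `2*b*h*s*s*d` FLOPs under the standard `2*m*n*k` matmul convention, so a double count shows up as exactly twice this number. A small sketch of the derivation (my own helper, not PyTorch code):

```python
# Hedged sketch: expected forward-pass FLOPs for scaled_dot_product_attention
# with q, k, v all shaped [b, h, s, d], assuming the counter uses the
# standard 2*m*n*k cost per matmul.
def expected_sdpa_flops(b: int, h: int, s: int, d: int) -> int:
    qk = 2 * b * h * s * s * d  # scores = q [b,h,s,d] @ k^T [b,h,d,s]
    av = 2 * b * h * s * s * d  # out    = attn [b,h,s,s] @ v [b,h,s,d]
    return qk + av

# Repro shapes above: batch=2, nheads=5, seqlen=9, headdim=4
print(expected_sdpa_flops(2, 5, 9, 4))  # -> 12960
```

If `FlopCounterMode` instead reports twice this value for the repro, that is consistent with the kernel being counted twice.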
### Versions
PyTorch version: 2.5.0.dev20240808+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.146+-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.161.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.3
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 208
On-line CPU(s) list: 0-207
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8481C CPU @ 2.70GHz
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 52
Socket(s): 2
Stepping: 8
BogoMIPS: 5399.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b fsrm md_clear serialize avx512_fp16 arch_capabilities
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 4.9 MiB (104 instances)
L1i cache: 3.3 MiB (104 instances)
L2 cache: 208 MiB (104 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-51,104-155
NUMA node1 CPU(s): 52-103,156-207
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] facenet-pytorch==2.5.1
[pip3] guided-filter-pytorch==3.7.5
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.23.0
[pip3] onnx==1.14.0
[pip3] onnxruntime==1.15.1
[pip3] open-clip-torch==2.22.0
[pip3] pytorch-lightning==2.2.4
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] pytorch3d==0.7.5
[pip3] torch==2.5.0.dev20240808+cu124
[pip3] torch-fidelity==0.3.0
[pip3] torch-tb-profiler==0.4.1
[pip3] torchao-nightly==2024.8.12+cu124
[pip3] torchaudio==2.4.0.dev20240808+cu124
[pip3] torchlibrosa==0.0.7
[pip3] torchmetrics==1.3.1
[pip3] torchray==1.0.0.2
[pip3] torchvision==0.20.0.dev20240808+cu124
[conda] Could not collect
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki | high priority,triaged,module: regression,module: flop counter,module: sdpa | low | Critical |
2,495,822,212 | TypeScript | TS2454 Variable is used before being assigned with strictNullChecks when variable is clearly assigned but can be undefined | ### 🔎 Search Terms
Variable is used before being assigned TS2454 TS(2454) undefined strictNullChecks
### 🕗 Version & Regression Information
- This happens at least since version 3.7.5 (tested in the playground)
- This is the behavior in every version I tried, and I reviewed the FAQ
### ⏯ Playground Link
https://www.typescriptlang.org/play/?#code/DYUwLgBGIM6QvBArgOwCYgGYEsUjQNwBQR0cEiA5JRAIYwRwBOuA5hAD7Lpa77GlYYAPwA6UClZgAFgSA
### 💻 Code
```ts
let test = undefined;
test = '' as string | undefined;
test?.length; // Variable 'test' is used before being assigned.
```
### 🙁 Actual behavior
TypeScript throws an error complaining about `test` being used before being assigned, but it's clearly assigned to `undefined` and then to `''`.
Mine is a very simplistic example, but the same issue happens if I replace `''` with a call to a function.
### 🙂 Expected behavior
TypeScript should assign the type `string | undefined` to the variable `test` and let me use it.
### Additional information about the issue
_No response_ | Bug | low | Critical |
2,495,826,138 | PowerToys | Display activations codes/product keys/PC serial number | ### Description of the new feature / enhancement
A tool that lists all the Microsoft product keys installed on your PC, and additionally the PC serial number if that's possible.
For example: Office (Word, PowerPoint, Excel, Project, Visio, etc.) product keys, and the Windows 11 activation key. Such tools exist and are available from the Store, but can they be trusted?
### Scenario when this would be used?
1. For backup purposes
2. Before a system reinstallation
3. Before a computer hardware replacement
4. etc.
### Supporting information
Several techniques exist, but they are difficult to use for a non-technical person (e.g. open a cmd in admin mode and type `Get-WmiObject -query 'select * from SoftwareLicensingService...'`) | Needs-Triage | low | Minor |
2,495,838,230 | vscode | Random freezes while typing after previous video frame shows up |
Type: <b>Performance Issue</b>
While typing, if a previous state or rendered frame of the editor suddenly appears, the whole VS Code window freezes for a few minutes, then resumes as if nothing happened.
VS Code version: Code 1.92.2 (fee1edb8d6d72a0ddff41e5f71a671c23ed924b9, 2024-08-14T17:29:30.058Z)
OS version: Darwin arm64 23.6.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M3 Max (14 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|2, 2, 2|
|Memory (System)|36.00GB (9.65GB free)|
|Process Argv|--crash-reporter-id 289b7be8-8e99-4b12-8403-363888175bba|
|Screen Reader|no|
|VM|0%|
</details><details>
<summary>Process Info</summary>
```
CPU % Mem MB PID Process
12 295 39732 code main
0 37 39736 utility-network-service
0 848 39737 window [1] (errors.ts — bridgecmdr)
0 74 40186 ptyHost
0 0 40191 /bin/zsh -il
0 332 40187 extensionHost [1]
0 184 40281 electron-nodejs (tsserver.js )
0 848 40284 electron-nodejs (tsserver.js )
0 111 40462 electron-nodejs (typingsInstaller.js typesMap.js )
0 147 40451 electron-nodejs (server.js )
0 74 40779 /Applications/Visual Studio Code.app/Contents/Frameworks/Code Helper (Plugin).app/Contents/MacOS/Code Helper (Plugin) /Applications/Visual Studio Code.app/Contents/Resources/app/extensions/json-language-features/server/dist/node/jsonServerMain --node-ipc --clientProcessId=40187
0 590 40912 electron-nodejs (eslintServer.js )
0 111 47926 /Users/matthewholder/.nvm/versions/node/v22.3.0/bin/node --dns-result-order=ipv4first /Users/matthewholder/.vscode/extensions/vitest.explorer-1.2.0/dist/worker.js
0 0 47931 /Volumes/Projects/bridgecmdr/node_modules/@esbuild/darwin-arm64/bin/esbuild --service=0.21.5 --ping
0 111 47927 /Users/matthewholder/.nvm/versions/node/v22.3.0/bin/node --dns-result-order=ipv4first /Users/matthewholder/.vscode/extensions/vitest.explorer-1.2.0/dist/worker.js
0 0 47930 /Volumes/Projects/bridgecmdr/node_modules/@esbuild/darwin-arm64/bin/esbuild --service=0.21.5 --ping
0 184 40188 shared-process
0 0 50544 /bin/ps -ax -o pid=,ppid=,pcpu=,pmem=,command=
0 74 40189 fileWatcher [1]
0 74 50337 gpu-process
0 111 50533 window [3] (Issue Reporter)
```
</details>
<details>
<summary>Workspace Info</summary>
```
| Window (errors.ts — bridgecmdr)
| Folder (bridgecmdr): 459 files
| File types: html(101) ts(96) pak(58) vue(20) json(13) js(10) yml(6)
| gitignore(4) so(4) sh(3)
| Conf files: launch.json(1) settings.json(1) dockerfile(1)
| package.json(1) tsconfig.json(1)
| Launch Configs: node chrome;
```
</details>
<details><summary>Extensions (28)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-eslint|dba|3.0.10
githistory|don|0.6.20
dotenv-vscode|dot|0.28.1
gitlens|eam|15.3.1
EditorConfig|Edi|0.16.4
prettier-vscode|esb|11.0.0
codespaces|Git|1.17.2
remotehub|Git|0.62.0
vscode-github-actions|git|0.26.3
vscode-pull-request-github|Git|0.94.0
todo-tree|Gru|0.0.226
npm|ide|1.7.4
i18n-ally|lok|2.12.0
vscode-docker|ms-|1.29.2
remote-containers|ms-|0.380.0
hexeditor|ms-|1.10.0
remote-explorer|ms-|0.4.3
remote-repositories|ms-|0.40.0
remote-server|ms-|1.5.2
uuid-generator|net|0.0.5
material-icon-theme|PKi|5.10.0
material-product-icons|PKi|1.7.1
vscode-yaml|red|1.15.0
scala|sca|0.5.7
sass-indented|syl|1.8.31
explorer|vit|1.2.0
volar|Vue|2.1.2
quokka-vscode|Wal|1.0.649
</details>
<!-- generated by issue reporter --> | info-needed | low | Critical |
2,495,852,106 | godot | Editor cannot assign custom node to member in C# script when using [Tool] annotation | ### Tested versions
v4.3.stable.mono.official [77dcf97d8]
### System information
Godot v4.3.stable.mono - Windows 10.0.22631 - Vulkan (Forward+) - integrated AMD Radeon(TM) Graphics (Advanced Micro Devices, Inc.; 31.0.12029.10015) - AMD Ryzen 9 5900HX with Radeon Graphics (16 Threads)
### Issue description
When assigning a custom node to an [Export] member in a C# script, a cast error occurs because the item is assigned as its supertype (`Godot.Node`) instead of the actual custom node type. This issue only occurs when the script is executed in the editor via the [Tool] annotation. When the project runs normally, the types are assigned correctly.
```
/root/godot/modules/mono/glue/GodotSharp/GodotSharp/Core/NativeInterop/ExceptionUtils.cs:113 - System.InvalidCastException: Unable to cast object of type 'Godot.Node' to type 'CustomNode'.
at class_error.FailureNode.SetGodotClassPropertyValue(godot_string_name& name, godot_variant& value) in C:\Mateus\Godot Engine\Projetos\class_error\Godot.SourceGenerators\Godot.SourceGenerators.ScriptPropertiesGenerator\class_error.FailureNode_ScriptProperties.generated.cs:line 27
at Godot.Bridge.CSharpInstanceBridge.Set(IntPtr godotObjectGCHandle, godot_string_name* name, godot_variant* value) in /root/godot/modules/mono/glue/GodotSharp/GodotSharp/Core/Bridge/CSharpInstanceBridge.cs:line 57
```
### Steps to reproduce
**Steps to reproduce:**
1. Create a custom node in C#.
2. Create a new scene and attach a C# script to a node in the scene.
3. Annotate the C# script with [Tool] and use the [Export] attribute to specify the custom node type for a member.
4. In the editor, try to assign the custom node to the exported member.
**Expected behavior:** The custom node should be correctly assigned to the member of its specific type.
**Actual behavior:** A cast error occurs because the item is assigned as its supertype rather than the specific custom node type.
**Note:** This issue only occurs when the script is executed in the editor. When the project runs normally, the node types are assigned correctly.
### Minimal reproduction project (MRP)
[mrp.zip](https://github.com/user-attachments/files/16808411/class_error.zip)
In the "failure_scene.tscn", there is a custom node property that cannot be assigned in the editor. | enhancement,discussion,topic:editor,topic:dotnet | low | Critical |
2,495,852,184 | rust | Next-gen Trait Solver does not implement Generic Const Expressions | I was fluffing around with different generic stuff and ended up enabling the next-gen solver, where I realized that generic const expressions just don't seem to work at all.
This code seems to work fine with the default trait solver, but when you enable the `-Znext-solver` flag, it throws a normalization error.
```rust
#![feature(generic_const_exprs)]
pub struct Vec<T, const N: usize>
where
[T; N * N]: Sized,
{
packed: [T; N * N],
}
```
I understand this is all very unstable stuff, but I thought it was still probably unintended behavior at the least.
### Meta
`rustc --version --verbose`:
```
rustc 1.82.0-nightly (100fde524 2024-08-28)
binary: rustc
commit-hash: 100fde5246bf56f22fb5cc85374dd841296fce0e
commit-date: 2024-08-28
host: x86_64-unknown-linux-gnu
release: 1.82.0-nightly
LLVM version: 19.1.0
```
<details><summary>Backtrace</summary>
<p>
```
error[E0284]: type annotations needed: cannot normalize `Vec<T, N>::packed::{constant#0}`
--> src/main.rs:6:13
|
6 | packed: [T; N * N],
| ^^^^^^^^^^ cannot normalize `Vec<T, N>::packed::{constant#0}`
```
</p>
</details>
| T-compiler,C-bug,A-const-generics,F-generic_const_exprs,T-types,WG-trait-system-refactor,requires-incomplete-features | low | Critical |
2,495,859,272 | storybook | [Bug]: Addon Does Not Contribute Addons | ### Describe the bug
I cloned the Addon Kit repo (moreso added a copy of it into our monorepo)
I followed the setup from Addon Essentials, primarily being that the root `index.ts` exports an `addons` function which returns an array of addons.
When we run storybook with our custom addon (based on Addon Kit), none of the addons contributed by our addon are loaded.
Further, adding any amount of logging (or throws) in the index.ts does not produce any result. Therefore, my suspicion is that storybook does not even load the index.ts
### Reproduction link
In progress
### Reproduction steps
I know this bug report is half-assed, I'm sorry (truly). I've spent all day debugging issues related to storybook not loading our SSL certificates correctly, so I'm a little exhausted right now.
I'll work on creating a repro, but it will also likely not be on StackBlitz because I don't know how to create one with a custom addon.
Further, the lack of documentation on how to write good addons is a little frustrating. Even the Addon Kit example does not work with the latest storybook out of the box (at least not for me).
I really just wanted to get this bug report filed as a placeholder. If any of this makes sense to someone, great! Otherwise please feel free to ignore until I have a clear repro.
### System
```bash
Storybook Environment Info:
System:
OS: macOS 14.5
CPU: (10) arm64 Apple M1 Max
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.12.2 - ~/.nvm/versions/node/v20.12.2/bin/node
Yarn: 4.4.0 - ~/.nvm/versions/node/v20.12.2/bin/yarn <----- active
npm: 10.5.0 - ~/.nvm/versions/node/v20.12.2/bin/npm
pnpm: 9.1.1 - ~/.nvm/versions/node/v20.12.2/bin/pnpm
Browsers:
Chrome: 128.0.6613.85
Safari: 17.5
```
### Additional context
_No response_ | bug,needs triage | low | Critical |
2,495,914,408 | pytorch | DISABLED test_garbage_collect_expandable (__main__.TestCudaMallocAsync) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_garbage_collect_expandable&suite=TestCudaMallocAsync&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29448105511).
Over the past 3 hours, it has been determined flaky in 30 workflow(s) with 90 failures and 30 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_garbage_collect_expandable`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_cuda.py", line 4146, in test_garbage_collect_expandable
a = alloc(40)
File "/var/lib/jenkins/workspace/test/test_cuda.py", line 4141, in alloc
return torch.ones(n * mb, dtype=torch.int8, device="cuda")
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 40.00 MiB. GPU 0 has a total capacity of 21.98 GiB of which 20.36 GiB is free. Process 329937 has 248.00 MiB memory in use. Process 329975 has 358.00 MiB memory in use. Process 330076 has 248.00 MiB memory in use. Process 330111 has 520.00 MiB memory in use. Process 331287 has 248.00 MiB memory in use. 120.00 MiB allowed; Of the allocated memory 50.00 MiB is allocated by PyTorch, with 14.00 MiB allocated in private pools (e.g., CUDA Graphs), and 90.00 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/test_cuda.py TestCudaMallocAsync.test_garbage_collect_expandable
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_cuda.py`
cc @ptrblck @msaroufim @clee2000 | module: cuda,triaged,module: flaky-tests,skipped | low | Critical |
2,495,919,047 | PowerToys | Change File Icon | ### Description of the new feature / enhancement
A tool to change file association icons.
### Scenario when this would be used?
I want to make `.code-workspace` icons more prominent, changing them from this:

to this:

Basically, an editor for those registry entries:

| Needs-Triage | low | Minor |
2,495,937,562 | terminal | Incorrect Korean translation | ### Windows Terminal version
v1.21.2361.0
### Windows build number
_No response_
### Other Software
_No response_
### Steps to reproduce
Browse this repository.
### Expected Behavior
1. `<value>파일에서 설정을 로드하지 못했습니다. 후행 쉼표를 포함하여 구문 오류가 있는지 확인합니다.</value>`
2. `<value>새 탭 만들기</value> `
3. ` <value>읽기 전용 터미널을 닫으려고 합니다. 계속하시겠습니까?</value> `
4. `<value>진홍</value>`
### Actual Behavior
1. https://github.com/microsoft/terminal/blob/93d592bb4156e85f66f34a57ee7f707cc7ba0f15/src/cascadia/TerminalApp/Resources/ko-KR/Resources.resw#L121
2. https://github.com/microsoft/terminal/blob/17a55da0f9889aafd84df024299982e7b94f5482/src/cascadia/TerminalApp/Resources/ko-KR/Resources.resw#L322
3. https://github.com/microsoft/terminal/blob/17a55da0f9889aafd84df024299982e7b94f5482/src/cascadia/TerminalApp/Resources/ko-KR/Resources.resw#L527
4. https://github.com/microsoft/terminal/blob/93d592bb4156e85f66f34a57ee7f707cc7ba0f15/src/cascadia/TerminalApp/Resources/ko-KR/Resources.resw#L619
String ID | Current | Suggestion
--------- | ------- | ----------
InitialJsonParseErrorText | 파일에서 설정을 로드 하지 못했습니다. 후행 쉼표를 포함 하 여 구문 오류가 있는지 확인 합니다. | 파일에서 설정을 로드하지 못했습니다. 후행 쉼표를 포함하여 구문 오류가 있는지 확인합니다.
CmdNewTabDesc | 새 작업 만들기 | 새 탭 만들기
CloseReadOnlyDialog.Content | 읽기 전용 터미널을 닫으려고 합니다. 계속하시겠어요? | 읽기 전용 터미널을 닫으려고 합니다. 계속하시겠습니까?
CrimsonColorButton.[using:Windows.UI.Xaml.Controls]ToolTipService.ToolTip | 심홍 | 진홍 | Issue-Bug,Product-Terminal,In-PR,Area-Localization | medium | Critical |
2,495,943,016 | TypeScript | Type PermissionName for Permission API doesn't contain microphone and camera devices | ### 🔍 Search Terms
microphone, camera, device, permission, state, API, navigator
### ✅ Viability Checklist
- [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [X] This wouldn't change the runtime behavior of existing JavaScript code
- [X] This could be implemented without emitting different JS based on the types of the expressions
- [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [X] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [X] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
In both files, webworker.generated.d.ts and dom.generated.d.ts (lines 9361 and 27992), the modification would be to add two members, `'camera'` and `'microphone'`, to the `PermissionName` type so that access to both devices can be checked.
### 📃 Motivating Example
```ts
const checkForPermission = (type: PermissionName) => {
  navigator.permissions
    .query({
      name: type,
    })
    .then(permissionStatus => {
      // Will be 'granted', 'denied' or 'prompt':
      console.log(permissionStatus);
      // permissionStatus.state = 'granted'

      // Listen for changes to the permission state
      permissionStatus.onchange = () => {
        console.log(permissionStatus.state);
      };
    });
};

checkForPermission('microphone');
checkForPermission('camera');
```
**Expected behavior:**
'microphone' and 'camera' should be valid types for PermissionName type.
**Actual behavior:**
'microphone' and 'camera' are not valid types for PermissionName type.
Typescript is showing error:
Argument of type '"microphone"' is not assignable to parameter of type 'PermissionName'.
### 💻 Use Cases
1. What do you want to use this for?
   Checking whether the camera or microphone is enabled/disabled.
2. What shortcomings exist with current approaches?
   To type the promise from `navigator.permissions` in TypeScript, we have to add an intersection on the `PermissionName` type like: `{ PermissionName } & { name: 'camera' | 'microphone' }`.
3. What workarounds are you using in the meantime?
   I'm using exactly that: `{ PermissionName } & { name: 'camera' | 'microphone' }`
| Needs Investigation | low | Critical |
2,495,964,153 | opencv | Crashes when dealing with large images | <!--
If you have a question rather than reporting a bug please go to https://forum.opencv.org where you get much faster responses.
If you need further assistance please read [How To Contribute](https://github.com/opencv/opencv/wiki/How_to_contribute).
This is a template helping you to create an issue which can be processed as quickly as possible. This is the bug reporting section for the OpenCV library.
-->
##### System information (version)
<!-- Example
- OpenCV => 4.6
- Operating System / Platform =>OpenKylin
- Compiler => Visual Studio Code
-->
- OpenCV => :grey_question:
- Operating System / Platform => :grey_question:
- Compiler => :grey_question:
##### Detailed description
imread processing large images causes the program to crash
<!-- your description -->
When I imread a large picture, the process crashes directly. Searching online, the suggested fixes all involve modifying environment variables, which I would rather not rely on. When imread cannot handle a large picture, why does it crash the program instead of returning an error code? Is this by design? Is there a way for OpenCV not to crash when faced with an image too large to handle? My current workaround is a try/catch block, but I still don't think that's a good solution.
##### Steps to reproduce
<!-- to add code example fence it with triple backticks and optional file extension
```.cpp
// C++ code example
```
cv::Mat image = cv::imread(filePath);
or attach as .txt or .zip file
-->
[error.txt](https://github.com/user-attachments/files/16809181/error.txt)
##### Issue submission checklist
- [ ] I report the issue, it's not a question
<!--
OpenCV team works with forum.opencv.org, Stack Overflow and other communities
to discuss problems. Tickets with questions without a real issue statement will be
closed.
-->
- [ ] I checked the problem with documentation, FAQ, open issues,
forum.opencv.org, Stack Overflow, etc and have not found any solution
<!--
Places to check:
* OpenCV documentation: https://docs.opencv.org
* FAQ page: https://github.com/opencv/opencv/wiki/FAQ
* OpenCV forum: https://forum.opencv.org
* OpenCV issue tracker: https://github.com/opencv/opencv/issues?q=is%3Aissue
* Stack Overflow branch: https://stackoverflow.com/questions/tagged/opencv
-->
- [ ] I updated to the latest OpenCV version and the issue is still there
<!--
master branch for OpenCV 4.x and 3.4 branch for OpenCV 3.x releases.
OpenCV team supports only the latest release for each branch.
The ticket is closed if the problem is not reproduced with the modern version.
-->
- [ ] There is reproducer code and related data files: videos, images, onnx, etc
<!--
The best reproducer -- test case for OpenCV that we can add to the library.
Recommendations for media files and binary files:
* Try to reproduce the issue with images and videos in opencv_extra repository
to reduce attachment size
* Use PNG for images, if you report some CV related bug, but not image reader
issue
* Attach the image as an archive to the ticket, if you report some reader issue.
Image hosting services compress images and it breaks the repro code.
* Provide ONNX file for some public model or ONNX file with random weights,
if you report ONNX parsing or handling issue. Architecture details diagram
from netron tool can be very useful too. See https://lutzroeder.github.io/netron/
-->
| invalid,question (invalid tracker) | low | Critical |
2,495,988,240 | pytorch | torch.onnx.verification.verify() lacks the option to pass in SessionOptions - crucial for custom upset support | ### 🚀 The feature, motivation and pitch
Thank you for making torch.onnx.verification.verify()! It really helps!
It does not work, however, for models that use onnx_extensions and custom operations.
To make them work, we simply need to add an option to pass SessionOptions to torch.onnx.verification.verify - and from there to here:
https://github.com/pytorch/pytorch/blob/cf11fc0dcbb9c907cf6e851109b92f4157e445c9/torch/onnx/verification.py#L177
Best regards,
-Boris.
### Alternatives
_No response_
### Additional context
_No response_ | module: onnx,triaged,onnx-needs-info | low | Minor |
2,496,009,832 | next.js | Third-party script synchronous loading Hydration problem | ### Link to the code that reproduces this issue
https://github.com/yanqc1996/GD-Next
### To Reproduce
Description of the problem encountered: my project has to rely on an external third-party JS for A/B testing. One of our tests redirects users between page variants, which forces me to redirect before the page content loads (to prevent users from seeing page A before page B). That in turn forces me to load this JS script synchronously so that it blocks page rendering, so next/script does not apply to me because it has no synchronous option. I use a native `<script>` tag instead (ESLint reports an error for synchronously loaded scripts, but that can be suppressed with a custom rule).
The Vercel deployment address is https://gd-next-un88.vercel.app/. When you open the console, you can see many hydration errors in the right-hand window.
In production, these errors even affect page rendering: I must trigger a click before anything else renders. For example, on the test page https://vibe-portal-rik-vibeus.vercel.app/products/vibe-smart-whiteboard-s1/, if you open the page in an incognito window, you will find that loading of the home-page images/videos gets stuck and only resumes after a click is triggered.
I don't think this is necessarily a Next.js problem. I just want to ask whether there is a solution that lets me load a single synchronous, render-blocking JS script on the client only.
If you cannot reproduce the problem in your own test deployment, that is likely because the third-party script requires the target domain to be registered in its backend; I tried it, and the error does not appear if no effective domain is specified.
### Current vs. Expected behavior
I hope to be able to implement a third-party js synchronous blocking loading logic on the client side normally
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.5.0: Wed May 1 20:13:18 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6030
Available memory (MB): 18432
Available CPU cores: 11
Binaries:
Node: 18.17.0
npm: 9.6.7
Yarn: 1.22.21
pnpm: 7.1.0
Relevant Packages:
next: 14.2.7 // Latest available version is detected (14.2.7).
eslint-config-next: 14.2.7
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Script (next/script)
### Which stage(s) are affected? (Select all that apply)
Vercel (Deployed)
### Additional context
_No response_ | bug,Script (next/script) | low | Critical |
2,496,152,079 | pytorch | Add decompositions for nanmedian and median | ### 🚀 The feature, motivation and pitch
https://github.com/pytorch/pytorch/pull/134819/files#diff-13f137071d25329adefbbd5df98327d792a1a9cd4eaea026b4f37b9f9ccb82aaR152
It should be pretty plausible to add decompositions for nanmedian and median that work well with compilation.
For example, something like this:
```
def nanmedian3(x):
    size = x.shape[0]
    sorted_vals = torch.sort(x)[0]
    k = ((size - 1) - sorted_vals.isnan().sum()) // 2
    return torch.ops.aten.index(sorted_vals, (k,))
```
cc: @isuruf
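As a sanity check on that index arithmetic, here is a plain-Python model of the same computation (illustrative only; it relies on the sort placing NaNs last, which matches `torch.sort`'s behavior):

```python
import math


def nanmedian_ref(xs):
    """Plain-Python model of the decomposition's index math: sort with
    NaNs last (as torch.sort does), count the NaNs, then pick element
    ((n - 1) - num_nan) // 2 -- the lower median of the non-NaN values."""
    sorted_vals = sorted(xs, key=lambda v: (math.isnan(v), v))
    num_nan = sum(math.isnan(v) for v in xs)
    k = ((len(xs) - 1) - num_nan) // 2
    return sorted_vals[k]
```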
### Alternatives
_No response_
### Additional context
_No response_
cc @SherlockNoMad | triaged,module: decompositions | low | Minor |
2,496,172,491 | storybook | [Bug]: Backgrounds toolbar addon causes multiple rerenders and errors when switching backgrounds on a docs page with over 4 stories | ### Describe the bug
When switching to another background via the backgrounds toolbar addon on a docs page with over 4 stories, the docs page rerenders and reloads multiple times, and the following errors are thrown, as shown in the console:

```
manager received storyRenderPhaseChanged but was unable to determine the source of the event
manager received storyRendered but was unable to determine the source of the event
manager received storybook/docs/snippet-rendered but was unable to determine the source of the event
```
### Reproduction link
https://github.com/merri-dei/storybook-nextjs-issue/tree/main
### Reproduction steps
1. Go to the above link.
2. Clone the repository.
3. Open the repository in an IDE.
4. Install all dependencies via `pnpm install`.
5. Run `npm run storybook`.
6. Visit `http://localhost:6006`.
7. Open the Button docs page.
8. Change the background via the Backgrounds toolbar addon. You might need to do this multiple times to see the errors on the console.
Expected:
The docs page rerenders once to update the background of the stories and doesn't reload. No error is logged onto the console.
Actual:
The docs page rerenders multiple times and reloads. Errors are logged onto the console:
https://github.com/user-attachments/assets/13ce6e61-c1e5-42dd-b977-15b64518c42f
Note that this doesn't happen on docs pages with 4 stories or less:
https://github.com/user-attachments/assets/3fb03824-55db-4bf3-8f71-eb601c9c9100
### System
```bash
Storybook Environment Info:
System:
OS: macOS 14.6.1
CPU: (8) arm64 Apple M1
Shell: 5.9 - /bin/zsh
Binaries:
Node: 18.19.0 - ~/.nvm/versions/node/v18.19.0/bin/node
Yarn: 1.22.21 - ~/.nvm/versions/node/v18.19.0/bin/yarn
npm: 10.2.3 - ~/.nvm/versions/node/v18.19.0/bin/npm
pnpm: 8.15.4 - ~/.nvm/versions/node/v18.19.0/bin/pnpm <----- active
Browsers:
Chrome: 128.0.6613.113
Edge: 128.0.2739.42
Safari: 17.6
npmPackages:
@storybook/addon-console: ^3.0.0 => 3.0.0
@storybook/addon-essentials: ^8.2.9 => 8.2.9
@storybook/addon-interactions: ^8.2.9 => 8.2.9
@storybook/addon-links: ^8.2.9 => 8.2.9
@storybook/addon-onboarding: ^8.2.9 => 8.2.9
@storybook/blocks: ^8.2.9 => 8.2.9
@storybook/nextjs: ^8.2.9 => 8.2.9
@storybook/react: ^8.2.9 => 8.2.9
@storybook/test: ^8.2.9 => 8.2.9
eslint-plugin-storybook: ^0.8.0 => 0.8.0
storybook: ^8.2.9 => 8.2.9
```
### Additional context
_No response_ | bug,needs triage | low | Critical |
2,496,182,060 | godot | Android splash screen followed by gray wave transition into Godot splash screen | ### Tested versions
4.3.stable.official on Windows; Godot app 4.3 on Android
Only one Android phone tested, though.
### System information
Windows 10 - Godot 4.3; Android Pixel 7a, all launchers
### Issue description
Somewhat related to https://github.com/godotengine/godot/pull/92965, when a Godot-built APK launches, the android splash screen always plays a swipe cut transition from the top of the screen into the Godot game splash screen. The swipe cut resembles a gray wave. On my app, I discovered this because I wanted to launch into a pure white screen that matches my game's start screen. In my case:
1. I updated AndroidManifest.xml to make the windowSplashScreenBackground white
2. I set my game to start at a white screen
3. I set Godot's splash screen to just the color white
So I expected the app to launch all white, and then the first time I'd see any change was after my game had started. However, there's a very ugly gray, cut-off transition after the white splash screen from Android's launcher, which makes my cool app feel crappy and lame.
You can see this in the Godot 4.3 app as well, when the app launches, which should show that it probably affects all Godot-built Android apps.
### Steps to reproduce
1. Open the official Godot 4.3 app
2. Observe the brief swooshing wave color
3. Close the app from Recent Apps to force a cold relaunch
### Minimal reproduction project (MRP)
I don't have an MRP. Look at the official Godot app. | bug,platform:android,topic:porting | low | Minor |
2,496,187,962 | PowerToys | Improve powertoys run vscode workspaces launcher | ### Description of the new feature / enhancement
Currently, to open a VS Code workspace, it spawns a `.cmd` file. In enterprise environments the `.cmd` extension is tied to Notepad, therefore workspaces don't open up.
### Scenario when this would be used?
In enterprise environments where security hardening is done.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,496,193,938 | pytorch | DISABLED test_transformer_runtime (__main__.TestRuntimeEstimator) | Platforms: rocm
Failure introduced by https://github.com/pytorch/pytorch/pull/134243
This test was disabled because it is failing on main branch ([recent examples](https://torch-ci.com/failure?failureCaptures=%5B%22distributed%2F_tools%2Ftest_runtime_estimator.py%3A%3ATestRuntimeEstimator%3A%3Atest_transformer_runtime%22%5D)).
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang | module: rocm,triaged,skipped | low | Critical |
2,496,207,492 | pytorch | ONNX NMS Operator Output Different From Torchvision Implementation | ### 🐛 Describe the bug
I am trying to encapsulate the torchvision.ops.nms function in an ONNX model. The model conversion and inference run without problems, but the output of the exported ONNX model differs from the torchvision implementation.
```py
import torch
import torchvision
import onnxruntime
import numpy as np


class NMS(torch.nn.Module):
    def __init__(self, iou_threshold=0.45):
        super(NMS, self).__init__()
        self.iou_threshold = iou_threshold

    def forward(self, x):
        boxes = x[:, :4]
        scores = x[:, 4]
        keep = torchvision.ops.nms(boxes, scores, self.iou_threshold)
        return keep


# Test data
nms_x = torch.rand(50, 38)

# PyTorch model
nms_model = NMS()
torch_output = nms_model(nms_x)

# Export to ONNX
torch.onnx.export(nms_model,
                  (nms_x,),
                  "nms.onnx",
                  opset_version=17,
                  input_names=["input"],
                  output_names=["output"],
                  # dynamic_axes={'input': {0: 'batch'}, 'output': {0: 'batch'}}
                  )

# ONNX Runtime inference
session = onnxruntime.InferenceSession("nms.onnx")
onnx_output = session.run(None, {'input': nms_x.numpy()})[0]

print("PyTorch output shape:", torch_output.shape)
print("ONNX output shape:", onnx_output.shape)
```

### Versions
Collecting environment information...
PyTorch version: 2.4.0+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Home
GCC version: (tdm64-1) 10.3.0
Clang version: 17.0.6
CMake version: version 3.30.1
Libc version: N/A
Python version: 3.11.5 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:26:23) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22631-SP0
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070 Laptop GPU
Nvidia driver version: 560.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=4001
DeviceID=CPU0
Family=107
L2CacheSize=8192
L2CacheSpeed=
Manufacturer=AuthenticAMD
MaxClockSpeed=4001
Name=AMD Ryzen 9 7940HS w/ Radeon 780M Graphics
ProcessorType=3
Revision=29697
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] numpydoc==1.5.0
[pip3] onnx==1.16.2
[pip3] onnx-graphsurgeon==0.3.27
[pip3] onnx-simplifier==0.4.36
[pip3] onnx-tool==0.9.0
[pip3] onnx2pytorch==0.4.1
[pip3] onnx2tf==1.19.11
[pip3] onnxoptimizer==0.3.13
[pip3] onnxruntime==1.18.1
[pip3] onnxscript==0.1.0.dev20240823
[pip3] onnxsim==0.4.35
[pip3] sclblonnx==0.2.1
[pip3] sng4onnx==1.0.1
[pip3] tf2onnx==1.16.1
[pip3] torch==2.4.0
[pip3] torchvision==0.19.0
[conda] _anaconda_depends 2023.09 py311_mkl_1
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 pypi_0 pypi
[conda] mkl-service 2.4.0 py311h2bbff1b_1
[conda] mkl_fft 1.3.8 py311h2bbff1b_0
[conda] mkl_random 1.2.4 py311h59b6b97_0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] numpydoc 1.5.0 py311haa95532_0
[conda] onnx2pytorch 0.4.1 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi | module: onnx,triaged | low | Critical |
2,496,229,465 | ant-design | bug[Table]: error border-radius when column scroll | ### Reproduction link
[https://ant.design/components/table-cn](https://ant.design/components/table-cn)
### Steps to reproduce
https://github.com/user-attachments/assets/8c5afa4e-1a56-49d5-994a-641a48bd4e9e
### What is expected?
The border-radius style renders correctly when table columns scroll.
### What is actually happening?
The border-radius style renders incorrectly when table columns scroll.
| Environment | Info |
| --- | --- |
| antd | 5.20.3 |
| React | 18.3.1 |
| System | Windows 11 |
| Browser | Chrome999 | | 🐛 Bug,help wanted,Inactive | low | Critical |
2,496,229,813 | godot | Type Variation doesn't show new type when create it in the theme | ### Tested versions
v4.3.stable.steam [77dcf97d8]
### System information
windows 10
### Issue description

When I create a new Type in the theme, it doesn't show up in Type Variation, but I can use the created type by entering it manually.

### Steps to reproduce
"First, create a new theme. Then, within the theme, create a new type under the types category.
Next, select the theme in the PanelContainer node.
After that, look for the newly created type variation under Type Variation.
If it cannot be found, manually entering the new variation name works to apply it."
### Minimal reproduction project (MRP)
[bug.zip](https://github.com/user-attachments/files/16811793/bug.zip)
| enhancement,topic:editor,usability,documentation,topic:gui | low | Critical |
2,496,271,324 | yt-dlp | Bilibili - Can't `-f bv,ba` because `ba` is also written into mp4. So after downloading `bv` , downloading `ba` fails. | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
The current behavior is fine for `bv+ba`, which automatically adds `%(format_id)s` to the filename during download, but I like to use `-f bv,ba`, and downloading `ba` fails because yt-dlp thinks the file is already downloaded: the filename already exists, and the `bv` content length typically exceeds the `ba` content length.
I prefer to archive stuff unmuxed.
I'm aware that I could manually add `%(format_id)s` to my `-o` but I'd prefer not to.
Please fix it so that yt-dlp downloads audio-only formats from Bilibili to a non-`.mp4` file.
Maybe `.m4a`, since the audio is AAC?
Stream #0:0[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, mono, fltp, 5 kb/s (default)
Thanks.
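For illustration, this is the kind of codec-to-extension mapping I have in mind (the function name and the lookup table are hypothetical, not yt-dlp's actual logic):

```python
def audio_ext_for_codec(acodec):
    """Pick a container extension for an audio-only download from its codec
    string, so the filename cannot collide with the video-only .mp4.
    Illustrative sketch only -- the table is an assumption, not yt-dlp code."""
    if not acodec or acodec == "none":
        return "mp4"
    base = acodec.split(".")[0]  # e.g. "mp4a.40.2" -> "mp4a"
    return {"mp4a": "m4a", "aac": "m4a", "opus": "opus", "mp3": "mp3"}.get(base, "m4a")
```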
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
$ yt-dlp -v --no-config --fixup warn -S tbr -f bv,ba https://www.bilibili.tv/en/video/4792404558745600
[debug] Command-line config: ['-v', '--no-config', '--fixup', 'warn', '-S', 'tbr', '-f', 'bv,ba', 'https://www.bilibili.tv/en/video/4792404558745600']
[debug] yt-dlp version nightly@2024.08.21.232751 from yt-dlp/yt-dlp-nightly-builds [6f9e65374] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-7-6.1.7601-SP1 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 7.0-full_build-www.gyan.dev (setts), ffprobe 7.0-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-13.0
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Extractor Plugins: AGB (YoutubeIE), Youtube_AgeGateBypassIE
[debug] Loaded 1831 extractors
[BiliIntl] Extracting URL: https://www.bilibili.tv/en/video/4792404558745600
[BiliIntl] 4792404558745600: Downloading webpage
[BiliIntl] 4792404558745600: Downloading video formats
[debug] Sort order given by user: tbr
[debug] Formats sorted by: hasvid, ie_pref, tbr, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[info] 4792404558745600: Downloading 2 format(s): 9, 1
[debug] Invoking http downloader on "https://upos-bstar1-mirrorakam.akamaized.net/bestvideoURL"
[debug] File locking is not supported. Proceeding without locking
[download] Destination: sayang namatay din [4792404558745600].mp4
[download] 100% of 6.00MiB in 00:00:13 at 469.15KiB/s
[debug] Invoking http downloader on "https://upos-bstar1-mirrorakam.akamaized.net/bestaudioURL"
[download] sayang namatay din [4792404558745600].mp4 has already been downloaded
[download] 100% of 6.00MiB
```
| site-bug,patch-available,needs-testing | low | Critical |
2,496,357,974 | next.js | Inconsistent handling of notFound() from server action | ### Link to the code that reproduces this issue
https://github.com/VitaliyPotapov/next-reproduction-app
### To Reproduce
1. `npm run dev` and navigate to http://localhost:3000
2. click `Form with server action` -> `not-found.tsx` is correctly shown
3. click `Button with server action` -> there is uncaught error `NEXT_NOT_FOUND`
### Current vs. Expected behavior
I except that click on `Button with server action` would show `not-found.tsx`.
So that handling of notFound() from server action to be consistent, no matter how I invoke that server action: via form or via click handler.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
Available memory (MB): 16384
Available CPU cores: 10
Binaries:
Node: 18.20.1
npm: 10.5.0
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.0-canary.135 // Latest available version is detected (15.0.0-canary.135).
eslint-config-next: N/A
react: 19.0.0-rc-7771d3a7-20240827
react-dom: 19.0.0-rc-7771d3a7-20240827
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | bug,Runtime | low | Critical |
2,496,359,929 | pytorch | AOTInductor doesn't support mutable custom operators | ### 🐛 Describe the bug
I am trying compile a torchrec model which use fbgemm IntNBitTableBatchedEmbeddingBagsCodegen, which in turn use bounds_check_indices kernel, and part of the fx graph is:
```
...
%getitem : [num_users=1] = call_function[target=operator.getitem](args = (%permute_2d_sparse_data, 0), kwargs = {})
%getitem_1 : [num_users=1] = call_function[target=operator.getitem](args = (%permute_2d_sparse_data, 1), kwargs = {})
%view_2 : [num_users=1] = call_function[target=torch.ops.aten.view.default](args = (%getitem, [-1]), kwargs = {})
%asynchronous_complete_cumsum : [num_users=1] = call_function[target=torch.ops.fbgemm.asynchronous_complete_cumsum.default](args = (%view_2,), kwargs = {})
%auto_functionalized : [num_users=3] = call_function[target=torch._higher_order_ops.auto_functionalize.auto_functionalized](args = (fbgemm.bounds_check_indices.default,), kwargs = {rows_per_table: %b__tensor_constant0, indices: %getitem_1, offsets: %asynchronous_complete_cumsum, bounds_check_mode: 1, warning: %b__tensor_constant1, weights: None, B_offsets: None, max_B: -1})
%getitem_4 : [num_users=1] = call_function[target=operator.getitem](args = (%auto_functionalized, 1), kwargs = {})
%getitem_5 : [num_users=1] = call_function[target=operator.getitem](args = (%auto_functionalized, 2), kwargs = {})
%getitem_6 : [num_users=1] = call_function[target=operator.getitem](args = (%auto_functionalized, 3), kwargs = {})
...
```
The kernel is not supported because it is mutable. What is the general solution for wrapping a mutable custom op in Inductor?
### Versions
torch=2.4.0
torchrec=0.8.0+cu121
fbgemm_gpu=0.8.0+cu121
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @zou3519 @bdhirsh @desertfire @chenyang78 @mcarilli @eellison @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @kadeng @muchulee8 @ColinPeppler @amjames | triaged,module: custom-operators,oncall: pt2,oncall: export,module: pt2-dispatcher,module: aotinductor | low | Critical |
2,496,370,231 | ollama | add Qwen2-VL | SOTA light weight vision model
[https://github.com/QwenLM/Qwen2-VL](https://github.com/QwenLM/Qwen2-VL)
llama.cpp issue [#9246](https://github.com/ggerganov/llama.cpp/issues/9246) | model request | high | Critical |
2,496,376,160 | pytorch | DISABLED test_max_split_expandable (__main__.TestCudaMallocAsync) | Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_max_split_expandable&suite=TestCudaMallocAsync&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29456799947).
Over the past 3 hours, it has been determined flaky in 16 workflow(s) with 48 failures and 16 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_max_split_expandable`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_cuda.py", line 4116, in test_max_split_expandable
a = alloc(40)
File "/var/lib/jenkins/workspace/test/test_cuda.py", line 4111, in alloc
return torch.ones(n * mb, dtype=torch.int8, device="cuda")
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 40.00 MiB. GPU 0 has a total capacity of 7.43 GiB of which 6.92 GiB is free. Process 102870 has 68.75 MiB memory in use. Process 102895 has 80.75 MiB memory in use. Process 104347 has 68.75 MiB memory in use. Process 104367 has 291.75 MiB memory in use. 120.00 MiB allowed; Of the allocated memory 50.00 MiB is allocated by PyTorch, with 14.00 MiB allocated in private pools (e.g., CUDA Graphs), and 90.00 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
To execute this test, run the following from the base repo dir:
python test/test_cuda.py TestCudaMallocAsync.test_max_split_expandable
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_cuda.py`
cc @ptrblck @msaroufim @clee2000 | module: cuda,triaged,module: flaky-tests,skipped | low | Critical |
2,496,394,158 | rust | Too long compile time with implementing a trait function | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I found that implementing a specific trait function causes an extremely long compile time.
If the `expression` function implementation at https://github.com/veryl-lang/veryl/blob/3516e3c73a62a9d74301336a4cf7d11013445e57/crates/emitter/src/emitter.rs#L984 is present, compilation takes about 1600s.
And if the implementation is commented out, compilation takes about 5s.
If `codegen-units = 1` is specified, there is no problem.
The steps to reproduce are:
```sh
git clone https://github.com/veryl-lang/veryl -b v0.12.0
cd veryl
git submodule update --init
sed -i 's/codegen/#codegen/' Cargo.toml
rm ./crates/parser/build.rs
# remove `expression` function implementation
sed -i '984,992d' ./crates/emitter/src/emitter.rs
# compile dependencies at first
cargo build --release --manifest-path crates/emitter/Cargo.toml
# There is not `expression` function implementation
# about 5s
touch ./crates/emitter/src/emitter.rs
time cargo build --release --manifest-path crates/emitter/Cargo.toml
# There is `expression` function implementation
# about 1600s
git checkout ./crates/emitter/src/emitter.rs
time cargo build --release --manifest-path crates/emitter/Cargo.toml
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.80.1 (3f5fd8dd4 2024-08-06)
binary: rustc
commit-hash: 3f5fd8dd41153bc5fdca9427e9e05be2c767ba23
commit-date: 2024-08-06
host: x86_64-unknown-linux-gnu
release: 1.80.1
LLVM version: 18.1.7
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary>Backtrace</summary>
<p>
```
<backtrace>
```
</p>
</details>
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"DianQK"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END --> | A-LLVM,I-compiletime,T-compiler,C-bug | low | Critical |
2,496,456,592 | pytorch | The documentation lacks an explanation of the constraints between larger padding and padding mode in convolutional layers | ### 📚 The doc issue
When using `conv2d`, I set a large padding and chose the circular padding mode. Because the edge data of the input tensor is insufficient for circular padding, this results in an error during execution. However, this error is not mentioned in the documentation.
```python
import torch
x = torch.randn(1, 3, 224, 224)
p = torch.nn.Conv2d(in_channels=3, out_channels=3, kernel_size=1, padding=(1234, 1234), padding_mode="circular")
print(p(x))
```
```
Traceback (most recent call last):
return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
RuntimeError: Padding value causes wrapping around more than once.
```
### Suggest a potential alternative/fix
I believe the documentation should not only provide a reference for the API but also guide users on how to use these APIs correctly. If the documentation could specify that certain padding modes may not be suitable in certain situations (such as when the padding is too large), it would help users design more robust models.
cc @svekars @brycebortree @tstatler | module: docs,module: convolution,triaged,actionable,module: padding | low | Critical |
2,496,485,241 | pytorch | LPPool2d lacks a check for the validity of norm_type | ### 🐛 Describe the bug
```python
import torch
x = torch.randn(1, 3, 224, 224)
p = torch.nn.LPPool2d(norm_type=0.0, kernel_size=2)
print(p(x))
```
When the `norm_type` of `torch.nn.LPPool2d` is set to 0, a division-by-zero error occurs during the computation. The `norm_type` parameter specifies the value of p in the Lp norm, so when p=0 the formula contains a division by zero (the 1/p exponent), causing the program to crash.
In my opinion, the following improvements can be made:
1. **Parameter Check**: Add a check for `norm_type` before creating the `LPPool2d` layer to ensure it is not set to 0.
2. **Improve Documentation and Error Messages**: Update the `LPPool2d` documentation to clearly state that `norm_type` cannot be 0, or add a check in the code implementation to throw a more meaningful error message earlier.
### Versions
torch 2.0.0
cc @malfet | module: error checking,triaged,actionable,module: norms and normalization | low | Critical |
2,496,515,647 | godot | Copying a node with signals to another scene can cause crash. | ### Tested versions
4.3.1.rc
### System information
Ubuntu 22.04.4 LTS 64-bit
### Issue description
Copying a node with a connected signal into another scene causes `common_parent is null` and similar errors.
Trying to edit the signal in the new scene via the context menu causes a segfault.
```
ERROR: Index p_line = 14 is out of bounds (get_line_count() = 12).
at: unfold_line (scene/gui/code_edit.cpp:1655)
ERROR: Cannot get path of node as it is not in a scene tree.
at: get_path (scene/main/node.cpp:2257)
ERROR: Parameter "common_parent" is null.
at: get_path_to (scene/main/node.cpp:2192)
ERROR: Node not found: "" (relative to "/root/@EditorNode@16886/@Panel@13/@VBoxContainer@14/DockHSplitLeftL/DockHSplitLeftR/DockHSplitMain/@VBoxContainer@25/DockVSplitCenter/@VSplitContainer@52/@VBoxContainer@53/@PanelContainer@98/MainScreen/@CanvasItemEditor@9272/@VSplitContainer@9094/@HSplitContainer@9096/@HSplitContainer@9098/@Control@9099/@SubViewportContainer@9100/@SubViewport@9101/PopupDemo/HSplitContainer/VBoxContainer/LineEdit").
at: get_node (scene/main/node.cpp:1792)
================================================================
handle_crash: Program crashed with signal 11
Engine version: Godot Engine v4.3.1.rc.custom_build (ff9bc0422349219b337b015643544a0454d4a7ee)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[1] /lib/x86_64-linux-gnu/libc.so.6(+0x42520) [0x799864042520] (??:0)
[2] ConnectDialog::_update_warning_label() (/home/leonard/Git/godot/editor/connections_dialog.cpp:455)
[3] ConnectDialog::_tree_node_selected() (/home/leonard/Git/godot/editor/connections_dialog.cpp:178)
[4] void call_with_variant_args_helper<ConnectDialog>(ConnectDialog*, void (ConnectDialog::*)(), Variant const**, Callable::CallError&, IndexSequence<>) (/home/leonard/Git/godot/./core/variant/binder_common.h:309 (discriminator 4))
[5] void call_with_variant_args<ConnectDialog>(ConnectDialog*, void (ConnectDialog::*)(), Variant const**, int, Callable::CallError&) (/home/leonard/Git/godot/./core/variant/binder_common.h:419)
[6] CallableCustomMethodPointer<ConnectDialog>::call(Variant const**, int, Variant&, Callable::CallError&) const (/home/leonard/Git/godot/./core/object/callable_method_pointer.h:104)
[7] Callable::callp(Variant const**, int, Variant&, Callable::CallError&) const (/home/leonard/Git/godot/core/variant/callable.cpp:57)
[8] Object::emit_signalp(StringName const&, Variant const**, int) (/home/leonard/Git/godot/core/object/object.cpp:1188)
[9] Node::emit_signalp(StringName const&, Variant const**, int) (/home/leonard/Git/godot/scene/main/node.cpp:3895)
[10] Error Object::emit_signal<>(StringName const&) (/home/leonard/Git/godot/./core/object/object.h:936)
[11] SceneTreeEditor::set_selected(Node*, bool) (/home/leonard/Git/godot/editor/gui/scene_tree_editor.cpp:1023)
[12] ConnectDialog::set_dst_node(Node*) (/home/leonard/Git/godot/editor/connections_dialog.cpp:526)
[13] ConnectDialog::init(ConnectDialog::ConnectionData const&, Vector<String> const&, bool) (/home/leonard/Git/godot/editor/connections_dialog.cpp:646)
[14] ConnectionsDock::_open_edit_connection_dialog(TreeItem&) (/home/leonard/Git/godot/editor/connections_dialog.cpp:1157)
[15] ConnectionsDock::_handle_slot_menu_option(int) (/home/leonard/Git/godot/editor/connections_dialog.cpp:1254)
[16] void call_with_variant_args_helper<ConnectionsDock, int, 0ul>(ConnectionsDock*, void (ConnectionsDock::*)(int), Variant const**, Callable::CallError&, IndexSequence<0ul>) (/home/leonard/Git/godot/./core/variant/binder_common.h:309 (discriminator 4))
[17] void call_with_variant_args<ConnectionsDock, int>(ConnectionsDock*, void (ConnectionsDock::*)(int), Variant const**, int, Callable::CallError&) (/home/leonard/Git/godot/./core/variant/binder_common.h:419)
[18] CallableCustomMethodPointer<ConnectionsDock, int>::call(Variant const**, int, Variant&, Callable::CallError&) const (/home/leonard/Git/godot/./core/object/callable_method_pointer.h:104)
[19] Callable::callp(Variant const**, int, Variant&, Callable::CallError&) const (/home/leonard/Git/godot/core/variant/callable.cpp:57)
[20] Object::emit_signalp(StringName const&, Variant const**, int) (/home/leonard/Git/godot/core/object/object.cpp:1188)
[21] Node::emit_signalp(StringName const&, Variant const**, int) (/home/leonard/Git/godot/scene/main/node.cpp:3895)
[22] Error Object::emit_signal<int>(StringName const&, int) (/home/leonard/Git/godot/./core/object/object.h:936)
[23] PopupMenu::activate_item(int) (/home/leonard/Git/godot/scene/gui/popup_menu.cpp:2438)
[24] PopupMenu::_input_from_window_internal(Ref<InputEvent> const&) (/home/leonard/Git/godot/scene/gui/popup_menu.cpp:637)
[25] PopupMenu::_input_from_window(Ref<InputEvent> const&) (/home/leonard/Git/godot/scene/gui/popup_menu.cpp:447)
[26] Window::_window_input(Ref<InputEvent> const&) (/home/leonard/Git/godot/scene/main/window.cpp:1675)
[27] void call_with_variant_args_helper<Window, Ref<InputEvent> const&, 0ul>(Window*, void (Window::*)(Ref<InputEvent> const&), Variant const**, Callable::CallError&, IndexSequence<0ul>) (/home/leonard/Git/godot/./core/variant/binder_common.h:304 (discriminator 4))
[28] void call_with_variant_args<Window, Ref<InputEvent> const&>(Window*, void (Window::*)(Ref<InputEvent> const&), Variant const**, int, Callable::CallError&) (/home/leonard/Git/godot/./core/variant/binder_common.h:419)
[29] CallableCustomMethodPointer<Window, Ref<InputEvent> const&>::call(Variant const**, int, Variant&, Callable::CallError&) const (/home/leonard/Git/godot/./core/object/callable_method_pointer.h:104)
[30] Callable::callp(Variant const**, int, Variant&, Callable::CallError&) const (/home/leonard/Git/godot/core/variant/callable.cpp:57)
[31] Variant Callable::call<Ref<InputEvent> >(Ref<InputEvent>) const (/home/leonard/Git/godot/./core/variant/variant.h:876)
[32] DisplayServerX11::_dispatch_input_event(Ref<InputEvent> const&) (/home/leonard/Git/godot/platform/linuxbsd/x11/display_server_x11.cpp:4063)
[33] DisplayServerX11::_dispatch_input_events(Ref<InputEvent> const&) (/home/leonard/Git/godot/platform/linuxbsd/x11/display_server_x11.cpp:4040)
[34] Input::_parse_input_event_impl(Ref<InputEvent> const&, bool) (/home/leonard/Git/godot/core/input/input.cpp:775)
[35] Input::flush_buffered_events() (/home/leonard/Git/godot/core/input/input.cpp:1056)
[36] DisplayServerX11::process_events() (/home/leonard/Git/godot/platform/linuxbsd/x11/display_server_x11.cpp:5200)
[37] OS_LinuxBSD::run() (/home/leonard/Git/godot/platform/linuxbsd/os_linuxbsd.cpp:960)
[38] /home/leonard/Git/godot/bin/godot.linuxbsd.editor.dev.x86_64(main+0x190) [0x5acb59cb3539] (/home/leonard/Git/godot/platform/linuxbsd/godot_linuxbsd.cpp:85)
[39] /lib/x86_64-linux-gnu/libc.so.6(+0x29d90) [0x799864029d90] (??:0)
[40] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80) [0x799864029e40] (??:0)
[41] /home/leonard/Git/godot/bin/godot.linuxbsd.editor.dev.x86_64(_start+0x25) [0x5acb59cb32e5] (??:?)
-- END OF BACKTRACE --
```
### Steps to reproduce
Create a node. Connect one of its signals to the scene.
Create a second scene.
Copy the node to the second scene. Error should start appearing.
Try to edit the signal connection on the copied node. This should segfault.
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,crash | low | Critical |
2,496,616,211 | svelte | `state_reference_locally` warning message is unclear | ### Describe the bug
https://github.com/sveltejs/svelte/blob/0203eb319b5d86138236158e3ae6ecf29e26864c/packages/svelte/src/compiler/warnings.js#L642-L648
"How to use a closure?" was asked several times in discord. In addition, closure only refers to functions where classes also serve the same purpose.
I propose an update to the message
"State referenced in its own scope will never update. Did you mean to reference it inside either a function or a class?"
related discord posts
https://discord.com/channels/457912077277855764/1235970955612786719
https://discord.com/channels/457912077277855764/1278658234810634282
https://discord.com/channels/457912077277855764/1275188058089586688
### Reproduction
.
### Logs
_No response_
### System Info
```shell
.
```
### Severity
annoyance | documentation | low | Critical |
2,496,644,348 | deno | Deno Jupyter with Brew Deno installation - incorrect binary path | Not sure what project should I report this to but here was my problem
1. I installed Deno through Homebrew, which puts the binary in `/opt/homebrew/bin/deno` and does not create `~/.deno/bin/deno` (where the official install script puts the binary)
2. When I installed the Jupyter VSCode extension and set Deno as the kernel, the binary it tries to point to is `~/deno/bin/deno`, which I did not have
3. I had to manually create a symlink to make the Jupyter VSCode Deno integration work: `ln -s /opt/homebrew/bin/deno /Users/<user>/.deno/bin/deno`
Maybe the Homebrew install should create a symlink at `~/.deno/bin/deno`?
Maybe the Jupyter VSCode extension should be smarter about where to search for the Deno binary?
---
Also, the Jupyter VSCode error message was rather cryptic: instead of telling me something like "binary not found", it failed with "Kernel died"
```
Visual Studio Code (1.92.1, undefined, desktop)
Jupyter Extension Version: 2024.7.0.
Python Extension Version: 2024.12.3.
Pylance Extension Version: 2024.8.2.
Platform: darwin (arm64).
Workspace folder ~/hello/test-project, Home = /Users/hurtak
10:55:43.401 [info] Starting Kernel (Deno) for '~/hello/test-project/Untitled-1.ipynb' (disableUI=true)
10:55:44.579 [info] Launching Raw Kernel Deno # /Users/~/.deno/bin/deno
10:55:44.703 [info] Process Execution: ~/.deno/bin/deno jupyter --kernel --conn /Users/~/Library/Jupyter/runtime/kernel-v2-58051VDOaxp0yHEKY.json
> cwd: //Users/~/hello/test-project
10:55:44.705 [error] Kernel died Error: spawn /Users/~/.deno/bin/deno ENOENT
at Process.ChildProcess._handle.onexit (node:internal/child_process:286:19)
at onErrorNT (node:internal/child_process:484:16)
at processTicksAndRejections (node:internal/process/task_queues:82:21) {
errno: -2,
code: 'ENOENT',
syscall: 'spawn /Users/~/.deno/bin/deno',
path: '/Users/~/.deno/bin/deno',
spawnargs: [
'jupyter',
'--kernel',
'--conn',
'/Users/~/Library/Jupyter/runtime/kernel-v2-58051VDOaxp0yHEKY.json'
]
}
10:55:44.711 [error] Disposing kernel process due to an error Error: spawn /Users/~/.deno/bin/deno ENOENT
at Process.ChildProcess._handle.onexit (node:internal/child_process:286:19)
at onErrorNT (node:internal/child_process:484:16)
at processTicksAndRejections (node:internal/process/task_queues:82:21) {
errno: -2,
code: 'ENOENT',
syscall: 'spawn /Users/~/.deno/bin/deno',
path: '/Users/~/.deno/bin/deno',
spawnargs: [
'jupyter',
'--kernel',
'--conn',
'/Users/~/Library/Jupyter/runtime/kernel-v2-58051VDOaxp0yHEKY.json'
]
}
10:55:44.711 [error]
10:55:44.714 [error] Failed to connect raw kernel session: Error: The kernel died. Error: ... View Jupyter [log](command:jupyter.viewOutput) for further details.
10:55:44.714 [error] Failed to connect raw kernel session: Error: The kernel died. Error: ... View Jupyter [log](command:jupyter.viewOutput) for further details.
10:55:44.714 [warn] Failed to shutdown kernel, .deno./Users/~/.deno/bin/deno././users/~/.deno/bin/deno#jupyter#--kernel#--conn#{connection_file} TypeError: Cannot read properties of undefined (reading 'dispose')
at R_.shutdown (/Users/~/.vscode/extensions/ms-toolsai.jupyter-2024.7.0-darwin-arm64/dist/extension.node.js:304:13629)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at A_.shutdown (/Users/~/.vscode/extensions/ms-toolsai.jupyter-2024.7.0-darwin-arm64/dist/extension.node.js:304:22107)
10:55:44.714 [warn] Error occurred while trying to start the kernel, options.disableUI=false Error: The kernel died. Error: ... View Jupyter [log](command:jupyter.viewOutput) for further details.
> Kernel Id = .deno./Users/~/.deno/bin/deno././users/~/.deno/bin/deno#jupyter#--kernel#--conn#{connection_file}
> at new n (/Users/~/.vscode/extensions/ms-toolsai.jupyter-2024.7.0-darwin-arm64/dist/extension.node.js:98:4480)
> originalException = Error: spawn /Users/~/.deno/bin/deno ENOENT
> stdErr =
10:55:44.715 [warn] Kernel Error, context = start Error: The kernel died. Error: ... View Jupyter [log](command:jupyter.viewOutput) for further details.
> Kernel Id = .deno./Users/~/.deno/bin/deno././users/~/.deno/bin/deno#jupyter#--kernel#--conn#{connection_file}
> at new n (/Users/~/.vscode/extensions/ms-toolsai.jupyter-2024.7.0-darwin-arm64/dist/extension.node.js:98:4480)
> originalException = Error: spawn /Users/~/.deno/bin/deno ENOENT
> stdErr =
10:55:44.725 [info] Dispose Kernel '~/hello/test-project/Untitled-1.ipynb' associated with '~/hello/test-project/Untitled-1.ipynb'
10:55:44.726 [error] Error in execution Error: The kernel died. Error: ... View Jupyter [log](command:jupyter.viewOutput) for further details.
> Kernel Id = .deno./Users/~/.deno/bin/deno././users/~/.deno/bin/deno#jupyter#--kernel#--conn#{connection_file}
> at new n (/Users/~/.vscode/extensions/ms-toolsai.jupyter-2024.7.0-darwin-arm64/dist/extension.node.js:98:4480)
> originalException = Error: spawn /Users/~/.deno/bin/deno ENOENT
> stdErr =
10:55:44.726 [error] Error in execution (get message for cell) Error: The kernel died. Error: ... View Jupyter [log](command:jupyter.viewOutput) for further details.
> Kernel Id = .deno./Users/~/.deno/bin/deno././users/~/.deno/bin/deno#jupyter#--kernel#--conn#{connection_file}
> at new n (/Users/~/.vscode/extensions/ms-toolsai.jupyter-2024.7.0-darwin-arm64/dist/extension.node.js:98:4480)
> originalException = Error: spawn /Users/~/.deno/bin/deno ENOENT
> stdErr =
```
| bug,deno jupyter | low | Critical |
2,496,721,109 | transformers | The model's address is https://huggingface.co/Xenova/nllb-200-distilled-600M/tree/main/onnx。I don't know how to load encode.onnx and decoder.onnx, and successfully translate a sentence into another language. Can you help me write an inference code to achieve the translation effect through the encoder and decoder? thank you | ### Feature request
hello, the model's address is https://huggingface.co/Xenova/nllb-200-distilled-600M/tree/main/onnx. I don't know how to load encode.onnx and decoder.onnx, and successfully translate a sentence into another language. Can you help me write an inference code to achieve the translation effect through the encoder and decoder? thank you
### Motivation
hello, the model's address is https://huggingface.co/Xenova/nllb-200-distilled-600M/tree/main/onnx. I don't know how to load encode.onnx and decoder.onnx, and successfully translate a sentence into another language. Can you help me write an inference code to achieve the translation effect through the encoder and decoder? thank you
### Your contribution
hello, the model's address is https://huggingface.co/Xenova/nllb-200-distilled-600M/tree/main/onnx. I don't know how to load encode.onnx and decoder.onnx, and successfully translate a sentence into another language. Can you help me write an inference code to achieve the translation effect through the encoder and decoder? thank you | Feature request | low | Minor |
2,496,729,557 | pytorch | torch::jit::load invalid_path error | ### 🐛 Describe the bug
I'm trying to load a .pt model using `torch::jit::load`, but for some reason I get "open file failed because of errno 22 on fopen: Invalid argument, file path: ". I'm able to open the file with fopen like this:
```
FILE* file2 = fopen("models/model.pt", "r");
```
But when I try
`torch::jit::load("models/model.pt");`
I get the error mentioned before. Any ideas? Thanks!
### Versions
Collecting environment information...
PyTorch version: 2.3.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Home
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.30.0-rc4
Libc version: N/A
Python version: 3.12.2 | packaged by Anaconda, Inc. | (main, Feb 27 2024, 17:28:07) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-11-10.0.22631-SP0
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=2100
DeviceID=CPU0
Family=107
L2CacheSize=2048
L2CacheSpeed=
Manufacturer=AuthenticAMD
MaxClockSpeed=2100
Name=AMD Ryzen 5 3500U with Radeon Vega Mobile Gfx
ProcessorType=3
Revision=6145
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnxruntime==1.17.1
[pip3] torch==2.3.1
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 pypi_0 pypi
[conda] mkl-service 2.4.0 py312h2bbff1b_1
[conda] mkl_fft 1.3.8 py312h2bbff1b_0
[conda] mkl_random 1.2.4 py312h59b6b97_0
[conda] numpy 1.26.4 py312hfd52020_0
[conda] numpy-base 1.26.4 py312h4dde369_0
[conda] torch 2.3.1 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit | low | Critical |
2,496,764,814 | pytorch | GPU Precision drops significantly when kernel size is increased from 3 to 5 in Conv3D | ### 🐛 Describe the bug
# Precision bug
`nn.Conv3d` has a huge decrease in precision when the kernel size increases from 3 to 5 for input tensors with size > 16. This is true even if `torch.backends.cuda.matmul.allow_tf32` is turned off.
The maximum discrepancy between the `cpu` and `cuda` versions increases 1000x when the kernel size is changed from 3x3x3 to 5x5x5.

here's the code to generate the above table.
```python
import torch
import torch.nn as nn
import pandas as pd
torch.backends.cuda.matmul.allow_tf32 = False
# Function to perform 3D convolution on a given device
def conv_3d_device(conv3d, input_tensor, device):
x = input_tensor.to(device)
conv3d = conv3d.to(device)
with torch.no_grad():
output_tensor = conv3d(x)
return output_tensor.cpu()
# Parameters
n_channels = 1
sizes = [32, 64, 128] # Different sizes to test
kernel_sizes = [3, 5, 7, 9, 11] # Different kernel sizes to test
results = []
for k_sz in kernel_sizes:
for sz in sizes:
pad_sz = k_sz // 2
x = torch.randn(1, n_channels, sz, sz, sz)
conv3d = nn.Conv3d(in_channels=n_channels, out_channels=1, kernel_size=k_sz, stride=1, padding=pad_sz)
cpu_device = 'cpu'
y1 = conv_3d_device(conv3d, x, cpu_device)
gpu_device = 'cuda'
y2 = conv_3d_device(conv3d, x, gpu_device)
max_error = torch.max(torch.abs(y2 - y1)).item()
sum_diff = torch.sum(torch.abs(y2 - y1)).item()
n_diff = torch.sum((y2 != y1).int()).item()
n_approx = torch.sum((torch.abs(y2 - y1) < 1e-5).int()).item()
n_app_diff = y2.numel() - n_approx
result = {
'size': sz,
'kernel size': k_sz,
'max error': max_error,
'sum diff': sum_diff,
'num elements': y1.numel(),
'num diff elements': n_diff,
'num same elements': y1.numel() - n_diff,
'num close elements': n_approx
}
print(result)
results.append(result)
# Convert results to DataFrame and save to Excel
df = pd.DataFrame(results)
df.to_csv('conv3d_results_pytorch.csv', index=False)
```
### Versions
pytorch-ignite 0.3.0 pypi_0 pypi
torch 2.1.1+cu118 pypi_0 pypi
torchaudio 2.1.1+cu118 pypi_0 pypi
torchvision 0.16.1+cu118 pypi_0 pypi
cc @msaroufim @csarofeen @ptrblck @xwang233 | module: performance,module: cudnn,module: cuda,triaged | low | Critical |
2,496,839,689 | deno | listenQueue blocks main thread | Version: Deno 1.46.1
> I think that under certain conditions the listenQueue method blocks the main thread:
**test-kv.ts**
```ts
export const kv = await Deno.openKv();
kv.listenQueue(async () => {}); // BLOCKER!!!!
```
**test.ts**
```ts
import { kv } from "./test-kv.ts";
kv.get([]);
Deno.serve(() => new Response()); // does not start =(
```
> But if we remove listenQueue or add some log behind it, everything will work:
**test-kv.ts**
```ts
export const kv = await Deno.openKv();
kv.listenQueue(async () => {});
console.log(); // working!
```
This same code works fine on 1.44 | bug,ext/kv | low | Minor |
2,496,851,930 | go | google.golang.org/protobuf/proto: TestHasExtensionNoAlloc/Lazy failures | ```
#!watchflakes
default <- pkg == "google.golang.org/protobuf/proto" && test == "TestHasExtensionNoAlloc/Lazy"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8738268800853585153)):
=== RUN TestHasExtensionNoAlloc/Lazy
extension_test.go:156: proto.HasExtension should not allocate, but allocated 1.00x per run
--- FAIL: TestHasExtensionNoAlloc/Lazy (0.00s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,496,982,589 | kubernetes | feature(kubelet): add goroutines metric in the kubelet component | ### What would you like to be added?
I want to add a new metric to track the number of goroutines in kubelet. There seems to be no metric defined in kubelet to track the number of goroutines.
### Why is this needed?
Although we often use metrics such as `go_goroutines` or `go_sched_goroutines_goroutines` (queried in Prometheus as `go_goroutines{job="kubelet"}` or `go_sched_goroutines_goroutines{job="kubelet"}`) to view the number of active goroutines in kubelet, these metrics cannot tell us which operations the goroutines belong to. Therefore, I want to add such a metric to the kubelet component.
For example, there are currently many operations in kubelet that may spawn many goroutines:
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/pod_workers.go#L945
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/images/puller.go#L55
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/kuberuntime/kuberuntime_container.go#L843
In addition, kube-scheduler has implemented similar features. issue tracked: https://github.com/kubernetes/kubernetes/pull/112003 | sig/node,kind/feature,sig/instrumentation,triage/accepted | low | Major |
2,496,992,399 | bitcoin | test: WARNING: ThreadSanitizer: lock-order-inversion (potential deadlock) (pid=32090) |
To reproduce, use something like:
```
rm -rf ./bld-cmake/ && cmake -B ./bld-cmake -DAPPEND_CXXFLAGS='-std=c++23' -DCMAKE_C_COMPILER='clang' -DCMAKE_CXX_COMPILER='clang++' -DBUILD_GUI=ON -DBUILD_FUZZ_BINARY=ON -DBUILD_BENCH=ON -DWITH_ZMQ=ON -DWITH_ZMQ=ON -DBUILD_UTIL_CHAINSTATE=ON -DBUILD_KERNEL_LIB=ON -DSANITIZERS=thread && cmake --build ./bld-cmake --parallel $( nproc ) --
TSAN_OPTIONS="suppressions=$(pwd)/test/sanitizer_suppressions/tsan:halt_on_error=1:second_deadlock_stack=1" ./bld-cmake/src/test/test_bitcoin -t wallet_tests -l test_suite | Bug,Wallet,Tests | low | Critical |
2,497,003,034 | pytorch | Should make the doc of `nn.CrossEntropyLoss()` more clear | ### 📚 The doc issue
[The doc](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html) of `nn.CrossEntropyLoss()` explains the `target` tensor in a complex way, as shown below. *It's difficult to understand:

So from my understanding and experiments, these simple explanations below should be added to the doc above. *It's easy to understand:
- The `target` tensor whose size is different from `input` tensor is treated as class indices.
- The `target` tensor whose size is same as `input` tensor is the class probabilities which should be between `[0, 1]`.
And from what the doc says below and my experiments, when `target` tensor is treated as **class indices**, `softmax()` is used both for `input` and `target` tensor internally:
> The target that this criterion expects should contain either:
> - Class indices in the range ...
> ...
> Note that this case is equivalent to applying [LogSoftmax](https://pytorch.org/docs/stable/generated/torch.nn.LogSoftmax.html#torch.nn.LogSoftmax) on an input, followed by [NLLLoss](https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html#torch.nn.NLLLoss).
But when `target` tensor is treated as **class probabilities**, `softmax()` is used only for `input` tensor internally, that's why the example of `target` tensor as **class indices** in the doc doesn't use `softmax()` externally while the example of `target` tensor as **class probabilities** in the doc uses `softmax()` externally as shown below:

So, the doc should also say something like the following (you could also use the terms **class indices mode** and **class probabilities mode**):
- `softmax()` is used internally for `input` tensor, both when `target` tensor is treated as **class indices** and **class probabilities** so you don't need to use `softmax()` externally.
- `softmax()` is used internally for `target` tensor only when `target` tensor is treated as **class indices** so you should use `softmax()` externally for `target` tensor when `target` tensor is treated as **class probabilities**.
### Suggest a potential alternative/fix
_No response_
cc @svekars @brycebortree @tstatler @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: docs,module: nn,module: loss,triaged | low | Minor |
2,497,108,365 | go | proposal: bufio: add Scanner.Reset(io.Reader) | ### Proposal Details
It seems like an oversight that both bufio.Writer and bufio.Reader can reuse their memory with the Reset method, while Scanner does not have the same method. | Proposal | low | Minor |
2,497,142,696 | pytorch | Trying to build from source with use_flash_attention fails on windows due to fatal error C1189 | ### 🐛 Describe the bug
Trying to build PyTorch with CUDA from source with Flash Attention fails due to fatal error C1189:
```E:\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.41.34120\include\xkeycheck.h(54): fatal error C1189: #error: The C++ Standard Library forbids macroizing the keyword "bool". Enable warning C4005 to find the forbidden define.```
``` Assembling: E:\Python\pytorch\third_party\ideep\mkl-dnn\src\common\ittnotify\ittptmark64.asm
[5169/7857] Building CXX object third_party\ideep\mkl-dnn\...\gemm\s8x8s32\jit_avx2_u8_copy_sum_bn_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5172/7857] Building CXX object third_party\ideep\mkl-dnn\...\gemm\s8x8s32\jit_avx2_u8_copy_sum_at_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5175/7857] Building CXX object third_party\ideep\mkl-dnn\...\gemm\s8x8s32\jit_avx2_u8_copy_sum_an_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5187/7857] Building CXX object third_party\ideep\mkl-dnn\...bf16\jit_avx512_core_s16_24x8_copy_an_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5190/7857] Building CXX object third_party\ideep\mkl-dnn\...bf16\jit_avx512_core_s16_24x8_copy_bt_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5193/7857] Building CXX object third_party\ideep\mkl-dnn\...bf16\jit_avx512_core_s16_48x8_copy_an_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5196/7857] Building CXX object third_party\ideep\mkl-dnn\...bf16\jit_avx512_core_s16_24x8_copy_bn_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5197/7857] Building CXX object third_party\ideep\mkl-dnn\...x64.dir\gemm\f32\jit_avx2_f32_copy_bn_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5198/7857] Building CXX object third_party\ideep\mkl-dnn\...bf16\jit_avx512_core_s16_48x8_copy_bt_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5199/7857] Building CXX object third_party\ideep\mkl-dnn\...x64.dir\gemm\f32\jit_avx2_f32_copy_bt_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5200/7857] Building CXX object third_party\ideep\mkl-dnn\...bf16\jit_avx512_core_s16_48x8_copy_bn_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5201/7857] Building CXX object third_party\ideep\mkl-dnn\...bf16\jit_avx512_core_s16_24x8_copy_at_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5202/7857] Building CXX object third_party\ideep\mkl-dnn\...bf16\jit_avx512_core_s16_48x8_copy_at_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5203/7857] Building CXX object third_party\ideep\mkl-dnn\...x64.dir\gemm\f32\jit_avx2_f32_copy_an_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5204/7857] Building CXX object third_party\ideep\mkl-dnn\...x64.dir\gemm\f32\jit_avx2_f32_copy_at_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5207/7857] Building CXX object third_party\ideep\mkl-dnn\...\gemm\f32\jit_avx512_core_f32_copy_an_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5208/7857] Building CXX object third_party\ideep\mkl-dnn\..._x64.dir\gemm\f32\jit_avx_f32_copy_bt_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5209/7857] Building CXX object third_party\ideep\mkl-dnn\..._x64.dir\gemm\f32\jit_avx_f32_copy_bn_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5210/7857] Building CXX object third_party\ideep\mkl-dnn\..._x64.dir\gemm\f32\jit_avx_f32_copy_at_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5211/7857] Building CXX object third_party\ideep\mkl-dnn\...\gemm\f32\jit_avx512_core_f32_copy_bn_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5212/7857] Building CXX object third_party\ideep\mkl-dnn\..._x64.dir\gemm\f32\jit_avx_f32_copy_an_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5213/7857] Building CXX object third_party\ideep\mkl-dnn\...\gemm\f32\jit_avx512_core_f32_copy_bt_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5219/7857] Building CXX object third_party\ideep\mkl-dnn\...64.dir\gemm\f32\jit_sse41_f32_copy_bn_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5220/7857] Building CXX object third_party\ideep\mkl-dnn\...64.dir\gemm\f32\jit_sse41_f32_copy_an_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5228/7857] Building CXX object third_party\ideep\mkl-dnn\...64.dir\gemm\f32\jit_sse41_f32_copy_at_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5229/7857] Building CXX object third_party\ideep\mkl-dnn\...64.dir\gemm\f32\jit_sse41_f32_copy_bt_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5231/7857] Building CXX object third_party\ideep\mkl-dnn\....dir\gemm\s8x8s32\jit_avx2_u8_copy_bn_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5232/7857] Building CXX object third_party\ideep\mkl-dnn\...ir\gemm\f32\jit_sse41_kernel_b0_sgemm_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5233/7857] Building CXX object third_party\ideep\mkl-dnn\....dir\gemm\s8x8s32\jit_avx2_u8_copy_at_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5236/7857] Building CXX object third_party\ideep\mkl-dnn\...4.dir\gemm\f32\jit_sse41_kernel_sgemm_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5237/7857] Building CXX object third_party\ideep\mkl-dnn\....dir\gemm\s8x8s32\jit_avx2_u8_copy_bt_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5240/7857] Building CXX object third_party\ideep\mkl-dnn\....dir\gemm\s8x8s32\jit_avx2_u8_copy_an_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5242/7857] Building CXX object third_party\ideep\mkl-dnn\...\gemm\s8x8s32\jit_avx2_u8_copy_sum_bt_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5243/7857] Building CXX object third_party\ideep\mkl-dnn\...\s8x8s32\jit_avx2_vnni_u8_copy_sum_an_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5244/7857] Building CXX object third_party\ideep\mkl-dnn\...gemm\s8x8s32\jit_avx2_vnni_u8_copy_at_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5245/7857] Building CXX object third_party\ideep\mkl-dnn\...mm\s8x8s32\jit_avx512_core_u8_copy_an_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5246/7857] Building CXX object third_party\ideep\mkl-dnn\...gemm\s8x8s32\jit_avx2_vnni_u8_copy_an_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5247/7857] Building CXX object third_party\ideep\mkl-dnn\...gemm\s8x8s32\jit_avx2_vnni_u8_copy_bn_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5248/7857] Building CXX object third_party\ideep\mkl-dnn\...gemm\s8x8s32\jit_avx2_vnni_u8_copy_bt_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5249/7857] Building CXX object third_party\ideep\mkl-dnn\...\s8x8s32\jit_avx2_vnni_u8_copy_sum_bt_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5250/7857] Building CXX object third_party\ideep\mkl-dnn\...\s8x8s32\jit_avx2_vnni_u8_copy_sum_bn_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5252/7857] Building CXX object third_party\ideep\mkl-dnn\...mm\s8x8s32\jit_avx512_core_u8_copy_bn_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5253/7857] Building CXX object third_party\ideep\mkl-dnn\...8x8s32\jit_avx512_core_u8_copy_sum_bn_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5254/7857] Building CXX object third_party\ideep\mkl-dnn\...8x8s32\jit_avx512_core_u8_copy_sum_an_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5255/7857] Building CXX object third_party\ideep\mkl-dnn\...\s8x8s32\jit_avx2_vnni_u8_copy_sum_at_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5257/7857] Building CXX object third_party\ideep\mkl-dnn\...mm\s8x8s32\jit_avx512_core_u8_copy_bt_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5259/7857] Building CXX object third_party\ideep\mkl-dnn\...8s32\jit_avx_kernel_b0_r_gemm_s8u8s32_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5260/7857] Building CXX object third_party\ideep\mkl-dnn\...8x8s32\jit_avx512_core_u8_copy_sum_at_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5261/7857] Building CXX object third_party\ideep\mkl-dnn\...mm\s8x8s32\jit_avx512_core_u8_copy_at_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5262/7857] Building CXX object third_party\ideep\mkl-dnn\...8x8s32\jit_avx512_core_u8_copy_sum_bt_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5263/7857] Building CXX object third_party\ideep\mkl-dnn\...8x8s32\jit_avx_kernel_b0_gemm_s8u8s32_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5264/7857] Building CXX object third_party\ideep\mkl-dnn\...s8x8s32\jit_avx_kernel_b_gemm_s8u8s32_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5265/7857] Building CXX object third_party\ideep\mkl-dnn\...m\s8x8s32\jit_avx_kernel_gemm_s8u8s32_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5266/7857] Building CXX object third_party\ideep\mkl-dnn\...8s32\jit_avx_kernel_b0_b_gemm_s8u8s32_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5267/7857] Building CXX object third_party\ideep\mkl-dnn\...4.dir\gemm\s8x8s32\jit_avx_u8_copy_bt_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5268/7857] Building CXX object third_party\ideep\mkl-dnn\...8s32\jit_avx_kernel_b0_c_gemm_s8u8s32_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5269/7857] Building CXX object third_party\ideep\mkl-dnn\...4.dir\gemm\s8x8s32\jit_avx_u8_copy_an_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5270/7857] Building CXX object third_party\ideep\mkl-dnn\...4.dir\gemm\s8x8s32\jit_avx_u8_copy_at_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5271/7857] Building CXX object third_party\ideep\mkl-dnn\...4.dir\gemm\s8x8s32\jit_avx_u8_copy_bn_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5272/7857] Building CXX object third_party\ideep\mkl-dnn\...s8x8s32\jit_avx_kernel_r_gemm_s8u8s32_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5273/7857] Building CXX object third_party\ideep\mkl-dnn\...r\gemm\s8x8s32\jit_avx_u8_copy_sum_at_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5274/7857] Building CXX object third_party\ideep\mkl-dnn\...r\gemm\s8x8s32\jit_avx_u8_copy_sum_bn_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5275/7857] Building CXX object third_party\ideep\mkl-dnn\...r\gemm\s8x8s32\jit_avx_u8_copy_sum_an_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5276/7857] Building CXX object third_party\ideep\mkl-dnn\...r\gemm\s8x8s32\jit_avx_u8_copy_sum_bt_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5277/7857] Building CXX object third_party\ideep\mkl-dnn\...s8x8s32\jit_avx_kernel_c_gemm_s8u8s32_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5278/7857] Building CXX object third_party\ideep\mkl-dnn\...32\jit_sse41_kernel_b0_r_gemm_s8u8s32_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5279/7857] Building CXX object third_party\ideep\mkl-dnn\...dir\gemm\s8x8s32\jit_sse41_u8_copy_an_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5280/7857] Building CXX object third_party\ideep\mkl-dnn\...x8s32\jit_sse41_kernel_r_gemm_s8u8s32_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5281/7857] Building CXX object third_party\ideep\mkl-dnn\...8s32\jit_sse41_kernel_b0_gemm_s8u8s32_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5282/7857] Building CXX object third_party\ideep\mkl-dnn\...32\jit_sse41_kernel_b0_b_gemm_s8u8s32_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5283/7857] Building CXX object third_party\ideep\mkl-dnn\...s8x8s32\jit_sse41_kernel_gemm_s8u8s32_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5284/7857] Building CXX object third_party\ideep\mkl-dnn\...32\jit_sse41_kernel_b0_c_gemm_s8u8s32_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5285/7857] Building CXX object third_party\ideep\mkl-dnn\...x8s32\jit_sse41_kernel_c_gemm_s8u8s32_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5286/7857] Building CXX object third_party\ideep\mkl-dnn\...gemm\s8x8s32\jit_sse41_u8_copy_sum_an_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5287/7857] Building CXX object third_party\ideep\mkl-dnn\...dir\gemm\s8x8s32\jit_sse41_u8_copy_at_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5288/7857] Building CXX object third_party\ideep\mkl-dnn\...x8s32\jit_sse41_kernel_b_gemm_s8u8s32_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5289/7857] Building CXX object third_party\ideep\mkl-dnn\...gemm\s8x8s32\jit_sse41_u8_copy_sum_at_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5290/7857] Building CXX object third_party\ideep\mkl-dnn\...dir\gemm\s8x8s32\jit_sse41_u8_copy_bn_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5291/7857] Building CXX object third_party\ideep\mkl-dnn\...dir\gemm\s8x8s32\jit_sse41_u8_copy_bt_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5292/7857] Building CXX object third_party\ideep\mkl-dnn\...gemm\s8x8s32\jit_sse41_u8_copy_sum_bt_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5293/7857] Building CXX object third_party\ideep\mkl-dnn\...gemm\s8x8s32\jit_sse41_u8_copy_sum_bn_kern_autogen.cpp.obj
cl : Command line warning D9025 : overriding '/O2' with '/Od'
[5495/7857] Building CXX object third_party\kineto\libkineto\CMakeFiles\kineto_api.dir\src\ThreadUtil.cpp.obj
E:\Python\pytorch\third_party\kineto\libkineto\src\ThreadUtil.cpp(19): warning C4005: 'WIN32_LEAN_AND_MEAN': macro redefinition
E:\Python\pytorch\third_party\kineto\libkineto\src\ThreadUtil.cpp(19): note: 'WIN32_LEAN_AND_MEAN' previously declared on the command line
E:\Python\pytorch\third_party\kineto\libkineto\src\ThreadUtil.cpp(20): warning C4005: 'NOGDI': macro redefinition
E:\Python\pytorch\third_party\kineto\libkineto\src\ThreadUtil.cpp(20): note: 'NOGDI' previously declared on the command line
[5500/7857] Building CXX object third_party\mimalloc\CMakeFiles\mimalloc-static.dir\src\heap.c.obj
FAILED: third_party/mimalloc/CMakeFiles/mimalloc-static.dir/src/heap.c.obj
"E:\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.41.34120\bin\Hostx64\x64\cl.exe" /nologo /TP -DFLASHATTENTION_DISABLE_ALIBI -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DMI_STATIC_LIB -DNOMINMAX -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_EXTERNAL_MZCRC -DUSE_MIMALLOC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_UCRT_LEGACY_INFINITY -IE:\Python\pytorch\build\aten\src -IE:\Python\pytorch\aten\src -IE:\Python\pytorch\build -IE:\Python\pytorch -IE:\Python\pytorch\cmake\..\third_party\benchmark\include -IE:\Python\pytorch\third_party\onnx -IE:\Python\pytorch\build\third_party\onnx -IE:\Python\pytorch\nlohmann -IE:\Python\pytorch\third_party\mimalloc\include -external:IE:\Python\pytorch\build\third_party\gloo -external:IE:\Python\pytorch\cmake\..\third_party\gloo -external:IE:\Python\pytorch\cmake\..\third_party\googletest\googlemock\include -external:IE:\Python\pytorch\cmake\..\third_party\googletest\googletest\include -external:IE:\Python\pytorch\third_party\protobuf\src -external:IE:\Python\Anaconda\envs\pytorchbuild_env\Library\include -external:IE:\Python\pytorch\third_party\XNNPACK\include -external:IE:\Python\pytorch\third_party\ittapi\include -external:IE:\Python\pytorch\cmake\..\third_party\eigen -external:I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\include" -external:IE:\Python\pytorch\third_party\ideep\mkl-dnn\include\oneapi\dnnl -external:IE:\Python\pytorch\third_party\ideep\include -external:IE:\Python\pytorch\INTERFACE -external:IE:\Python\pytorch\third_party\nlohmann\include -external:W0 /DWIN32 /D_WINDOWS /GR /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE /wd4624 /wd4068 /wd4067 /wd4267 /wd4661 /wd4717 /wd4244 /wd4804 /wd4273 /O2 /Ob2 /DNDEBUG /bigobj -DNDEBUG -std:c++17 -MD -DMKL_HAS_SBGEMM -DMKL_HAS_SHGEMM 
-DCAFFE2_USE_GLOO /Zc:__cplusplus /showIncludes /Fothird_party\mimalloc\CMakeFiles\mimalloc-static.dir\src\heap.c.obj /Fdthird_party\mimalloc\CMakeFiles\mimalloc-static.dir\mimalloc-static.pdb /FS -c E:\Python\pytorch\third_party\mimalloc\src\heap.c
E:\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.41.34120\include\xkeycheck.h(54): fatal error C1189: #error: The C++ Standard Library forbids macroizing the keyword "bool". Enable warning C4005 to find the forbidden define.
[5502/7857] Building CXX object third_party\mimalloc\CMakeFiles\mimalloc-static.dir\src\alloc.c.obj
FAILED: third_party/mimalloc/CMakeFiles/mimalloc-static.dir/src/alloc.c.obj
"E:\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.41.34120\bin\Hostx64\x64\cl.exe" /nologo /TP -DFLASHATTENTION_DISABLE_ALIBI -DIDEEP_USE_MKL -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DMI_STATIC_LIB -DNOMINMAX -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_EXTERNAL_MZCRC -DUSE_MIMALLOC -DWIN32_LEAN_AND_MEAN -D_CRT_SECURE_NO_DEPRECATE=1 -D_UCRT_LEGACY_INFINITY -IE:\Python\pytorch\build\aten\src -IE:\Python\pytorch\aten\src -IE:\Python\pytorch\build -IE:\Python\pytorch -IE:\Python\pytorch\cmake\..\third_party\benchmark\include -IE:\Python\pytorch\third_party\onnx -IE:\Python\pytorch\build\third_party\onnx -IE:\Python\pytorch\nlohmann -IE:\Python\pytorch\third_party\mimalloc\include -external:IE:\Python\pytorch\build\third_party\gloo -external:IE:\Python\pytorch\cmake\..\third_party\gloo -external:IE:\Python\pytorch\cmake\..\third_party\googletest\googlemock\include -external:IE:\Python\pytorch\cmake\..\third_party\googletest\googletest\include -external:IE:\Python\pytorch\third_party\protobuf\src -external:IE:\Python\Anaconda\envs\pytorchbuild_env\Library\include -external:IE:\Python\pytorch\third_party\XNNPACK\include -external:IE:\Python\pytorch\third_party\ittapi\include -external:IE:\Python\pytorch\cmake\..\third_party\eigen -external:I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.6\include" -external:IE:\Python\pytorch\third_party\ideep\mkl-dnn\include\oneapi\dnnl -external:IE:\Python\pytorch\third_party\ideep\include -external:IE:\Python\pytorch\INTERFACE -external:IE:\Python\pytorch\third_party\nlohmann\include -external:W0 /DWIN32 /D_WINDOWS /GR /EHsc /Zc:__cplusplus /bigobj /FS /utf-8 -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE /wd4624 /wd4068 /wd4067 /wd4267 /wd4661 /wd4717 /wd4244 /wd4804 /wd4273 /O2 /Ob2 /DNDEBUG /bigobj -DNDEBUG -std:c++17 -MD -DMKL_HAS_SBGEMM -DMKL_HAS_SHGEMM 
-DCAFFE2_USE_GLOO /Zc:__cplusplus /showIncludes /Fothird_party\mimalloc\CMakeFiles\mimalloc-static.dir\src\alloc.c.obj /Fdthird_party\mimalloc\CMakeFiles\mimalloc-static.dir\mimalloc-static.pdb /FS -c E:\Python\pytorch\third_party\mimalloc\src\alloc.c
E:\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.41.34120\include\xkeycheck.h(54): fatal error C1189: #error: The C++ Standard Library forbids macroizing the keyword "bool". Enable warning C4005 to find the forbidden define.
[5513/7857] Building CXX object third_party\kineto\libkineto\CMakeFiles\kineto_base.dir\src\output_json.cpp.obj
ninja: build stopped: subcommand failed.```
### Versions
```(pytorchbuild_env) E:\Python>python collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: version 3.27.4
Libc version: N/A
Python version: 3.12.4 | packaged by Anaconda, Inc. | (main, Jun 18 2024, 15:03:56) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-11-10.0.22631-SP0
Is CUDA available: N/A
CUDA runtime version: 12.6.68
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3080
Nvidia driver version: 560.94
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture=9
CurrentClockSpeed=3801
DeviceID=CPU0
Family=107
L2CacheSize=3072
L2CacheSpeed=
Manufacturer=AuthenticAMD
MaxClockSpeed=3801
Name=AMD Ryzen 5 3600X 6-Core Processor
ProcessorType=3
Revision=28928
Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] mypy==1.10.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] numpydoc==1.7.0
[pip3] optree==0.12.1
[conda] _anaconda_depends 2024.06 py312_mkl_2
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h6b88ed4_46358
[conda] mkl-include 2024.2.1 pypi_0 pypi
[conda] mkl-service 2.4.0 py312h2bbff1b_1
[conda] mkl-static 2024.2.1 pypi_0 pypi
[conda] mkl_fft 1.3.8 py312h2bbff1b_0
[conda] mkl_random 1.2.4 py312h59b6b97_0
[conda] numpy 1.26.4 py312hfd52020_0
[conda] numpy-base 1.26.4 py312h4dde369_0
[conda] numpydoc 1.7.0 py312haa95532_0
[conda] optree 0.12.1 pypi_0 pypi```
cc @malfet @seemethere @peterjc123 @mszhanyi @skyline75489 @nbcsm @iremyux @Blackhex @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki | module: build,module: windows,triaged,module: sdpa | low | Critical |
2,497,151,210 | vscode | Screen cheese on multi-file diff editor scrollbar | ERROR: type should be string, got "\n\nhttps://github.com/user-attachments/assets/a40310ac-7a44-42d8-abb7-29edcdc81aac" | polish,multi-diff-editor | low | Minor |
2,497,267,921 | flutter | Adding font scaling based on system font size for accessibility on desktop | ### Use case
According to <https://docs.flutter.dev/ui/accessibility-and-internationalization/accessibility#large-fonts>, font scaling based on Android and iOS system-wide font settings works.
I tested this with GNOME on Linux, and changing the interface font size did not adjust the font size in Flutter. I suspect that the same applies to Windows and macOS devices.
Font scaling on desktop would be wonderful for high pixel-density screens, and for the sake of accessibility.
### Proposal
It would be great if font scaling based on system settings could also work for desktop. It seems better to have a single API for font scaling in Flutter apps than to depend on a combination of behavior for phone vs desktop platforms.
Thanks! 😀 | c: new feature,a: accessibility,a: typography,c: proposal,a: desktop,P2,team-accessibility,triaged-accessibility | low | Major |
2,497,301,881 | yt-dlp | Add support for videos on vmware.com | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
EU
### Example URLs
- https://www.vmware.com/explore/video-library/video/6360760233112
- https://www.vmware.com/explore/video-library/video/6360759464112
dozens of other video links [here](https://github.com/lamw/vmware-explore-2024-session-urls/blob/master/vmware-explore-us.md)
### Provide a description that is worded well enough to be understood
These are videos from VMware's technology conference. They are free to watch for everyone without login, from what I can tell.
They have a `master.m3u8` that gets loaded through JavaScript, and if you feed that m3u8 link to yt-dlp directly, the download works fine, so it's just a matter of extracting the m3u8 link from the page/JS.
Playback seems to use the Brightcove player, which is already supported for other sites in yt-dlp.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.vmware.com/explore/video-library/video/6360760233112']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.08.26.232811 from yt-dlp/yt-dlp-nightly-builds [41be32e78] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 2022-03-17-git-242c07982a-full_build-www.gyan.dev (setts), ffprobe 2022-03-17-git-242c07982a-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-13.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1831 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.08.26.232811 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.08.26.232811 from yt-dlp/yt-dlp-nightly-builds)
[generic] Extracting URL: https://www.vmware.com/explore/video-library/video/6360760233112
[generic] 6360760233112: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] 6360760233112: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://www.vmware.com/explore/video-library/video/6360760233112
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1626, in wrapper
File "yt_dlp\YoutubeDL.py", line 1761, in __extract_info
File "yt_dlp\extractor\common.py", line 740, in extract
File "yt_dlp\extractor\generic.py", line 2526, in _real_extract
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.vmware.com/explore/video-library/video/6360760233112
```
| site-request | low | Critical |
2,497,329,225 | vscode | Bad line move indentation adjustment | Have the following TypeScript code:
```ts
function something() {
const x = 3;
// if (true) {
// }
}
```
Move the line `const x = 3;` down twice. Observe the result:
```ts
function something() {
// if (true) {
// }
const x = 3;
}
```
https://github.com/user-attachments/assets/84f6b38d-ca63-4852-8b17-070cedb4688e
| bug,editor-autoindent | low | Minor |
2,497,350,472 | opencv | C++ no includes on get-started page on OpenCV.org | ### Describe the doc issue
The `#include` lines are not actually showing what to include.

https://opencv.org/get-started/
### Fix suggestion
_No response_ | category: documentation | low | Major |
2,497,352,810 | bitcoin | CMake `CheckPIESupported` doesn't always work | Seen as part of this thread: https://github.com/hebasto/bitcoin/issues/341#issuecomment-2321282757.
It looks like CMake's https://cmake.org/cmake/help/latest/module/CheckPIESupported.html check isn't quite equivalent to what we used to do in Autotools (having `-pie` and `-fPIE` as part of our hardened flags), such that now the user will sometimes need to manually pass flags like `-fPIE`. In this case, on a FreeBSD system.
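For reference, the CMake-side wiring usually looks like the sketch below (target name hypothetical); per the CMake docs, `check_pie_supported()` only verifies link-time support, and `POSITION_INDEPENDENT_CODE` still has to be set on targets explicitly:

```cmake
# Must be called before its result is consumed; sets the
# CMAKE_<LANG>_LINK_PIE_SUPPORTED cache variables.
include(CheckPIESupported)
check_pie_supported(OUTPUT_VARIABLE pie_output LANGUAGES CXX)
if(NOT CMAKE_CXX_LINK_PIE_SUPPORTED)
  message(WARNING "PIE is not supported at link time: ${pie_output}")
endif()

add_executable(bitcoind main.cpp)  # hypothetical target
set_target_properties(bitcoind PROPERTIES POSITION_INDEPENDENT_CODE TRUE)
```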
It's not entirely clear if this is a bug in CMake or something else; however, we should fix this, as users/developers should not need to manually pass `PIC`/`PIE`-related flags for a working compilation, aside from it being a missing hardening feature. | Build system,Upstream | low | Critical |
2,497,400,774 | vscode | New version of vscode breaks git bash terminal integration in Windows 11 | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.92.2 (user setup)
- OS Version: Microsoft Windows 11 Home Single Language 10.0.22631 compilation 22631
Git version: Git for Windows v2.43.0
Steps to Reproduce:
1. Install the mentioned vscode version
2. Install the Git Version
3. Open the git bash terminal in vscode
4. The following error is shown

This started happening when I updated VS Code to the current version; with the previous version, Git Bash was working correctly.
I also tried updating Git for Windows, but the same error is shown.
If I run `echo $HOME` in the integrated terminal, the output is:
$ echo $HOME
odrisers # output
But if I run it inside the standalone Git Bash app, it seems to behave normally:

| bug,confirmation-pending,terminal-shell-git-bash | low | Critical |
2,497,437,001 | deno | Inconsistencies of unrelated but nested workspaces | Version: Deno 1.46.2
Repro repo: https://github.com/albnnc/deno-repro-20240830
Possibly related: https://github.com/denoland/deno/issues/25226
Should the following cases work, or do I have a wrong understanding of workspaces in Deno?
## Possibly wrong warning of improper "patch" usage
```sh
cd internal
deno run ./mod.ts
```

The "internal" workspace is nested in terms of folder structure only (it is not referenced in the "workspace" field of the outer deno.jsonc), but it looks like Deno thinks that `internal/deno.jsonc` is part of the outer workspace, which is wrong. The warning message appears to be incorrect.
## Module resolution differs between implicit and explicit Deno config cases
```sh
cd internal
deno info ./mod.ts
deno info ./mod.ts --config ./deno.jsonc
```

As far as I can understand, explicitly referencing the deno.jsonc that is already in the CWD shouldn't change anything. This behaviour breaks [esbuild_deno_loader](https://github.com/lucacasonato/esbuild_deno_loader) in some cases. | bug | low | Minor |
2,497,445,861 | godot | Animated TileMapLayer Tiles Won't Pause When The Tree Is Paused | ### Tested versions
Godot v.4.3.stable
### System information
Windows 10 - Godot v4.3.stable
### Issue description
As the title says, animated tiles created in a TileSet resource won't stop animating when the scene tree is paused, unlike all other processes, which pause as expected. Example below.

### Steps to reproduce
Create an animated tile in a TileSet resource, assign it to a TileMapLayer, and toggle the tree's `paused` property via a script.
### Minimal reproduction project (MRP)
[mrp_001.zip](https://github.com/user-attachments/files/16818656/mrp_001.zip)
| enhancement,discussion,topic:2d | low | Major |