| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,471,540,590 | godot | Audio `Beat Count` import settings displays wrong data. | ### Tested versions
- Reproducible in v4.3.stable.mono.official [77dcf97d8]
### System information
Windows 10 - Vulkan (forward+) - dedicated
### Issue description
Opening the audio importer window can load Beat Count values incorrectly.
The displayed beat count is affected by the previously opened audio importer window, which somehow multiplies it.
This seems to be only a display bug.
Closing and opening the file again will display the correct saved values.
https://github.com/user-attachments/assets/a68a9980-aac5-4efe-a7ba-0491070fd709
### Steps to reproduce
- Open an audio file with low Beat Count
- Open an audio file with high Beat Count
Result: Value shown is wrong.
- Open same audio again
Result: Correct values
### Minimal reproduction project (MRP)
interactive_music demo.
Open "tower_battle_intro", then "tower_battle" -> Wrong data displayed
[interactive_music.zip](https://github.com/user-attachments/files/16645161/interactive_music.zip)
| bug,topic:editor,topic:audio | low | Critical |
2,471,549,350 | svelte | Svelte 5: Misleading/missing errors on non-existent sub-runes | ### Describe the bug
There still appear to be some situations where you falsely get this error:
> rune_outside_svelte
> The `$state` rune is only available inside `.svelte` and `.svelte.js/ts` files
Additionally, on build the offending rune is just ignored and no error is thrown.
Previous issues related to this:
- #12031
- #12829
### Reproduction
```js
// test.svelte.js
class State {
field = $state.x({ value: 0 });
}
export const state = $state(new State());
```
[REPL](https://svelte-5-preview.vercel.app/#H4sIAAAAAAAACmWQwW6DQAxEf8VaVQqoCNorAaR-Q4-lBwpG2nTxrrBJU6323yvYJkHK0SN75nm8GrVBVuWHV9RNqEr15pzKlPy6deAzGkGVKbbL3K9KFaXSOtGWGOaFkKFoWqq4n7WTpqVW9OTsLOCBpROEAONsJzjkhSBLHi3yEx-OLVXF_Y6qr0XEEljqje6_a5-kUDfRJR81mgFq8HDuzILlXs43CZ7hFULYEDYDLsE_bIU1NAY1KlOTHfSocVClzAuG7NbEjvXeyIn3bfSmY4b37Um_pl4Zn2LqJbnBvkBIjy2FlvCyldNbYvkv6HqQEP5EuyRNj49wn-EPJ3lDfrIBAAA=)
Without the export you get:
> `$state.x` is not a valid rune
(Though in the REPL the previous error remains until validity is restored.)
Same error *within* a `.svelte` file: [REPL](https://svelte-5-preview.vercel.app/#H4sIAAAAAAAACmWOwWrDMAxAf0WIHRIW0u3qJYF-w47LDpmjgKkrh0hpO4z_fTjZusGOejzpKeLkPAmat4g8nAkNHucZK9TPOQ9yIa-EFUpYF5tJsyMTZnWBBZaVSeDQ9dyIXdysXc-9Wj-IwKsOShAz6HVy5Edo4UEyrW9FhMvgVzLwBKl8yVLqeVsOLAqbdvcLput-ryiz3Bx-a9x8rKqBIbD1zp7aWJTQdvuF-qd7z_3B9YbgEZ4hpe_HnT2JgfjPSjm6hzqs8BxGNzka0eiyUnpPX-Z9zFBLAQAA)
---
Building a route with this code does not cause any errors, the server code is transformed to:
```js
class State {
field = $state.x({ value: 0 });
}
const state = new State();
```
Client code:
```js
class d {
constructor() {
o(this, "field", $state.x({
value: 0
}))
}
}
const v = _(new d);
```
### Logs
_No response_
### System Info
```shell
REPL & Kit with svelte@5.0.0-next.223
```
### Severity
annoyance | bug,compiler | low | Critical |
2,471,549,651 | tailwindcss | [v4] It seems that Oxide can't work with CSS Modules |
**What version of Tailwind CSS are you using?**
`v4.0.0-alpha.19`
**What build tool (or framework if it abstracts the build tool) are you using?**
vite: `v5.4.1`, @tailwindcss/vite: `4.0.0-alpha.19`
**What version of Node.js are you using?**
`v22.6.0`
**What browser are you using?**
Firefox
**What operating system are you using?**
macOS
**Reproduction URL**
https://stackblitz.com/edit/vitejs-vite-pzkrfk?file=src%2Fmain.ts
But it seems that [the node version of stackblitz](https://developer.stackblitz.com/codeflow/codeflow-faq#can-i-change-the-node-version) does not support TailwindCSS 4.0.
**Describe your issue**
I tried to use `prose.module.css`:
```css
@import '@fontsource-variable/fira-code' layer(base);
@theme {
--font-family-mono: 'Fira Code Variable', ui-monospace, SFMono-Regular, Menlo,
Monaco, Consolas, 'Liberation Mono', 'Courier New', monospace;
}
.prose h1 {
@apply text-4xl font-[900] lg:text-5xl font-mono;
}
```
But it looks like the `@apply` rules are not being compiled; the same rules in `globals.css` work.
<img width="1116" alt="截屏2024-08-17 22 29 51" src="https://github.com/user-attachments/assets/8243c825-f31f-42f6-84ad-c730e5b82b29">
| v4 | low | Critical |
2,471,564,501 | ui | [bug]: Wrong Bar Chart Tooltip For Horizontal Layout On Mobile | ### Describe the bug
When tapping on any bar in the chart, the tooltip for the first bar always appears first, as shown in the video below.
https://github.com/user-attachments/assets/86a5a274-7dd5-4878-a9cd-55eb8e610d98
### Affected component/components
Bar Chart
### How to reproduce
1. Open the bar chart page on the documentation site using a mobile device or Chrome's mobile simulator.
2. Tap on any bar other than the first one in a horizontal bar chart.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Windows and Android, Google Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,471,576,371 | TypeScript | Signatures with fewer parameters aren't assignable to compatible targets with more when their rest param is an instantiated `NoInfer` | ### 🔎 Search Terms
NoInfer signature parameters list rest
### 🕗 Version & Regression Information
- This is the behavior in every version I tried
### ⏯ Playground Link
https://www.typescriptlang.org/play/?ts=5.6.0-dev.20240817#code/CYUwxgNghgTiAEAzArgOzAFwJYHtXzCgggB4BBeEADwxFWAGd44pg8IBPeNAa1RwDuqANoBdAHwAKAFDx4sAOYAueJIB0GxQxUA5HAElUiEDHLiAlPAC84+ADccWYABpZ8DWq0qy0894Dc0mB4DBjwWEYmcMDWBEQQkpJQKqjIALYARiaWNvAA3gC+zvAAjMUATOaBAPTVcvUNjfAAegD80tKgkLAIKOjYeHHE5eSUNHSMzCCs7Fy8-EJiUm6KKuqaMAra8GQ5tg5OrnIeXju+AUEhYRHGMNHlsYTDJMKpmSbFb1kwEonJ8F9stZbIVimV4JUanV6m1pEA
### 💻 Code
```ts
declare function call<A extends readonly unknown[]>(
arg: (...args: NoInfer<A>) => void,
...args: A
): A;
// Argument of type '(a: number) => void' is not assignable to parameter of type '(...args: NoInfer<[number, number]>) => void'.
// Types of parameters 'a' and 'args' are incompatible.
// Type '[number, number]' is not assignable to type '[a: number]'.
// Source has 2 element(s) but target allows only 1.(2345)
const inferred = call((a: number) => {}, 1, 2);
// ^? function call<[number, number]>(arg: (...args: NoInfer<[number, number]>) => void, args_0: number, args_1: number): [number, number]
declare function call2<A extends readonly unknown[]>(
arg: (...args: A) => void,
...args: A
): A;
const inferred2 = call2<[number, number]>((a: number) => {}, 1, 2);
// ^? const inferred2: [number, number]
```
### 🙁 Actual behavior
The first call fails to typecheck
### 🙂 Expected behavior
Both of those calls use the same arguments. The first one is using `NoInfer` so the covariant inference can get prioritized. That prevents the instantiated signature from typechecking like the second one. Note that the first call infers the same type argument as the one that is supplied explicitly to the second call: `[number, number]`
### Additional information about the issue
I think that this can be fixed by normalizing `NoInfer<[A, B]>` to `[NoInfer<A>, NoInfer<B>]`. This would be similar to what `instantiateMappedTupleType` does today at times. | Help Wanted,Possible Improvement | low | Minor |
2,471,594,137 | node | Stalled Issue action is not working | I noticed that #40354 has been labelled `stalled` for a while. While it's received activity in the past few hours, before that, there was no activity for 2 months (since https://github.com/nodejs/node/pull/40354#issuecomment-2158551355).
Another case is #51567.
The action is set up to close issues after 30 days of no activity after being labelled stale, yet that didn't occur? | meta | low | Minor |
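The 30-day rule described above can be sketched as a simple check; the function name and timestamps here are hypothetical illustrations, not the actual action's implementation:

```python
from datetime import datetime, timedelta, timezone

STALE_CLOSE_AFTER = timedelta(days=30)

def should_close(labelled_stale_at: datetime, last_activity_at: datetime,
                 now: datetime) -> bool:
    """Close when 30 days pass with no activity after the `stalled` label."""
    # Activity after labelling resets the clock to the most recent event.
    clock_start = max(labelled_stale_at, last_activity_at)
    return now - clock_start >= STALE_CLOSE_AFTER

now = datetime(2024, 8, 18, tzinfo=timezone.utc)
labelled = now - timedelta(days=60)
quiet_since = now - timedelta(days=60)   # no activity for 2 months
print(should_close(labelled, quiet_since, now))  # → True
```

By this logic the issues above should have been closed, which suggests the action itself is not running as configured.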
2,471,606,312 | material-ui | [docs-infra] Support full screen / screen-size switcher for demos | ## Context
- To better demonstrate the responsiveness feature on Toolpad Core's `DashboardLayout` component https://github.com/mui/mui-toolpad/pull/3750 and other responsive components we create in the future as part of Toolpad Core
## Benchmarks
- Based on the screen-size switcher at https://ui.shadcn.com/blocks, we could add a version of that to our demos as well, placed similarly to the styling-system switcher on https://mui.com/base-ui/react-autocomplete/
<img width="858" alt="Screenshot 2024-07-08 at 7 11 07 AM" src="https://github.com/mui/mui-toolpad/assets/19550456/258dda10-dd12-4d6a-a7b3-06a8a1e07822">
- DevExpress full-screen button https://js.devexpress.com/React/Demos/WidgetsGallery/Demo/Common/NavigationOverview/MaterialBlueLight/
<img width="1119" alt="SCR-20240817-qvjd" src="https://github.com/user-attachments/assets/d05968f3-becf-41c7-834b-88c1d7ee8b9a">
## Solution
**Search keywords**: | scope: docs-infra | low | Minor |
2,471,608,674 | stable-diffusion-webui | [Bug]: In ver1.10.1, loading is slow when there are a large number of models. | ### Checklist
- [x] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
After updating from the previous version 1.9.x to version 1.10.1, loading of various tabs became slower.
It seems data may not be loading properly, as even in the prompt input field, it indicates that existing LoRA models are not found.
When trying to disable extensions, the list doesn't load.
Updating the UI is not an option, as this is already the latest tagged release.
There don't seem to be any existing reports of this issue.
Tried backing up and deleting config.json and ui-config.json before starting, but it wouldn't launch.
### Steps to reproduce the problem
Place a large number of models in various folders. This includes LoRAs, Checkpoints, TIs, etc. In my environment, there are many LoRAs.
The "--lora-dir" argument is used to specify the loading location. Since it was working normally before, I don't think the problem lies there.
If left running, it seems to improve at some point. This might be due to the completion of some kind of database construction. Clearing the UI from the browser resets it to the initial state.
### What should have happened?
The UI should load within a reasonable time frame, allowing browsing of various tabs, extension lists, etc. For LoRAs that exist, the prompt should not show errors about their absence; errors should appear only for truly non-existent ones or those not yet updated.
(Ideally, updates would take effect immediately and errors would only show for truly non-existent items, but that might be a separate issue.)
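The expected behavior — flag only LoRAs whose files genuinely don't exist — can be sketched as a lookup against the LoRA directory. The `<lora:...>` tag regex and the `missing_loras` helper are assumptions for illustration, not the webui's actual code:

```python
import os
import re
import tempfile

def missing_loras(prompt: str, lora_dir: str) -> list[str]:
    """Return <lora:NAME:weight> tags whose model file is absent."""
    names = re.findall(r"<lora:([^:>]+)", prompt)
    # Compare against file names in the directory, extension stripped.
    have = {os.path.splitext(f)[0] for f in os.listdir(lora_dir)}
    return [n for n in names if n not in have]

with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "styleA.safetensors"), "w").close()
    prompt = "1girl <lora:styleA:0.8> <lora:styleB:0.5>"
    print(missing_loras(prompt, d))  # → ['styleB']
```

A check like this reads the directory directly each time, so it cannot claim an existing LoRA is missing the way a stale cache can.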
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
```json
{
"date": "Sun Aug 18 02:02:04 2024",
"timestamp": "02:02:09",
"uptime": "Sun Aug 18 01:58:36 2024",
"version": {
"app": "stable-diffusion-webui.git",
"updated": "2024-07-27",
"hash": "82a973c0",
"url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui.git/tree/master"
},
"torch": "2.1.0+cu121 autocast half",
"gpu": {
"device": "NVIDIA GeForce RTX 3060 (1) (sm_90) (8, 6)",
"cuda": "12.1",
"cudnn": 8801,
"driver": "546.01"
},
"state": {
"started": "Sun Aug 18 02:02:09 2024",
"step": "0 / 0",
"jobs": "0 / 0",
"flags": "",
"job": "",
"text-info": ""
},
"memory": {
"ram": {
"free": 63.17,
"used": 0.69,
"total": 63.86
},
"gpu": {
"free": 3.13,
"used": 8.87,
"total": 12
},
"gpu-active": {
"current": 7.04,
"peak": 7.04
},
"gpu-allocated": {
"current": 7.04,
"peak": 7.04
},
"gpu-reserved": {
"current": 7.3,
"peak": 7.3
},
"gpu-inactive": {
"current": 0.26,
"peak": 0.28
},
"events": {
"retries": 0,
"oom": 0
},
"utilization": 0
},
"optimizations": [
"none"
],
"libs": {
"xformers": "0.0.20",
"diffusers": "0.16.1",
"transformers": "4.30.2"
},
"repos": {
"Stable Diffusion": "[cf1d67a] 2023-03-25",
"Stable Diffusion XL": "[45c443b] 2023-07-26",
"BLIP": "[48211a1] 2022-06-07",
"k_diffusion": "[ab527a9] 2023-08-12"
},
"device": {
"active": "cuda",
"dtype": "torch.float16",
"vae": "torch.float32",
"unet": "torch.float16"
},
"model": {
"configured": {
"base": "Pony_その他/汎\\はっさくXL(変態)(Ikena)_v1_3 目改良 [hassakuXLHentai_v13BetterEyesVersion].safetensors [bbad3bea96]",
"refiner": "",
"vae": "sdxl_vae MadeByollin氏改.safetensors"
},
"loaded": {
"base": "D:\\Programs_D\\_MachineLearning\\StableDiffusionModels\\Online\\CheckPoint\\Pony_その他/汎\\はっさくXL(変態)(Ikena)_v1_3 目改良 [hassakuXLHentai_v13BetterEyesVersion].safetensors",
"refiner": "",
"vae": "D:\\Programs_D\\_MachineLearning\\StableDiffusionModels\\Online\\vae\\SDXL\\sdxl_vae MadeByollin氏改.safetensors"
}
},
"schedulers": [
"DDIM",
"DDIM CFG++",
"DPM adaptive",
"DPM fast",
"DPM++ 2M",
"DPM++ 2M SDE",
"DPM++ 2M SDE Heun",
"DPM++ 2S a",
"DPM++ 3M SDE",
"DPM++ SDE",
"DPM2",
"DPM2 a",
"Euler",
"Euler a",
"Heun",
"LCM",
"LMS",
"PLMS",
"Restart",
"UniPC"
],
"extensions": [
"Generate-TransparentIMG (enabled)",
"LDSR (enabled builtin)",
"Lora (enabled builtin)",
"PBRemTools (enabled)",
"ScuNET (enabled builtin)",
"Stable-Diffusion-WebUI-TensorRT (disabled)",
"Stable-Diffusion-Webui-Civitai-Helper (enabled)",
"SwinIR (enabled builtin)",
"a1111-sd-webui-tagcomplete (enabled)",
"adetailer (enabled)",
"canvas-zoom (enabled)",
"canvas-zoom-and-pan (enabled builtin)",
"extra-options-section (enabled builtin)",
"hypertile (enabled builtin)",
"mobile (enabled builtin)",
"novelai-2-local-prompt (enabled)",
"openpose-editor (enabled)",
"postprocessing-for-training (enabled builtin)",
"prompt-bracket-checker (enabled builtin)",
"prompt_translator (disabled)",
"sd-colab-commands-browser (disabled)",
"sd-dynamic-prompts (enabled)",
"sd-extension-system-info (enabled)",
"sd-webui-animatediff (enabled)",
"sd-webui-ar (enabled)",
"sd-webui-bilingual-localization (enabled)",
"sd-webui-cardmaster (enabled)",
"sd-webui-controlnet (enabled)",
"sd-webui-custom-autolaunch (enabled)",
"sd-webui-cutoff (disabled)",
"sd-webui-deepdanbooru-object-recognition (enabled)",
"sd-webui-deepdanbooru-tag2folder (enabled)",
"sd-webui-enable-checker (enabled)",
"sd-webui-freeu (enabled)",
"sd-webui-lora-block-weight (enabled)",
"sd-webui-lycoris-sorter (enabled)",
"sd-webui-model-converter (enabled)",
"sd-webui-negpip (enabled)",
"sd-webui-prompt-history (enabled)",
"sd-webui-regional-prompter (enabled)",
"sd-webui-txt-img-to-3d-model (enabled)",
"sd-webui-weight-helper (enabled)",
"sd_extension-prompt_formatter (enabled)",
"sd_lama_cleaner (enabled)",
"sdweb-merge-block-weighted-gui (enabled)",
"sdwebui-close-confirmation-dialogue (enabled)",
"soft-inpainting (enabled builtin)",
"stable-diffusion-webui-localization-ja_JP (enabled)",
"stable-diffusion-webui-pixelization (enabled)",
"stable-diffusion-webui-wd14-tagger (enabled)"
],
"platform": {
"arch": "AMD64",
"cpu": "Intel64 Family 6 Model 167 Stepping 1, GenuineIntel",
"system": "Windows",
"release": "Windows-10-10.0.19045-SP0",
"python": "3.10.9"
},
"crossattention": "Automatic",
"backend": "",
"pipeline": ""
}
```
### Console logs
```Shell
D:\Programs_D\_MachineLearning\StableDiffusionLauncher>cd ..\StableDiffusion_0301
D:\Programs_D\_MachineLearning\StableDiffusion_0301>call .\run.bat
D:\Programs_D\_MachineLearning\StableDiffusion_0301>set MODELS_DIR=D:\Programs_D\_MachineLearning\StableDiffusionModels\Online
D:\Programs_D\_MachineLearning\StableDiffusion_0301>powershell .\sub.ps1 --opt-sdp-attention --opt-sdp-no-mem-attention --no-half-vae --deepdanbooru --skip-version-check --disable-nan-check --port 50012 --autolaunch --controlnet-dir "D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\controlnet" --codeformer-models-path "D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\Codeformer" --ckpt-dir "D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\CheckPoint" --vae-dir "D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\vae" --embeddings-dir "D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\embeddings" --esrgan-models-path "D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\ESRGAN" --gfpgan-models-path "D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\GFPGAN" --hypernetwork-dir "D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\hypernetworks" --lora-dir "D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\Lora" --lyco-dir "D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\LyCORIS" --realesrgan-models-path "D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\RealESRGAN" --swinir-models-path "D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\SwinIR" --xformers
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing requirements
loading WD14-tagger reqs from D:\Programs_D\_MachineLearning\StableDiffusion_0301\stable-diffusion-webui\extensions\stable-diffusion-webui-wd14-tagger\requirements.txt
Checking WD14-tagger requirements.
Launching Web UI with arguments: --opt-sdp-attention --opt-sdp-no-mem-attention --no-half-vae --deepdanbooru --skip-version-check --disable-nan-check --port 50012 --autolaunch --controlnet-dir 'D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\controlnet' --codeformer-models-path 'D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\Codeformer' --ckpt-dir 'D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\CheckPoint' --vae-dir 'D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\vae' --embeddings-dir 'D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\embeddings' --esrgan-models-path 'D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\ESRGAN' --gfpgan-models-path 'D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\GFPGAN' --hypernetwork-dir 'D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\hypernetworks' --lora-dir 'D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\Lora' --lyco-dir 'D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\LyCORIS' --realesrgan-models-path 'D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\RealESRGAN' --swinir-models-path 'D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\SwinIR' --xformers
2024-08-18 02:23:09.917055: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-08-18 02:23:10.619814: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
CHv1.8.10: Get Custom Model Folder
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.8.0, num models: 10
dirname: D:\Programs_D\_MachineLearning\StableDiffusion_0301\stable-diffusion-webui\localizations
localizations: {'ja_JP': 'D:\\Programs_D\\_MachineLearning\\StableDiffusion_0301\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-localization-ja_JP\\localizations\\ja_JP.json', 'ks_JP': 'D:\\Programs_D\\_MachineLearning\\StableDiffusion_0301\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-localization-ja_JP\\localizations\\ks_JP.json'}
ControlNet preprocessor location: D:\Programs_D\_MachineLearning\StableDiffusion_0301\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2024-08-18 02:23:21,702 - ControlNet - INFO - ControlNet v1.1.455
[sd-webui-freeu] Controlnet support: *enabled*
dirname: D:\Programs_D\_MachineLearning\StableDiffusion_0301\stable-diffusion-webui\localizations
localizations: {'ja_JP': 'D:\\Programs_D\\_MachineLearning\\StableDiffusion_0301\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-localization-ja_JP\\localizations\\ja_JP.json', 'ks_JP': 'D:\\Programs_D\\_MachineLearning\\StableDiffusion_0301\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-localization-ja_JP\\localizations\\ks_JP.json'}
== WD14 tagger /gpu:0, uname_result(system='Windows', node='FREEDOM', release='10', version='10.0.19045', machine='AMD64') ==
Loading weights [bbad3bea96] from D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\CheckPoint\Pony_その他/汎\はっさくXL(変態)(Ikena)_v1_3 目改良 [hassakuXLHentai_v13BetterEyesVersion].safetensors
CHv1.8.10: Set Proxy:
Creating model from config: D:\Programs_D\_MachineLearning\StableDiffusion_0301\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Loading VAE weights specified in settings: D:\Programs_D\_MachineLearning\StableDiffusionModels\Online\vae\SDXL\sdxl_vae MadeByollin氏改.safetensors
Applying attention optimization: xformers... done.
Model loaded in 4.5s (load weights from disk: 0.3s, create model: 0.4s, apply weights to model: 3.0s, load VAE: 0.2s, load textual inversion embeddings: 0.3s, calculate empty prompt: 0.2s).
2024-08-18 02:23:31,554 - ControlNet - INFO - ControlNet UI callback registered.
start ..\python/Scripts/lama-cleaner --model=lama --device=cud --port=7870
Running on local URL: http://xxx.xxx.xxx.xxx:xxxxx
To create a public link, set `share=True` in `launch()`.
Startup time: 44.2s (prepare environment: 17.5s, import torch: 6.1s, import gradio: 1.3s, setup paths: 0.5s, initialize shared: 0.2s, other imports: 0.5s, list SD models: 0.3s, load scripts: 10.2s, scripts before_ui_callback: 0.1s, create ui: 7.2s, gradio launch: 0.3s).
```
### Additional information
_No response_ | bug-report | low | Critical |
2,471,610,888 | terminal | Rapid background palette changes can make Windows Terminal hang | ### Windows Terminal version
1.21.1772.0
### Windows build number
10.0.19045.4780
### Other Software
_No response_
### Steps to reproduce
Run the following python script:
```py
import sys
for j in range(21845):
i = j*3
r,g,b = i,i+1,i+2
seq = '\033]11;rgb:%04x/%04X/%04x\033\\' % (r,g,b)
sys.stdout.write(seq)
sys.stdout.flush()
sys.stdout.write('\033]11;rgb:00/00/00\033\\')
```
### Expected Behavior
The background color should fade from black to white, then reset to black and exit. The terminal should still work at that point.
### Actual Behavior
The background fade works, but the terminal becomes unusable once the script has finished. None of the tabs work, and the title bar eventually shows "(Not Responding)".
I briefly looked at this in the debugger, and it appeared that it was triggering some sort of theme update event whenever the background changed, and I suspect it may have just got overloaded with a backlog of those events. | Help Wanted,Area-VT,Issue-Bug,Product-Terminal | low | Critical |
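The suspected failure mode — a backlog of theme-update events — could in principle be avoided by coalescing pending updates so that only the newest one is ever applied. A minimal Python sketch of that idea (not Terminal's actual code; the queue and worker are hypothetical):

```python
import queue
import threading

STOP = object()  # sentinel to shut the worker down

def coalescing_worker(q: queue.Queue, apply) -> None:
    """Drain the queue, applying only the newest pending value each time."""
    while True:
        value = q.get()
        if value is STOP:
            return
        try:
            while True:                      # collapse any backlog
                nxt = q.get_nowait()
                if nxt is STOP:
                    apply(value)
                    return
                value = nxt
        except queue.Empty:
            pass
        apply(value)

# Simulate 10,000 rapid OSC 11 background-color updates queued at once.
q = queue.Queue()
for i in range(10_000):
    q.put(f"#{i:06x}")
q.put(STOP)

applied = []
t = threading.Thread(target=coalescing_worker, args=(q, applied.append))
t.start()
t.join()
print(applied)  # only the final color is applied
```

With coalescing, 10,000 queued updates collapse into a single applied value instead of 10,000 expensive theme rebuilds.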
2,471,626,047 | PowerToys | Disk eraser utility | ### Description of the new feature / enhancement
To provide extra functionality over the standard Windows format utility. Currently, if I need to use a USB stick on macOS, I can't format it in Windows without compatibility issues. Also, the standard Windows format utility hasn't been updated since the '90s and is restrictive.
### Scenario when this would be used?
If I need to format a new drive with a specific file system, or need to securely format a drive using multiple passes to do so.
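The multi-pass secure wipe requested here can be sketched in a few lines. This is an illustration only — a file-level, hypothetical `wipe_file` helper — not a substitute for a real secure-erase tool, which must handle whole block devices, partition tables, and SSD wear levelling:

```python
import os

def wipe_file(path: str, passes: int = 3, chunk: int = 1 << 16) -> None:
    """Overwrite a file in place with random bytes, `passes` times."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(chunk, remaining)
                f.write(os.urandom(n))
                remaining -= n
            f.flush()
            os.fsync(f.fileno())  # push each pass to the device

# Demo on a throwaway file.
with open("scratch.bin", "wb") as f:
    f.write(b"secret data" * 100)
wipe_file("scratch.bin")
with open("scratch.bin", "rb") as f:
    data = f.read()
print(b"secret" in data)  # → False (with overwhelming probability)
os.remove("scratch.bin")
```

A PowerToys utility would do the equivalent against a raw device handle, with the pass count and file system chosen in the UI.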
### Supporting information
_No response_ | Idea-New PowerToy,Needs-Triage | low | Minor |
2,471,632,308 | pytorch | torch.binomial() produces counter-intuitive error message when input tensor dtypes are not correct | ### 🐛 Describe the bug
# Summary
Calling `torch.binomial()` produces an unhelpful and counter-intuitive error message when you pass in tensors with the incorrect dtype.
## Reproduction Steps
```py
import torch
torch.binomial(count=torch.tensor([1]), prob=torch.tensor([0.5]))
```
Output:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[41], line 1
----> 1 torch.binomial(count=torch.tensor([1]), prob=torch.tensor([0.5]))
RuntimeError: Found dtype Float but expected Long
```
The correct call would be:
```py
torch.binomial(count=torch.tensor([1.0]), prob=torch.tensor([0.5]))
```
But inferring that from the error message is not obvious.
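One way to make the failure self-explanatory is a dtype check with an actionable message. The sketch below is hypothetical — plain strings stand in for torch dtypes, and `check_floating` is not PyTorch's actual validation code — it only illustrates the kind of wording that would help:

```python
def check_floating(arg_name: str, dtype: str) -> None:
    """Raise a descriptive error when an argument is not floating-point."""
    floating = {"float16", "float32", "float64"}
    if dtype not in floating:
        raise TypeError(
            f"binomial(): expected '{arg_name}' to have a floating-point "
            f"dtype (e.g. float32), but got {dtype}; cast it with "
            f"`{arg_name}.float()` before calling."
        )

try:
    check_floating("count", "int64")
except TypeError as e:
    print(e)
```

A message naming the offending argument and the required cast would make the fix above obvious, unlike the bare "Found dtype Float but expected Long".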
### Versions
```
(conda: env) > python3 collect_env.py
Collecting environment information...
PyTorch version: 2.3.1
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Pop!_OS 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 20:50:58) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.9.3-76060903-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080 Super with Max-Q Design
Nvidia driver version: 555.58.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: Intel(R) Core(TM) i7-10875H CPU @ 2.30GHz
CPU family: 6
Model: 165
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 2
CPU max MHz: 5100.0000
CPU min MHz: 800.0000
BogoMIPS: 4599.93
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 2 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] pytorch-lightning==2.4.0
[pip3] torch==2.3.1
[pip3] torchaudio==2.3.1
[pip3] torchmetrics==1.4.0.post0
[pip3] torchvision==0.18.1
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py312h5eee18b_1
[conda] mkl_fft 1.3.8 py312h5eee18b_0
[conda] mkl_random 1.2.4 py312hdb19cb5_0
[conda] numpy 1.26.4 py312hc5e2394_0
[conda] numpy-base 1.26.4 py312h0da6c21_0
[conda] pytorch 2.3.1 py3.12_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-lightning 2.4.0 pyhd8ed1ab_0 conda-forge
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.3.1 py312_cu118 pytorch
[conda] torchmetrics 1.4.0.post0 pyhd8ed1ab_0 conda-forge
[conda] torchvision 0.18.1 py312_cu118 pytorch
```
cc @fritzo @neerajprad @alicanb @nikitaved @malfet | module: distributions,module: error checking,triaged | low | Critical |
2,471,644,638 | rust | ICE: `None` in compiler/rustc_middle/src/ty/sty.rs | <!--
[31mICE[0m: Rustc ./2.rs '' 'thread 'rustc' panicked at compiler/rustc_middle/src/ty/sty.rs:362:36: 'called `Option::unwrap()` on a `None` value'', 'thread 'rustc' panicked at compiler/rustc_middle/src/ty/sty.rs:362:36: 'called `Option::unwrap()` on a `None` value''
File: /tmp/im/2.rs
-->
auto-reduced (treereduce-rust):
````rust
impl<
const N: usize = {
const {
static || {
let first = Foo([0; FOO_SIZE]);
yield;
yield;
yield;
}
}
},
> PartialEq<FixedI8<FRAC_RHS>> for True
{
}
````
original:
````rust
#![feature(generic_const_exprs)]
pub trait True {}
impl<const N: usize = { const {
static || {
let first = Foo([0; FOO_SIZE]);
yield;
let second = first;
yield;
let second = first;
yield;
}
} }> PartialEq<FixedI8<FRAC_RHS>> for v0<FRAC_LHS> where
If<{}>: True
{
}
#![feature(generic_const_exprs)]
pub trait True {}
impl<const pointer_like_trait: usize = { const { InvariantRef::<'a>::NEW } }> PartialEq<FixedI8<FRAC_RHS>> for FixedI8<FRAC_LHS> where
If<{}>: True {}
````
Version information
````
rustc 1.82.0-nightly (0f26ee4fd 2024-08-17)
binary: rustc
commit-hash: 0f26ee4fd95a1c046582dfb18892f520788e2c2c
commit-date: 2024-08-17
host: x86_64-unknown-linux-gnu
release: 1.82.0-nightly
LLVM version: 19.1.0
````
Command:
`/home/matthias/.rustup/toolchains/master/bin/rustc `
<details><summary><strong>Program output</strong></summary>
<p>
```
error[E0412]: cannot find type `FixedI8` in this scope
--> /tmp/icemaker_global_tempdir.xxLsOJeDobON/rustc_testrunner_tmpdir_reporting.EW3ZNLfKk1Q3/mvce.rs:14:17
|
14 | > PartialEq<FixedI8<FRAC_RHS>> for True
| ^^^^^^^ not found in this scope
error[E0412]: cannot find type `FRAC_RHS` in this scope
--> /tmp/icemaker_global_tempdir.xxLsOJeDobON/rustc_testrunner_tmpdir_reporting.EW3ZNLfKk1Q3/mvce.rs:14:25
|
14 | > PartialEq<FixedI8<FRAC_RHS>> for True
| ^^^^^^^^ not found in this scope
|
help: you might be missing a type parameter
|
13 | }, FRAC_RHS,
| ++++++++++
error[E0412]: cannot find type `True` in this scope
--> /tmp/icemaker_global_tempdir.xxLsOJeDobON/rustc_testrunner_tmpdir_reporting.EW3ZNLfKk1Q3/mvce.rs:14:40
|
14 | > PartialEq<FixedI8<FRAC_RHS>> for True
| ^^^^ not found in this scope
|
help: you may want to use a bool value instead
|
14 | > PartialEq<FixedI8<FRAC_RHS>> for true
| ~~~~
error[E0425]: cannot find value `FOO_SIZE` in this scope
--> /tmp/icemaker_global_tempdir.xxLsOJeDobON/rustc_testrunner_tmpdir_reporting.EW3ZNLfKk1Q3/mvce.rs:5:41
|
5 | let first = Foo([0; FOO_SIZE]);
| ^^^^^^^^ not found in this scope
error[E0658]: yield syntax is experimental
--> /tmp/icemaker_global_tempdir.xxLsOJeDobON/rustc_testrunner_tmpdir_reporting.EW3ZNLfKk1Q3/mvce.rs:6:21
|
6 | yield;
| ^^^^^
|
= note: see issue #43122 <https://github.com/rust-lang/rust/issues/43122> for more information
= help: add `#![feature(coroutines)]` to the crate attributes to enable
= note: this compiler was built on 2024-08-17; consider upgrading it if it is out of date
error[E0658]: yield syntax is experimental
--> /tmp/icemaker_global_tempdir.xxLsOJeDobON/rustc_testrunner_tmpdir_reporting.EW3ZNLfKk1Q3/mvce.rs:8:21
|
8 | yield;
| ^^^^^
|
= note: see issue #43122 <https://github.com/rust-lang/rust/issues/43122> for more information
= help: add `#![feature(coroutines)]` to the crate attributes to enable
= note: this compiler was built on 2024-08-17; consider upgrading it if it is out of date
error[E0658]: yield syntax is experimental
--> /tmp/icemaker_global_tempdir.xxLsOJeDobON/rustc_testrunner_tmpdir_reporting.EW3ZNLfKk1Q3/mvce.rs:10:21
|
10 | yield;
| ^^^^^
|
= note: see issue #43122 <https://github.com/rust-lang/rust/issues/43122> for more information
= help: add `#![feature(coroutines)]` to the crate attributes to enable
= note: this compiler was built on 2024-08-17; consider upgrading it if it is out of date
error[E0658]: yield syntax is experimental
--> /tmp/icemaker_global_tempdir.xxLsOJeDobON/rustc_testrunner_tmpdir_reporting.EW3ZNLfKk1Q3/mvce.rs:6:21
|
6 | yield;
| ^^^^^
|
= note: see issue #43122 <https://github.com/rust-lang/rust/issues/43122> for more information
= help: add `#![feature(coroutines)]` to the crate attributes to enable
= note: this compiler was built on 2024-08-17; consider upgrading it if it is out of date
error: `yield` can only be used in `#[coroutine]` closures, or `gen` blocks
--> /tmp/icemaker_global_tempdir.xxLsOJeDobON/rustc_testrunner_tmpdir_reporting.EW3ZNLfKk1Q3/mvce.rs:6:21
|
6 | yield;
| ^^^^^
|
help: use `#[coroutine]` to make this closure a coroutine
|
4 | #[coroutine] static || {
| ++++++++++++
error[E0658]: yield syntax is experimental
--> /tmp/icemaker_global_tempdir.xxLsOJeDobON/rustc_testrunner_tmpdir_reporting.EW3ZNLfKk1Q3/mvce.rs:8:21
|
8 | yield;
| ^^^^^
|
= note: see issue #43122 <https://github.com/rust-lang/rust/issues/43122> for more information
= help: add `#![feature(coroutines)]` to the crate attributes to enable
= note: this compiler was built on 2024-08-17; consider upgrading it if it is out of date
error[E0658]: yield syntax is experimental
--> /tmp/icemaker_global_tempdir.xxLsOJeDobON/rustc_testrunner_tmpdir_reporting.EW3ZNLfKk1Q3/mvce.rs:10:21
|
10 | yield;
| ^^^^^
|
= note: see issue #43122 <https://github.com/rust-lang/rust/issues/43122> for more information
= help: add `#![feature(coroutines)]` to the crate attributes to enable
= note: this compiler was built on 2024-08-17; consider upgrading it if it is out of date
error[E0601]: `main` function not found in crate `mvce`
--> /tmp/icemaker_global_tempdir.xxLsOJeDobON/rustc_testrunner_tmpdir_reporting.EW3ZNLfKk1Q3/mvce.rs:16:2
|
16 | }
| ^ consider adding a `main` function to `/tmp/icemaker_global_tempdir.xxLsOJeDobON/rustc_testrunner_tmpdir_reporting.EW3ZNLfKk1Q3/mvce.rs`
error: defaults for const parameters are only allowed in `struct`, `enum`, `type`, or `trait` definitions
--> /tmp/icemaker_global_tempdir.xxLsOJeDobON/rustc_testrunner_tmpdir_reporting.EW3ZNLfKk1Q3/mvce.rs:2:9
|
2 | / const N: usize = {
3 | | const {
4 | | static || {
5 | | let first = Foo([0; FOO_SIZE]);
... |
12 | | }
13 | | },
| |_________^
error[E0425]: cannot find function, tuple struct or tuple variant `Foo` in this scope
--> /tmp/icemaker_global_tempdir.xxLsOJeDobON/rustc_testrunner_tmpdir_reporting.EW3ZNLfKk1Q3/mvce.rs:5:33
|
5 | let first = Foo([0; FOO_SIZE]);
| ^^^ not found in this scope
thread 'rustc' panicked at compiler/rustc_middle/src/ty/sty.rs:362:36:
called `Option::unwrap()` on a `None` value
stack backtrace:
0: 0x7b29de7c429d - <std::sys::backtrace::BacktraceLock::print::DisplayBacktrace as core::fmt::Display>::fmt::hce35dec2bed7e8e1
1: 0x7b29df005297 - core::fmt::write::haaf8a728f3019c12
2: 0x7b29e02bbd91 - std::io::Write::write_fmt::hb0dc5188703aea0d
3: 0x7b29de7c697b - std::panicking::default_hook::{{closure}}::h35c501ae74c82675
4: 0x7b29de7c65ee - std::panicking::default_hook::hc1ddd62643c4c855
5: 0x7b29dd958729 - std[54b021f941472252]::panicking::update_hook::<alloc[8840b3d3a55bb7af]::boxed::Box<rustc_driver_impl[ec4d44b725e7aa11]::install_ice_hook::{closure#0}>>::{closure#0}
6: 0x7b29de7c7297 - std::panicking::rust_panic_with_hook::h7960ad04eaeb646d
7: 0x7b29de7c6f23 - std::panicking::begin_panic_handler::{{closure}}::had964436d76b60a3
8: 0x7b29de7c4759 - std::sys::backtrace::__rust_end_short_backtrace::h287862de25ceaa43
9: 0x7b29de7c6c24 - rust_begin_unwind
10: 0x7b29db6b1253 - core::panicking::panic_fmt::hb93ba995a275cef9
11: 0x7b29db87fa9c - core::panicking::panic::he4e48426481834b3
12: 0x7b29dbd05839 - core::option::unwrap_failed::h850f2e87e5e16379
13: 0x7b29e0a8af3c - <rustc_middle[10ec7aec7d9fdda3]::ty::sty::ParamConst>::find_ty_from_env.cold
14: 0x7b29db67b97b - <rustc_trait_selection[78ffaaaa71d4a69f]::traits::fulfill::FulfillProcessor as rustc_data_structures[de25bd5cf5c038d2]::obligation_forest::ObligationProcessor>::process_obligation
15: 0x7b29df008777 - <rustc_data_structures[de25bd5cf5c038d2]::obligation_forest::ObligationForest<rustc_trait_selection[78ffaaaa71d4a69f]::traits::fulfill::PendingPredicateObligation>>::process_obligations::<rustc_trait_selection[78ffaaaa71d4a69f]::traits::fulfill::FulfillProcessor>
16: 0x7b29df0cf344 - <rustc_hir_typeck[cd6451a3b3fe12c4]::fn_ctxt::FnCtxt>::demand_coerce_diag
17: 0x7b29dfafeae0 - <rustc_hir_typeck[cd6451a3b3fe12c4]::fn_ctxt::FnCtxt>::check_decl
18: 0x7b29dfafbd41 - <rustc_hir_typeck[cd6451a3b3fe12c4]::fn_ctxt::FnCtxt>::check_block_with_expected
19: 0x7b29dfb02247 - <rustc_hir_typeck[cd6451a3b3fe12c4]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
20: 0x7b29df243661 - rustc_hir_typeck[cd6451a3b3fe12c4]::check::check_fn
21: 0x7b29df985eea - <rustc_hir_typeck[cd6451a3b3fe12c4]::fn_ctxt::FnCtxt>::check_expr_closure
22: 0x7b29dfb0614a - <rustc_hir_typeck[cd6451a3b3fe12c4]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
23: 0x7b29dfafae56 - <rustc_hir_typeck[cd6451a3b3fe12c4]::fn_ctxt::FnCtxt>::check_block_with_expected
24: 0x7b29dfb02247 - <rustc_hir_typeck[cd6451a3b3fe12c4]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
25: 0x7b29dfb0a87f - <rustc_hir_typeck[cd6451a3b3fe12c4]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
26: 0x7b29dfafae56 - <rustc_hir_typeck[cd6451a3b3fe12c4]::fn_ctxt::FnCtxt>::check_block_with_expected
27: 0x7b29dfb02247 - <rustc_hir_typeck[cd6451a3b3fe12c4]::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
28: 0x7b29df7fe24b - rustc_hir_typeck[cd6451a3b3fe12c4]::typeck
29: 0x7b29df7fb975 - rustc_query_impl[69287ea093d06d1d]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[69287ea093d06d1d]::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[10ec7aec7d9fdda3]::query::erase::Erased<[u8; 8usize]>>
30: 0x7b29df2d4bf9 - rustc_query_system[1449707d44461b06]::query::plumbing::try_execute_query::<rustc_query_impl[69287ea093d06d1d]::DynamicConfig<rustc_query_system[1449707d44461b06]::query::caches::VecCache<rustc_span[a5a4c6b08f7cf6f]::def_id::LocalDefId, rustc_middle[10ec7aec7d9fdda3]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[69287ea093d06d1d]::plumbing::QueryCtxt, false>
31: 0x7b29df2d3e15 - rustc_query_impl[69287ea093d06d1d]::query_impl::typeck::get_query_non_incr::__rust_end_short_backtrace
32: 0x7b29df2420f5 - rustc_middle[10ec7aec7d9fdda3]::query::plumbing::query_get_at::<rustc_query_system[1449707d44461b06]::query::caches::VecCache<rustc_span[a5a4c6b08f7cf6f]::def_id::LocalDefId, rustc_middle[10ec7aec7d9fdda3]::query::erase::Erased<[u8; 8usize]>>>
33: 0x7b29df7fcdb5 - rustc_hir_typeck[cd6451a3b3fe12c4]::typeck
34: 0x7b29df7fb975 - rustc_query_impl[69287ea093d06d1d]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[69287ea093d06d1d]::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle[10ec7aec7d9fdda3]::query::erase::Erased<[u8; 8usize]>>
35: 0x7b29df2d4bf9 - rustc_query_system[1449707d44461b06]::query::plumbing::try_execute_query::<rustc_query_impl[69287ea093d06d1d]::DynamicConfig<rustc_query_system[1449707d44461b06]::query::caches::VecCache<rustc_span[a5a4c6b08f7cf6f]::def_id::LocalDefId, rustc_middle[10ec7aec7d9fdda3]::query::erase::Erased<[u8; 8usize]>>, false, false, false>, rustc_query_impl[69287ea093d06d1d]::plumbing::QueryCtxt, false>
36: 0x7b29df2d3e15 - rustc_query_impl[69287ea093d06d1d]::query_impl::typeck::get_query_non_incr::__rust_end_short_backtrace
37: 0x7b29df2d3a9b - <rustc_middle[10ec7aec7d9fdda3]::hir::map::Map>::par_body_owners::<rustc_hir_analysis[c0d9d1e917b71b24]::check_crate::{closure#4}>::{closure#0}
38: 0x7b29df2d17e4 - rustc_hir_analysis[c0d9d1e917b71b24]::check_crate
39: 0x7b29df784cbf - rustc_interface[f40e7651e6b02ea1]::passes::run_required_analyses
40: 0x7b29dfb5c49e - rustc_interface[f40e7651e6b02ea1]::passes::analysis
41: 0x7b29dfb5c471 - rustc_query_impl[69287ea093d06d1d]::plumbing::__rust_begin_short_backtrace::<rustc_query_impl[69287ea093d06d1d]::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle[10ec7aec7d9fdda3]::query::erase::Erased<[u8; 1usize]>>
42: 0x7b29dff867ee - rustc_query_system[1449707d44461b06]::query::plumbing::try_execute_query::<rustc_query_impl[69287ea093d06d1d]::DynamicConfig<rustc_query_system[1449707d44461b06]::query::caches::SingleCache<rustc_middle[10ec7aec7d9fdda3]::query::erase::Erased<[u8; 1usize]>>, false, false, false>, rustc_query_impl[69287ea093d06d1d]::plumbing::QueryCtxt, false>
43: 0x7b29dff8654f - rustc_query_impl[69287ea093d06d1d]::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
44: 0x7b29dfdd38ea - rustc_interface[f40e7651e6b02ea1]::interface::run_compiler::<core[5070cb7ec77c977]::result::Result<(), rustc_span[a5a4c6b08f7cf6f]::ErrorGuaranteed>, rustc_driver_impl[ec4d44b725e7aa11]::run_compiler::{closure#0}>::{closure#1}
45: 0x7b29dfdb3210 - std[54b021f941472252]::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface[f40e7651e6b02ea1]::util::run_in_thread_with_globals<rustc_interface[f40e7651e6b02ea1]::util::run_in_thread_pool_with_globals<rustc_interface[f40e7651e6b02ea1]::interface::run_compiler<core[5070cb7ec77c977]::result::Result<(), rustc_span[a5a4c6b08f7cf6f]::ErrorGuaranteed>, rustc_driver_impl[ec4d44b725e7aa11]::run_compiler::{closure#0}>::{closure#1}, core[5070cb7ec77c977]::result::Result<(), rustc_span[a5a4c6b08f7cf6f]::ErrorGuaranteed>>::{closure#0}, core[5070cb7ec77c977]::result::Result<(), rustc_span[a5a4c6b08f7cf6f]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[5070cb7ec77c977]::result::Result<(), rustc_span[a5a4c6b08f7cf6f]::ErrorGuaranteed>>
46: 0x7b29dfdb387a - <<std[54b021f941472252]::thread::Builder>::spawn_unchecked_<rustc_interface[f40e7651e6b02ea1]::util::run_in_thread_with_globals<rustc_interface[f40e7651e6b02ea1]::util::run_in_thread_pool_with_globals<rustc_interface[f40e7651e6b02ea1]::interface::run_compiler<core[5070cb7ec77c977]::result::Result<(), rustc_span[a5a4c6b08f7cf6f]::ErrorGuaranteed>, rustc_driver_impl[ec4d44b725e7aa11]::run_compiler::{closure#0}>::{closure#1}, core[5070cb7ec77c977]::result::Result<(), rustc_span[a5a4c6b08f7cf6f]::ErrorGuaranteed>>::{closure#0}, core[5070cb7ec77c977]::result::Result<(), rustc_span[a5a4c6b08f7cf6f]::ErrorGuaranteed>>::{closure#0}::{closure#0}, core[5070cb7ec77c977]::result::Result<(), rustc_span[a5a4c6b08f7cf6f]::ErrorGuaranteed>>::{closure#1} as core[5070cb7ec77c977]::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
47: 0x7b29dfdb3beb - std::sys::pal::unix::thread::Thread::new::thread_start::he8fde62ed04f9a93
48: 0x7b29e152339d - <unknown>
49: 0x7b29e15a849c - <unknown>
50: 0x0 - <unknown>
error: the compiler unexpectedly panicked. this is a bug.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: please make sure that you have updated to the latest nightly
note: rustc 1.82.0-nightly (0f26ee4fd 2024-08-17) running on x86_64-unknown-linux-gnu
query stack during panic:
#0 [typeck] type-checking `<impl at /tmp/icemaker_global_tempdir.xxLsOJeDobON/rustc_testrunner_tmpdir_reporting.EW3ZNLfKk1Q3/mvce.rs:1:1: 14:44>::{constant#0}`
#1 [typeck] type-checking `<impl at /tmp/icemaker_global_tempdir.xxLsOJeDobON/rustc_testrunner_tmpdir_reporting.EW3ZNLfKk1Q3/mvce.rs:1:1: 14:44>::{constant#0}::{constant#0}`
end of query stack
error: aborting due to 14 previous errors
Some errors have detailed explanations: E0412, E0425, E0601, E0658.
For more information about an error, try `rustc --explain E0412`.
```
</p>
</details>
<!--
query stack:
4 | #[coroutine] static || {
#0 [typeck] type-checking `<impl at /tmp/icemaker_global_tempdir.xxLsOJeDobON/rustc_testrunner_tmpdir_reporting.EW3ZNLfKk1Q3/mvce.rs:1:1: 14:44>::{constant#0}`
#1 [typeck] type-checking `<impl at /tmp/icemaker_global_tempdir.xxLsOJeDobON/rustc_testrunner_tmpdir_reporting.EW3ZNLfKk1Q3/mvce.rs:1:1: 14:44>::{constant#0}::{constant#0}`
-->
@rustbot label +F-generic_const_exprs | I-ICE,T-compiler,C-bug,A-const-generics,S-bug-has-test | low | Critical |
2,471,646,480 | pytorch | 2.4.0: sdpa disallows batch-of-0 in Flash (default backend) and cuDNN. | ### 🐛 Describe the bug
I'm trying to do attention on a batch-of-zero, because my program uses a static graph and I rely on zero-batching (index_select zero-batch of inputs, index_add zero-batch of outputs) to toggle functionality without adding branches to the logic.
But attention is failing on a batch-of-zero:
```python
import torch
from torch.nn.functional import scaled_dot_product_attention
device = torch.device('cuda')
dtype = torch.float16
batch = 0
q_heads = kv_heads = 10
q_tokens = 3952
kv_tokens = 16
head_dim = 64
q = torch.zeros(batch, q_heads, q_tokens, head_dim, device=device, dtype=dtype)
k = torch.zeros(batch, kv_heads, kv_tokens, head_dim, device=device, dtype=dtype)
v = torch.zeros(batch, kv_heads, kv_tokens, head_dim, device=device, dtype=dtype)
scaled_dot_product_attention(q, k, v)
```
This is a _backend-specific_ problem.
Math backend is fine. Memory-efficient backend is fine.
But the default backend in torch 2.4.0 is Flash. Flash refuses to run:
```
RuntimeError: batch size must be positive
```
That assertion probably comes from here:
https://github.com/pytorch/pytorch/blob/d6368985af9f670db43b9a644f878d4e57e67b6b/aten/src/ATen/native/transformers/cuda/flash_attn/flash_api.cpp#L394
If I enable cuDNN backend via `TORCH_CUDNN_SDPA_ENABLED=1` env var and context manager:
```python
from torch.nn.attention import SDPBackend, sdpa_kernel
with sdpa_kernel(SDPBackend.CUDNN_ATTENTION):
scaled_dot_product_attention(q, k, v)
```
cuDNN fails too, because it dislikes cross-attention. But it'd be better if it recognised batch-of-zero could be treated as a special case.
```
UserWarning: Query tensor was not in cuDNN-supported packed QKV layout[2529280, 252928, 64, 1] (Triggered internally at /build/pytorch/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:410.)
scaled_dot_product_attention(q, k, v)
UserWarning: Key tensor was not in cuDNN-supported packed QKV layout[10240, 1024, 64, 1] (Triggered internally at /build/pytorch/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:413.)
scaled_dot_product_attention(q, k, v)
UserWarning: Value tensor was not in cuDNN-supported packed QKV layout[10240, 1024, 64, 1] (Triggered internally at /build/pytorch/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:416.)
scaled_dot_product_attention(q, k, v)
UserWarning: Query tensor was not in cuDNN-supported unpacked QKV layout[2529280, 252928, 64, 1] (Triggered internally at /build/pytorch/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:421.)
scaled_dot_product_attention(q, k, v)
UserWarning: Key tensor was not in cuDNN-supported unpacked QKV layout[10240, 1024, 64, 1] (Triggered internally at /build/pytorch/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:424.)
scaled_dot_product_attention(q, k, v)
UserWarning: Value tensor was not in cuDNN-supported unpacked QKV layout[10240, 1024, 64, 1] (Triggered internally at /build/pytorch/aten/src/ATen/native/transformers/cuda/sdp_utils.cpp:427.)
scaled_dot_product_attention(q, k, v)
```
It's tempting to say that in this situation we should just select the MATH kernel.
But I have one worry: I use torch jit to trace the model into a static graph (see [stable-fast](https://github.com/chengzeyi/stable-fast)). If we fall back to the MATH kernel for batch-of-zero: does this mean that the graph will always pick the MATH kernel, even when generalizing to larger batch sizes? If that's the case, it'd be better to find a solution that fixes the Flash backend (i.e. adds support for zero-batching).
### Versions
```
Collecting environment information...
PyTorch version: 2.4.0
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.1
Libc version: glibc-2.35
Python version: 3.10.12 (main, Mar 22 2024, 16:50:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.13-650-3434-22042-coreweave-1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A40
Nvidia driver version: 535.177
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7413 24-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3630.8101
CPU min MHz: 1500.0000
BogoMIPS: 5299.87
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization: AMD-V
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 24 MiB (48 instances)
L3 cache: 256 MiB (8 instances)
NUMA node(s): 8
NUMA node0 CPU(s): 0-5,48-53
NUMA node1 CPU(s): 6-11,54-59
NUMA node2 CPU(s): 12-17,60-65
NUMA node3 CPU(s): 18-23,66-71
NUMA node4 CPU(s): 24-29,72-77
NUMA node5 CPU(s): 30-35,78-83
NUMA node6 CPU(s): 36-41,84-89
NUMA node7 CPU(s): 42-47,90-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] numpy==1.26.4
[pip3] open_clip_torch==2.26.1
[pip3] pytorch-lightning==2.3.3
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchdiffeq==0.2.4
[pip3] torchmetrics==1.4.0.post0
[pip3] torchsde==0.2.6
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[pip3] welford-torch==0.2.4
[conda] Could not collect
```
cc @csarofeen @ptrblck @xwang233 @eqy @drisspg @mikaylagawarecki | module: cudnn,triaged,module: sdpa | low | Critical |
2,471,647,199 | godot | two loops required for a body involved in a collision before they both get process-disabled, in order to detect collision | ### Tested versions
v4.3.stable.mono.arch_linux [77dcf97d8]
### System information
Godot v4.3.stable.mono (77dcf97d8) - Arch Linux #1 SMP PREEMPT_DYNAMIC Thu, 15 Aug 2024 00:25:30 +0000 - X11 - GLES3 (Compatibility) - NVIDIA GeForce RTX 3060 Ti (nvidia; 555.58.02) - AMD Ryzen 5 7600X 6-Core Processor (12 Threads)
### Issue description
(if someone has a better title for this please comment it or change this title cause idfk how to describe this issue)
Let's say you have StaticBody A and B. B has a physics process priority lower than A's, so it runs before A does. Both A and B only run logic in `_physics_process()`. B moves close to A, its transform is forcefully updated with `force_update_transform()`, and then it sets its process mode to disabled. A tests a move towards B such that it *should* collide, and then disables its processing too. The result of `test_move()` is `false`. However, if you add a counter so that B only disables itself after 2+ physics ticks, while A still disables after 1, `test_move()` returns `true`.

I also tested this by removing the process disabling, limiting the amount of physics steps to 1 in the project settings, and then looking at the first printed result, but it still returned `true`.

This is insanity.
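A minimal sketch of the setup described above (positions, variable names, and node wiring are placeholders; the actual code is in the MRP):

```gdscript
# On StaticBody B (lower physics process priority, runs before A):
func _physics_process(_delta: float) -> void:
    global_position = spot_next_to_a  # move into A's path
    force_update_transform()
    process_mode = Node.PROCESS_MODE_DISABLED

# On StaticBody A:
func _physics_process(_delta: float) -> void:
    # Expected true (B should now block this motion), but prints false
    # unless B stays enabled for 2+ physics ticks:
    print(test_move(global_transform, motion_towards_b))
    process_mode = Node.PROCESS_MODE_DISABLED
```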
### Steps to reproduce
just look at the MRP
### Minimal reproduction project (MRP)
[weird-ass-physics-mrp.zip](https://github.com/user-attachments/files/16646344/weird-ass-physics-mrp.zip)
| documentation,topic:physics | low | Minor |
2,471,657,051 | react | [React 19] style using precedence can produce many additional style elements after initial render | ## Summary
When adding multiple style elements with the same precedence they will only ever be grouped on the initial render. During subsequent style elements being discovered they will each attach a new style element in the head of the document which can result in many elements being inserted as seen below:
<img width="1345" alt="image" src="https://github.com/user-attachments/assets/b9374fca-3b2b-430c-a6c3-a7f43f2df644">
I'm not sure if there is an inherent constraint because of concurrent rendering, but it would be nice if React could either batch these similar to the initial page render or always reuse a precedence once created and insert rules into the same style tag. | Resolution: Needs More Information,React 19 | low | Major |
2,471,662,655 | godot | The third argument of `SpriteFrames.add_frames()` is different in 3.x and 4.x, but is not processed by project converter | ### Tested versions
- Reproducible in 4.0.stable, 4.2.2.stable, 4.3.stable
**Not reproducible in Godot v3.4.4**
### System information
Godot v4.2.2.stable - Windows 10.0.19044 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 (NVIDIA; 31.0.15.2756) - AMD Ryzen 9 5950X 16-Core Processor (32 Threads)
### Issue description
* When adding frames to SpriteFrames using code, i.e `SpriteFrames.add_frames`, the resulting playback is laggy.
* However, when adding frames using the SpriteFrames Editori UI in the Godot Editor, the resulting playback is smooth/is as expected.
I've packaged 2 repro projects which contain everything that is needed to demonstrate the problem.
This GIF is an example of the laggy playback

And this is an example of playback that is normal

### Steps to reproduce
- Open the uploaded G4 project in Godot 4.x.
- Run the project.
- The initial walk animation (anim-2) you will see _should_ be rather slow in playback.
- Click on the screen; this will switch to another animation (anim-1) which would play back normally.
In the project itself you will see:
- In the SpriteFrames of the AnimatedSprite2D node, the `anim-1` animation set has been created and loaded.
- This is the animation that plays back smoothly.
In the `AnimatedSprite.gd` script in the project, you will see:
- `anim-2` has been loaded into the SpriteFrames using SpriteFrames.add_frames()
- `anim-2` is the animation that plays back slowly.
There are 2 projects, one for G4 and one for G3. Please note that this bug does not exist in G3, and you can test it with the G3 project.
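Given the title, the slow playback is consistent with converted 3.x code passing the old third argument (the insert position in 3.x) into what is now the frame duration in 4.x. A sketch of the signature difference (`frames`, `tex`, and `i` are placeholder names):

```gdscript
# Godot 3.x signature: add_frame(anim, frame, at_position = -1)
# frames.add_frame("anim-2", tex, i)  # third argument = insert position
# Godot 4.x signature: add_frame(anim, texture, duration = 1.0, at_position = -1)
frames.add_frame("anim-2", tex, 1.0, i)  # duration comes third, position fourth
```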
### Minimal reproduction project (MRP)
[bugreport-animatedsprite2d-add_frames_runtime-godot4-1.zip](https://github.com/user-attachments/files/16646443/bugreport-animatedsprite2d-add_frames_runtime-godot4-1.zip)
[bugreport-animatedsprite2d-add_frames_runtime-godot3-1.zip](https://github.com/user-attachments/files/16646445/bugreport-animatedsprite2d-add_frames_runtime-godot3-1.zip)
| bug,topic:animation,topic:2d | low | Critical |
2,471,664,518 | godot | Particles from GPUParticles3D with rigid collision sporadically turn when velocity has fully dampened | ### Tested versions
- Reproducible in v4.3.stable.official [77dcf97d8]
- Not reproducible in v4.2.1.stable.official [b09f793f5]
### System information
Godot v4.3.stable - Ubuntu 24.04 LTS 24.04 - X11 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3070 Ti (nvidia; 535.183.01) - 12th Gen Intel(R) Core(TM) i9-12900F (24 Threads)
### Issue description
When particles from a `GPUParticles3D` are colliding with a `GPUParticlesCollisionBox3D` with the mode set to `Rigid`, and they eventually come to a halt, they will sporadically begin to rotate (which also sometimes results in a shimmer effect, as they seem to struggle to rotate on the spot).

It has only begun to occur since 4.3; the same particle material has been OK in all 4.x releases so far (if I am correct).
An example containing a few iterations of the particle emitter can be seen below:
https://github.com/user-attachments/assets/456ecae1-5e50-4d09-8c5f-8121f3ae4b5e
### Steps to reproduce
Open the MRP and wait until the particles come to a halt and you should see them begin to rotate on the spot without any movement along the collider
### Minimal reproduction project (MRP)
[godot-particle-shimmer-mrp.zip](https://github.com/user-attachments/files/16646452/godot-particle-shimmer-mrp.zip)
| bug,confirmed,regression,topic:3d,topic:particles | low | Major |
2,471,676,549 | three.js | DataTexture issue introduced in r145 | ### Description
I was upgrading my [virtual texture system](https://discourse.threejs.org/t/virtual-textures/53353) from r136 to the latest revision; going directly to the current version (r167.1) yielded a curious error:
```sh
WebGL: INVALID_OPERATION: texSubImage2D: no texture bound to target
GL_INVALID_OPERATION: Invalid combination of format, type and internalFormat.
```
I looked deeper and found that the problem was coming from `copyTextureToTexture`:

So I started looking for the point where the breakage occurs: r144 is fine, r145 breaks.
Looking through the [patch notes](https://github.com/mrdoob/three.js/releases/tag/r145) yielded little of interest, except for:
* https://github.com/mrdoob/three.js/pull/24492
* https://github.com/mrdoob/three.js/pull/24599
and possibly
https://github.com/mrdoob/three.js/pull/24637
Here's the actual code that sets up the offending texture:
```js
#page_texture = new DataTexture(
new Uint8Array(4096 * 4096 * 4),
4096 ,
4096 ,
RGBAFormat,
UnsignedByteType,
);
/* ... */
const texture = this.#page_texture;
texture.unpackAlignment = 4;
texture.type = UnsignedByteType;
texture.format = RGBAFormat;
texture.internalFormat = "RGBA8";
texture.generateMipmaps = false;
texture.minFilter = NearestFilter;
texture.magFilter = LinearFilter;
texture.wrapS = ClampToEdgeWrapping;
texture.wrapT = ClampToEdgeWrapping;
texture.anisotropy = 8;
```
All of the above is done in the constructor of the containing class, so all parts are done before first usage.
The texture is bound to two custom shaders:

here's what that looks like in r144:

Nothing special there really, but perhaps important is the fact that these two use different `ShaderMaterial`s, because if I comment out the uniform assignment for one of the shaders (the one used for the central element), the error goes away:

The console is clean too.
My best guess is that when `textures.setTexture2D( dstTexture, 0 );` is being called inside `copyTextureToTexture`, it binds / activates the wrong texture. Why? - no idea.
### Version
r167.1
### Device
Desktop
### Browser
Chrome
### OS
Windows | Needs Investigation | low | Critical |
2,471,684,453 | tauri | [bug] Avif images are not loading. | ### Describe the bug
.png, .jpeg, [.webp](https://www.gstatic.com/webp/gallery/1.sm.webp) images load correctly inside Tauri.

[.avif](https://aomediacodec.github.io/av1-avif/testFiles/Link-U/fox.profile0.8bpc.yuv420.avif) images do not load inside Tauri.

---
This issue comes from WebKit2GTK 2.44.2. (_WebKitGTK lists [libavif](https://www.linuxfromscratch.org/blfs/view/svn/x/webkitgtk.html) as an optional dependency on its website. I have installed libavif on the system, but it did not solve the issue; libavif probably needs to be recognized by WebKitGTK at build time. How can I extend WebKitGTK with this option before building the Linux application?_)
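For reference, a build-configuration sketch of what "extending WebKitGTK with this option" could look like. This is an assumption on my part: the `USE_AVIF` CMake option name and the `libavif-dev` package name should be verified against the WebKitGTK release being built.

```shell
# Sketch only: configure and build WebKitGTK with AVIF decoding enabled.
# Assumes the WebKitGTK source tree as the working directory and that
# libavif development headers (e.g. libavif-dev) are installed.
cmake -DPORT=GTK -DCMAKE_BUILD_TYPE=Release -DUSE_AVIF=ON -GNinja -S . -B build
ninja -C build
sudo ninja -C build install
```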
### Reproduction
Load .avif image in a Tauri project.
### Expected behavior
Webview inside Tauri should load .avif images.
### Full `tauri info` output
```text
[✔] Environment
- OS: Ubuntu 24.04 X64
✔ webkit2gtk-4.1: 2.44.2
✔ rsvg2: 2.58.0
✔ rustc: 1.80.1 (3f5fd8dd4 2024-08-06)
✔ cargo: 1.80.1 (376290515 2024-07-16)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-unknown-linux-gnu (default)
- node: 22.6.0
- npm: 10.8.2
[-] Packages
- tauri [RUST]: 2.0.0-rc.3
- tauri-build [RUST]: 2.0.0-rc.3
- wry [RUST]: 0.42.0
- tao [RUST]: 0.29.0
- @tauri-apps/api : not installed!
- @tauri-apps/cli [NPM]: 2.0.0-rc.4
[-] App
- build-type: bundle
- CSP: default-src 'self'; img-src 'self' data:; script-src 'self'; style-src 'self';
- frontendDist: ../.vitepress/dist
- devUrl: http://localhost:5173/
- bundler: Vite
``` | type: bug,platform: Linux,status: needs triage | low | Critical |
2,471,689,007 | pytorch | Cannot Convert Pytorch model with fft_rfftn layers to ONNX using latest torch.onnx.dynamo_export | ### 🐛 Describe the bug
I am trying to convert the well-renowned [LAMA Inpainting](https://github.com/advimman/lama) model to ONNX via the new dynamo_export. Since the fft_rfftn ops were previously not supported by ONNX, there was no way to export this model; there was a workaround of adding a custom [FourierUnitJIT](https://github.com/Carve-Photos/lama/blob/f5fb39a18022c34a71bf9a47a6ec393c804b49ca/saicinpainting/training/modules/ffc.py#L153) class, which made the model compatible with the older torch.onnx.export, but it has accuracy losses.

Conversion [notebook](https://github.com/Carve-Photos/lama/blob/main/export_LaMa_to_onnx.ipynb) of the older approach.

Now that ONNX does support these ops and dynamo_export allows exporting them, there is still some issue.
Minimum Reproducible Code:
```bash
!curl -LJO "https://github.com/Sanster/models/releases/download/add_big_lama/big-lama.pt"
```
```py
import torch
model = torch.jit.load("big-lama.pt", map_location="cpu")
dummy_image = torch.randn(1, 3, 512, 512)
dummy_mask = torch.randint(0, 2, (1, 1, 512, 512)).float()
onnx_program = torch.onnx.dynamo_export(model, (dummy_image, dummy_mask))
```
Torch ONNX Dynamo gives an error stating:
> OnnxExporterError: Failed to export the model to ONNX. Generating SARIF report at 'report_dynamo_export.sarif'. SARIF is a standard format for the output of static analysis tools.
[SARIF REPORT](https://drive.google.com/file/d/1uF9JO-Kn83qIhEYe2ggXNb7V_q08jMoa/view?usp=sharing)
```py
torch.onnx.export(model, (dummy_image, dummy_mask), "lama_inpainting.onnx",
opset_version = 18,
input_names=["image", "mask"],
output_names=["output"],
dynamic_axes={'image': {0: 'batch_size'},
'mask': {0: 'batch_size'},
'output': {0: 'batch_size'}})
```
I also tried the old ONNX exporter, but as stated in #107588 it obviously does not work: the code errors out with `'aten::fft_rfftn' is not supported`, as expected.
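For context, `fft_rfftn` is an n-dimensional FFT over real input, and its 1-D building block returns only the first `n//2 + 1` non-redundant complex bins. A pure-Python sketch of that output convention (illustration of the op's semantics only, not of the exporter path):

```python
import cmath

def rdft(x):
    """Real-input DFT returning the first n//2 + 1 complex bins, the
    same half-spectrum shape convention torch.fft.rfft uses."""
    n = len(x)
    return [
        sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
        for j in range(n // 2 + 1)
    ]
```

The exporter has to map this half-spectrum convention onto ONNX's FFT support, which is part of what `dynamo_export` handles for these layers.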
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:36:13) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L4
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7742 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU max MHz: 2250.0000
CPU min MHz: 1500.0000
BogoMIPS: 4491.48
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 2 MiB (64 instances)
L1i cache: 2 MiB (64 instances)
L2 cache: 32 MiB (64 instances)
L3 cache: 256 MiB (16 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.2
[pip3] onnxscript==0.1.0.dev20240817
[pip3] torch==2.4.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi | module: onnx,triaged | low | Critical |
2,471,704,414 | godot | MacOS 4.3 android build broken and editor settings not showing up | ### Tested versions
4.3 is broken
4.2.2 works
### System information
Apple M1 v13 - Godot 4.3 - Mobile Forward and Compatible
### Issue description
I'm manually editing the 4.3 editor settings `.tres` file (because I can't get to Editor Settings in 4.3) to add the Java SDK for Android export. But after opening the export screen, I notice that the value I added in the `.tres` file becomes empty. Because of this, I cannot export for Android, as I get "A valid Java SDK path is required in Editor Settings."
I should note that the 4.0 editor settings `.tres` file does have the correct value, but it seems this value is not picked up by the 4.3 editor settings `.tres` file.
### Steps to reproduce
1. Follow export for android but manually edit the 4.3 editor settings .tres file.
2. Export for android
Expected: The "Export Project" button should be enabled.
Actual: The "Export Project" button is disabled.
### Minimal reproduction project (MRP)
I don't have this. | platform:android,topic:editor,needs testing,regression,topic:export | low | Critical |
2,471,704,689 | three.js | Support for texture array / compressed texture array in MeshBasicMaterial and MeshStandardMaterial by index | ### Description
Currently it's not possible to create a material that uses a single texture from a regular or compressed texture array.
### Solution
When creating a material, I would like to specify a concrete texture from a texture array by index/layer/depth so that the given texture is used. I would also be able to change the index in code to switch textures easily.
### Alternatives
Currently, to my understanding, only a custom shader can work with texture arrays.
### Additional context
I am currently loading many textures from a KTX2 container that holds a large number of compressed textures. I would like to use the existing MeshBasicMaterial and MeshStandardMaterial so I can benefit from their built-in features, such as lighting.
| Needs Investigation | low | Minor |
2,471,735,492 | flutter | Proposal: initialize fields at the declaration site | #### before
```dart
class _MyState extends State<MyWidget> with SingleTickerProviderStateMixin {
  late final int _value;
  AnimationController? _controller;
  @override
  void initState() {
    super.initState();
    _value = widget.value;
    _controller = AnimationController(
      duration: Durations.medium1,
      vsync: this,
    );
  }
  // ...
}
```
#### after
```dart
class _MyState extends State<MyWidget> with SingleTickerProviderStateMixin {
late final int _value = widget.value;
late final AnimationController _controller = AnimationController(
duration: Durations.medium1,
vsync: this,
);
// ...
}
```
<br>
This has a few advantages:
1. More concise
2. Less jumping around to see how fields are initialized
3. Less prone to late initialization errors
4. The [prefer final](https://dart.dev/diagnostics/prefer_final_fields) & [unnecessary nullable](https://dart.dev/diagnostics/unnecessary_nullable_for_final_variable_declarations) lints will kick in | framework,c: proposal,P3,c: tech-debt,team-framework,triaged-framework,refactor | low | Critical |
2,471,738,638 | godot | "@export_range" step parameter causes inspector to visually round to one decimal place | ### Tested versions
Reproducible in 4.3.stable and 4.2.2.stable. This is unlikely to be a recent regression.
### System information
Godot v4.3.stable - Windows 10.0.22631 - GLES3 (Compatibility) - NVIDIA GeForce RTX 3080 Ti Laptop GPU (NVIDIA; 31.0.15.5123) - 12th Gen Intel(R) Core(TM) i9-12900H (20 Threads)
### Issue description
The `@export_range` annotation allows for a step value that snaps the slider to specific multiples. Taking the example from the docs, `@export_range(-10, 20, 0.2)` makes a slider from -10 to +20 stepping by 0.2. **However, the resulting slider stores the correct values but visually rounds the value to one decimal place.** This appears to be a visual bug, as opening the node/tres/etc in a text editor shows that the correct snap numbers are saved.
### Steps to reproduce
- In a new project, create a script and attach it to a node in a scene.
- Add `@export_range(0.0, 1.0, 0.1) var foo := 0.5`. You will notice that this works as intended.
- Add `@export_range(0.0, 1.0, 0.25) var bar := 0.5`. The anticipated behavior would be 0.0, 0.25, 0.5, 0.75, and 1.0. However, the inspector does not allow those values and instead snaps to 0.0, 0.3, 0.5, 0.8, 1.0. Selecting 0.3 here actually stores 0.25 in the variable, but this is not visually apparent, and it rounds up to 0.3.
- Add `@export_range(0.0, 1.0, 0.125) var baz := 0.5`. The anticipated behavior is 0.0, 0.125, 0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 1.0. The shown numbers are instead 0.0, 0.1, 0.3, 0.4, 0.5, 0.6, 0.8, 0.9, 1.0. Once again, the correct value is being stored, but the inspector is rounding to one decimal place.
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,needs testing | low | Critical |
2,471,740,197 | vscode | Allow customization of JumpLists |
I think it would be useful to allow users to customize their JumpList.
Not the recent-documents feature, but a separate, constant list of launch actions.
I have hacked up my local version to demo this and could issue a PR, but figured I would ask for community feedback first.
This idea was inspired by the SSH Remote extension that I use frequently.
The workflow for launching this if VSCode isn't already open is a little kludgy:
1. open a new code window,
2. then launch the remote (which opens another window)
3. close the original (empty) window
On my machine, I modded a new entry into the app's JumpList (next to "New Window") that launches a specific SSH target I use frequently:

This static list of Launch actions is here:
https://github.com/microsoft/vscode/blob/5a0335dcf3ffe79466b90d7c24923f3158fac81a/src/vs/platform/workspaces/electron-main/workspacesHistoryMainService.ts#L328
Obviously editing my install after each update isn't sustainable, so I'd like to see a way to allow user customization of these.
A simple list somewhere in settings would be sufficient for me, but maybe the community has other ideas on how this could/should be implemented? Perhaps extensions would want to offer to install themselves? Are there other use cases for this JumpList that could hint at a different implementation?
As mentioned above, I am willing to tackle this feature in a PR, but would like some feedback before I get too far into it.
Thanks for any thoughts!
| feature-request,workbench-history | low | Minor |
2,471,741,690 | godot | Error logged when loading small scripts with text editor | ### Tested versions
- Reproducible in 4.3.rc and 4.4.dev
### System information
Godot v4.4.dev (1bd740d18) - macOS 14.5.0 - GLES3 (Compatibility) - Apple M2 - Apple M2 (8 Threads)
### Issue description
<img width="818" alt="Screenshot 2024-08-17 at 6 31 03 PM" src="https://github.com/user-attachments/assets/0c81af50-78f9-4eaa-b2bf-59bb0400e17a">
I seem to have found a weird edge case when loading small scripts in the text editor. The bug appears during the calculation that figures out how to draw the "minimap" for the file; the calculation produces an invalid index. Here is the error message seen when the bug occurs:
```
ERROR: Index p_line = -1 is out of bounds (text.size() = 2).
at: get_line_wrap_count (scene/gui/text_edit.cpp:5640)
```
Note: I can still use the editor and engine after this error occurs.
### Steps to reproduce
Create a script with two lines, for example:
```gdscript
extends Node3d
# cursor here
```
Save this script. Then, with the text editor window still open in the project, close the project.
When you re-open the project, you should see the Error message mentioned above.
### Minimal reproduction project (MRP)
A new project that has a scene with a single node with a script attached can be used to see this issue | bug,topic:editor,needs testing | low | Critical |
2,471,756,764 | godot | Theme file became huge after embedding image/font data | ### Tested versions
4.3 release
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated Radeon RX 5500 XT - Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz (24 Threads)
### Issue description
The editor crashes when I click or try to grab the file. I just created the theme and it ended up with an immense size (160 MB).
### Steps to reproduce
I don't know if this reliably reproduces it, but I created a font for a TextEdit node and modified the background to pink on focus. That was it; now the theme file is 160 MB.
### Minimal reproduction project (MRP)
I won't be able to attach a minimal reproduction due to the theme size problem, but the project is small, I swear; I'll share it via Google Drive:
https://drive.google.com/drive/folders/1AkIHQ18KEB9qEXxPLN3TsjtkEy_HLMtH?usp=drive_link | bug,topic:editor,needs testing | low | Critical |
2,471,771,218 | pytorch | RuntimeError: createStatus == pytorch_qnnp_status_success INTERNAL ASSERT FAILED at "../aten/src/ATen/native/quantized/cpu/BinaryOps.cpp":204, please report a bug to PyTorch. failed to create QNNPACK Add operator | ### 🐛 Describe the bug
An INTERNAL ASSERT error is raised when using a quantized tensor (with empty data) together with `torch.jit.trace`. The code is as follows:
```python
import torch
import os
import ctypes
# Initialize the environment
def setup_nnapi():
torch.backends.quantized.engine = 'qnnpack'
libneuralnetworks_path = os.environ.get('LIBNEURALNETWORKS_PATH')
if libneuralnetworks_path:
ctypes.cdll.LoadLibrary(libneuralnetworks_path)
# The check function
def check(module, arg_or_args, trace_args=None):
with torch.no_grad():
if isinstance(arg_or_args, torch.Tensor):
args = [arg_or_args]
else:
args = arg_or_args
module.eval()
traced = torch.jit.trace(module, trace_args or args)
# Test function
def test_qadd():
setup_nnapi()
func = torch.ao.nn.quantized.QFunctional()
func.scale = 0.5
func.zero_point = 120
class AddMod(torch.nn.Module):
def forward(self, lhs, rhs):
return func.add(lhs, rhs)
for (name, mod) in [('add', AddMod), ]:
# check(mod(), [qpt([[1.,2.]], 0.25, 128), qpt([[1.,2.]], 0.25, 128)]) # works well
check(mod(), [qpt([[]], 0.25, 128), qpt([[]], 0.25, 128)]) # INTERNAL ASSERT FAILED
def qpt(data, scale, zero_point):
q = torch.quantize_per_tensor(torch.tensor(data), scale, zero_point, torch.quint8)
return q
test_qadd()
```
The error messages are as follows:
```
Error in QNNPACK: failed to create add operator with 0 channels: number of channels must be non-zero
Traceback (most recent call last):
File "/data/test1.py", line 145, in <module>
test_qadd()
File "/data/test1.py", line 138, in test_qadd
check(mod(), [qpt([[]], 0.25, 128), qpt([[]], 0.25, 128)])
File "/data/test1.py", line 119, in check
traced = torch.jit.trace(module, trace_args or args)
File "/data/anacondas/envs/torchtest/lib/python3.10/site-packages/torch/jit/_trace.py", line 1002, in trace
traced_func = _trace_impl(
File "/data/anacondas/envs/torchtest/lib/python3.10/site-packages/torch/jit/_trace.py", line 698, in _trace_impl
return trace_module(
File "/data/anacondas/envs/torchtest/lib/python3.10/site-packages/torch/jit/_trace.py", line 1278, in trace_module
module._c._create_method_from_trace(
File "/data/anacondas/envs/torchtest/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/anacondas/envs/torchtest/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/data/anacondas/envs/torchtest/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1726, in _slow_forward
result = self.forward(*input, **kwargs)
File "/data/test1.py", line 132, in forward
return func.add(lhs, rhs)
File "/data/anacondas/envs/torchtest/lib/python3.10/site-packages/torch/ao/nn/quantized/modules/functional_modules.py", line 241, in add
r = ops.quantized.add(x, y, scale=self.scale, zero_point=self.zero_point)
File "/data/anacondas/envs/torchtest/lib/python3.10/site-packages/torch/_ops.py", line 1113, in __call__
return self._op(*args, **(kwargs or {}))
RuntimeError: createStatus == pytorch_qnnp_status_success INTERNAL ASSERT FAILED at "../aten/src/ATen/native/quantized/cpu/BinaryOps.cpp":204, please report a bug to PyTorch. failed to create QNNPACK Add operator
```
The error is reproducible with the nightly-build version `2.5.0.dev20240815+cpu` . Please find the colab [here](https://colab.research.google.com/drive/1rExY29Vt4rvBszxV8yu86Dv34YUCLvm9?usp=sharing).
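The QNNPACK message hints at the precondition being tripped: the add operator derives its channel count from the input tensor's element count, and an empty tensor yields zero channels. A simplified Python model of just that failing guard (assumption: this mirrors only the precondition, not the full operator setup):

```python
from functools import reduce
from operator import mul

def create_qnnpack_add(shape):
    """Model the channel-count precondition of QNNPACK's add operator."""
    channels = reduce(mul, shape, 1)  # element count of the input tensor
    if channels == 0:
        # This low-level failure surfaces as the INTERNAL ASSERT in
        # BinaryOps.cpp instead of a clean Python-level error.
        raise RuntimeError(
            "failed to create add operator with 0 channels: "
            "number of channels must be non-zero"
        )
    return channels
```

A friendlier behavior would be for the Python-level op to reject (or short-circuit) empty quantized inputs before the QNNPACK assert is ever reached.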
### Versions
PyTorch version: 2.5.0.dev20240815+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-107-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz
Stepping: 6
CPU MHz: 800.000
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1 MiB
L2 cache: 40 MiB
L3 cache: 48 MiB
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.2
[pip3] onnxruntime==1.19.0
[pip3] onnxscript==0.1.0.dev20240816
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.5.0.dev20240815+cu121
[pip3] torch-xla==2.4.0
[pip3] torch_xla_cuda_plugin==2.4.0
[pip3] torchaudio==2.4.0.dev20240815+cu121
[pip3] torchvision==0.20.0.dev20240815+cu121
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi
[conda] torch 2.5.0.dev20240815+cu121 pypi_0 pypi
[conda] torch-xla 2.4.0 pypi_0 pypi
[conda] torch-xla-cuda-plugin 2.4.0 pypi_0 pypi
[conda] torchaudio 2.4.0.dev20240815+cu121 pypi_0 pypi
[conda] torchvision 0.20.0.dev20240815+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @Xia-Weiwen @leslie-fang-intel @msaroufim | oncall: jit,oncall: quantization | low | Critical |
2,471,790,867 | pytorch | `delta` argument of `HuberLoss()` with `int` and `bool` works against the error message | ### 🐛 Describe the bug
The `delta` argument of [HuberLoss()](https://pytorch.org/docs/stable/generated/torch.nn.HuberLoss.html) with a `complex` value gets the error message that `delta` must be `float`, as shown below:
```python
import torch
from torch import nn
tensor1 = torch.tensor([0.4, -0.8, -0.6])
tensor2 = torch.tensor([-0.2, 0.9, 0.4])
# ↓↓↓↓↓↓
huberloss = nn.HuberLoss(delta=1.+0.j)
huberloss(input=tensor1, target=tensor2) # Error
```
> TypeError: huber_loss(): argument 'delta' (position 4) must be float, not complex
However, the `delta` argument with `int` and `bool` values works, contradicting the error message above, as shown below:
```python
import torch
from torch import nn
tensor1 = torch.tensor([0.4, -0.8, -0.6])
tensor2 = torch.tensor([-0.2, 0.9, 0.4])
# ↓
huberloss = nn.HuberLoss(delta=1)
huberloss(input=tensor1, target=tensor2)
# tensor(0.6267)
# ↓↓↓↓
huberloss = nn.HuberLoss(delta=True)
huberloss(input=tensor1, target=tensor2)
# tensor(0.6267)
```
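Two things can be checked in plain Python. First, the accepted types match CPython's float coercion: `float()` accepts `bool` and `int` but not `complex`, which is presumably why `delta=1` and `delta=True` slip through while `1.+0.j` errors. Second, the printed value is consistent with the Huber definition with `delta=1`:

```python
def parse_delta(delta):
    # bool and int are implicitly convertible to float; complex is not,
    # so only the complex case raises TypeError.
    return float(delta)

def huber_loss(pred, target, delta=1.0):
    # Reference Huber loss with mean reduction, for the example tensors.
    total = 0.0
    for p, t in zip(pred, target):
        d = abs(p - t)
        total += 0.5 * d * d if d <= delta else delta * (d - 0.5 * delta)
    return total / len(pred)
```

With the example inputs this gives (0.18 + 1.2 + 0.5) / 3 ≈ 0.6267, matching the printed tensor, so `int`/`bool` deltas are silently coerced to the equivalent float rather than rejected.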
### Versions
```python
import torch
torch.__version__ # 2.3.1+cu121
```
cc @malfet @albanD | module: error checking,triaged,actionable,module: python frontend,module: edge cases | low | Critical |
2,471,792,362 | terminal | Support XTGETTCAP sequence |
# Description of the new feature/enhancement
Neovim uses the XTGETTCAP sequence to query OSC 52 support and, on a positive reply, enables copying over SSH via OSC 52. Windows Terminal supports OSC 52 but does not report it via XTGETTCAP, so neovim is not aware of it and won't enable OSC 52 for Windows Terminal.
# Proposed technical implementation details (optional)
https://github.com/neovim/neovim/issues/29504#issuecomment-2226374704
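For reference, the query is a DCS sequence carrying hex-encoded capability names (per xterm's ctlseqs documentation). A small helper to build it, e.g. for the `Ms` (set-selection / OSC 52 clipboard) capability that neovim probes:

```python
def xtgettcap_query(*caps):
    """Build an XTGETTCAP query: DCS + q <hex-name>[;<hex-name>...] ST.

    A terminal that supports a capability replies with DCS 1 + r ... ST;
    an unsupported name gets DCS 0 + r ... ST.
    """
    esc = "\x1b"
    names = ";".join(c.encode("ascii").hex() for c in caps)
    return f"{esc}P+q{names}{esc}\\"
```

So supporting this feature means recognizing `ESC P + q ... ESC \` and answering with the `DCS 1 + r` form for the capabilities the terminal implements.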
| Issue-Feature,Area-VT,Product-Terminal | low | Critical |
2,471,800,209 | godot | Physical mouse Right click in android | ### Tested versions
4.3
### System information
Android 10
### Issue description
I'm currently using 4.3; right-clicking with a physical mouse doesn't work, and even a long press with the left mouse button doesn't open the context menu.
### Steps to reproduce
Press right mouse button and nothing will happen
### Minimal reproduction project (MRP)
Just open any project in android editor
| bug,platform:android,topic:editor,topic:input | low | Major |
2,471,803,043 | godot | Script Editor sometimes loses window manager in X11 | ### Tested versions
All version of 4.x that support windowized script editor
### System information
Godot v4.3.stable.mono - Linux Mint 22 (Wilma) - X11 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4060 Ti (nvidia; 535.183.01) - AMD Ryzen 7 5700X 8-Core Processor (16 Threads)
### Issue description
When the script editor is detached as a separate window from the Godot editor, it sometimes loses its window manager and becomes unmanaged on the desktop. This only seems to happen to the script window; it does not happen to the main editor window nor to other applications on the system, just the script editor window.
### Steps to reproduce
It's random; I am not sure I have reliable steps to reproduce. It happens more often when resuming a session after the screen saver has started, but I can't say that's very reliable either.
### Minimal reproduction project (MRP)
This doesn't seem to be related to a project or it's contents. | bug,platform:linuxbsd,topic:editor,topic:porting,needs testing | low | Minor |
2,471,813,123 | next.js | Next incorrectly infers static builds when using `Promise.prototype.catch` on the dynamic action in a `Page` | ### Link to the code that reproduces this issue
https://github.com/chriskuech/next-catch-repro
### To Reproduce
# Repro case
1. Run `npm run build`
2. (The build fails)
# Control case
1. Comment out the `.catch(() => null)` in `page.tsx`
2. Run `npm run build`
3. (The build succeeds)
### Current vs. Expected behavior
Expected: the Next compiler sees that `Home` calls a function `triggerRepro` that contains `cookies()` and therefore marks the page as dynamic.
Observed: by calling `.catch(() => null)` on `triggerRepro()`, the Next compiler no longer infers that the function is dynamic and compiles the page as static, executing the code at build time and failing a runtime assertion.
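For intuition, here is a simplified pure-Python model of the suspected mechanism (assumption: Next's prerender works by having dynamic APIs like `cookies()` throw an internal bailout error, which the framework catches to mark the route dynamic; a user-level `.catch(() => null)` behaves like the `except` below and swallows that signal):

```python
class DynamicUsageError(Exception):
    """Stand-in for Next's internal dynamic-usage bailout error."""

def cookies():
    raise DynamicUsageError("cookies() called during prerender")

def prerender(page):
    # The framework runs the page once; an escaping bailout error
    # marks the route dynamic, a clean return marks it static.
    try:
        page()
        return "static"
    except DynamicUsageError:
        return "dynamic"

def page_without_catch():
    cookies()

def page_with_catch():
    try:
        cookies()  # models: await triggerRepro().catch(() => null)
    except Exception:
        pass
```

The static/dynamic decision flips purely because the rejection was handled inside the page before the framework could observe it.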
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:46 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6031
Available memory (MB): 36864
Available CPU cores: 14
Binaries:
Node: 20.14.0
npm: 10.7.0
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 14.2.5 // Latest available version is detected (14.2.5).
eslint-config-next: 14.2.5
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
output: standalone
```
### Which area(s) are affected? (Select all that apply)
Output (export/standalone)
### Which stage(s) are affected? (Select all that apply)
next build (local)
### Additional context
_No response_ | bug,Output (export/standalone) | low | Minor |
2,471,827,798 | kubernetes | [kubelet] Add support to expose init container resource usage in PodResources API | ### What would you like to be added?
**AS-IS:**
Currently, the PodResources API provides information on resource usage for main containers.(If the SidecarContainers feature gate is enabled, it also supports restartable init containers.)
However, it does not provide resource information for regular init containers.
https://github.com/kubernetes/kubernetes/blob/0f095cf0ba5156a972b446a5f65c963fe2a7919c/pkg/kubelet/apis/podresources/server_v1.go#L54-L91
**TO-BE:**
I propose that kubelet expose resource information for regular init containers as well. (or through an option in the `ListPodResourcesRequest`.)
### Why is this needed?
**EXAMPLE:**
To illustrate the use of the PodResources API, consider the example of the DCGM-exporter. When exposing GPU-related metrics, DCGM-exporter adds labels such as `namespace`, `pod`, and `container` names to indicate which GPU is being used. This information is retrieved by querying the PodResources API to check the resources allocated to each container within a Pod, focusing specifically on NVIDIA devices.
(Reference: https://github.com/NVIDIA/dcgm-exporter/blob/178d22f48543e4d4e09f744b5c0fbab9a3309028/pkg/dcgmexporter/kubernetes.go#L127-L139)
**AS-IS:**
For some exporters that rely on kubelet, the PodResources API does not provide information about resources (e.g. GPUs) used by init containers.
**Even if an init container runs for a long time, there is still no way to determine which `DeviceID` (e.g. GPU device) it used.**
**TO-BE:**
With the proposed enhancement, the PodResources API would expose the `DeviceID` for both init containers and main containers. (Even if the same device is detected across multiple containers, this can be resolved by cross-referencing with the container's running status or other available methods.)
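To illustrate the consumer side, a sketch of the deviceID → container mapping an exporter like DCGM-exporter builds from a PodResources-style response (assumption: a simplified dict shape standing in for the v1 protobuf; the `include_init` flag represents the proposed addition):

```python
def device_owners(pod_resources, include_init=False):
    """Map each device ID to its owning (namespace, pod, container)."""
    owners = {}
    for pod in pod_resources:
        containers = list(pod["containers"])
        if include_init:
            # Proposed: also surface regular init containers.
            containers += pod.get("init_containers", [])
        for c in containers:
            for dev in c.get("devices", []):
                for dev_id in dev["device_ids"]:
                    owners[dev_id] = (pod["namespace"], pod["name"], c["name"])
    return owners

pods = [{
    "namespace": "ml", "name": "train-0",
    "containers": [
        {"name": "main", "devices": [{"device_ids": ["GPU-1"]}]},
    ],
    "init_containers": [
        {"name": "warmup", "devices": [{"device_ids": ["GPU-0"]}]},
    ],
}]
```

Without the init-container data, the device used by `warmup` simply never appears in the mapping, so its metrics cannot be labeled.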
/sig node | sig/node,kind/feature,triage/accepted | low | Major |
2,471,829,100 | vscode | Ctrl+h results in wrong scroll position on vscode.dev | Steps to Reproduce:
1. Paste `test` into a document on [vscode.dev](https://vscode.dev/)
2. Select it and string replace it with nothing
3. Result is a wrong scroll position
Visually:
1.

2.

| bug,editor-find | low | Minor |
2,471,832,337 | electron | [Bug]: `setBackgroundMaterial` does not work | ### Preflight Checklist
- [X] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [X] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [X] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
31.1.4
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 23H2 22631.4037
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
`win.setBackgroundMaterial('mica')` should apply the Mica background material effect to the window.
### Actual Behavior
Nothing happens. Other values like 'tabbed' or 'acrylic' don't work either.
### Testcase Gist URL
https://gist.github.com/kingyue737/d2ca0f1d5318f5915538cb05879bfb81
### Additional Information
_No response_ | platform/windows,bug :beetle:,has-repro-gist,31-x-y | low | Critical |
2,471,848,875 | vscode | Quickly locate the current thread in the Call Stack view |
When debugging a program with thousands of threads, it is often a waste of time to find the thread where the breakpoint is located. It may be useful to add a button to the call stack view to quickly locate the current thread.
<img width="398" alt="Screenshot 2024-08-18 at 16 03 41" src="https://github.com/user-attachments/assets/0e3b1e65-c1e1-4a23-b506-8419e5df5f66">
| feature-request,debug | low | Critical |
2,471,876,075 | vscode | Surrounding with brackets doesn’t work in some cases |
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.93.0-insider (d751e4324d12370e950f8cd031aa8829637ce300)
- OS Version: macOS Sonoma 14.6.1
Surrounding selected text with brackets doesn’t work for some TypeScript/JavaScript arrow functions. VS Code Insiders 1.91 (June 2024) is the first release build where this stopped working.
**Steps to Reproduce:**
1. Create a JavaScript or TypeScript file with the following content:
```ts
const fn = () =>
doSomething()
```
2. Select `doSomething()`.
3. Type `{`, `[`, or `(`. VS Code should surround the selection with brackets, but the selected text is deleted and the opening bracket is inserted at the beginning of the line.
**Expected result:**
```ts
const fn = () =>
{doSomething()}
```
**Actual result:**
```ts
const fn = () =>
{
``` | bug,javascript,editor-autoclosing | low | Critical |
2,471,876,132 | godot | `AudioStreamInteractive` throws errors on project open if clip with transition is deleted. | ### Tested versions
- Reproducible in v4.3.stable.mono.official [77dcf97d8]
### System information
Windows 10 - Vulkan (forward+) - dedicated
### Issue description
Removing a clip that has transitions produces errors on project restart.
> Edit: The clip doesn't need to have a direct transition for this to occur.
> If you set up any AnyClip transition, this error can still happen.
Nothing is broken, but errors are thrown each time the project is opened.
These error lines are shown for each deleted clip that had a transition.
```
modules/interactive_music/audio_stream_interactive.cpp:251 - Condition "p_to_clip < CLIP_ANY || p_to_clip >= clip_count" is true.
modules/interactive_music/audio_stream_interactive.cpp:250 - Condition "p_from_clip < CLIP_ANY || p_from_clip >= clip_count" is true.
```
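The failing checks above can be illustrated with a small sketch (hypothetical JavaScript mirroring the C++ condition; `CLIP_ANY` is assumed to be the sentinel `-1`):

```javascript
// Sketch of the dangling-transition index problem described above.
// CLIP_ANY (-1) is an assumed sentinel meaning "any clip".
const CLIP_ANY = -1;

function isValidClip(index, clipCount) {
  // Inverted form of the engine's error condition:
  // "index < CLIP_ANY || index >= clip_count" means invalid.
  return index >= CLIP_ANY && index < clipCount;
}

// A transition saved while 3 clips existed:
const transition = { from: 0, to: 2 };

// After deleting clip 2, only 2 clips remain, so the saved
// transition's `to` index fails validation on project load:
isValidClip(transition.to, 3); // valid before the deletion
isValidClip(transition.to, 2); // invalid -> error printed on open
```

Deleting a clip shrinks `clip_count` without rewriting saved transitions, so any stored index at or beyond the new count trips the check on every load.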
https://github.com/user-attachments/assets/ff329ddb-11b6-4524-af32-27caf2ed5520
### Steps to reproduce
- Create a transition to a clip
- Save
- Delete the clip that used that transition.
- Save and restart the project
Result: Errors in the console when the project is started.
### Minimal reproduction project (MRP)
Project with the errors:
[errored_interactive_music.zip](https://github.com/user-attachments/files/16647837/errored_interactive_music.zip)
| bug,needs testing,topic:audio | low | Critical |
2,471,879,020 | tauri | [bug] Android application focus | ### Describe the bug
Neither `window.onfocus`, `document.onvisibilitychange`, nor `appWindow.listen("tauri://focus")` detects when the current application becomes active.
### Reproduction
Rust:
```
#[tauri::command(async)]
async fn test(){
println!("Test");
}
```
Js:
```
import { invoke } from "@tauri-apps/api/core";
import { getCurrentWindow } from "@tauri-apps/api/window";
const appWindow = getCurrentWindow();
appWindow.listen("tauri://focus", ({ event, payload }) => {
invoke("test", {});
})
```
- tauri android dev
### Expected behavior
`Test` should be printed every time the user switches to our app (on Android).
### Full `tauri info` output
```text
[✔] Environment
- OS: Linux 22.04 X64
✔ webkit2gtk-4.1: 2.44.2
✔ rsvg2: 2.52.5
✔ rustc: 1.80.1 (3f5fd8dd4 2024-08-06)
✔ cargo: 1.80.1 (376290515 2024-07-16)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: 1.80.1-x86_64-unknown-linux-gnu (default)
- node: 21.6.1
- npm: 10.2.4
[-] Packages
- tauri [RUST]: 2.0.0-rc.2
- tauri-build [RUST]: 2.0.0-rc.2
- wry [RUST]: 0.41.0
- tao [RUST]: 0.28.1
- tauri-cli [RUST]: 2.0.0-rc.3
- @tauri-apps/api [NPM]: 2.0.0-rc.0
- @tauri-apps/cli [NPM]: 2.0.0-rc.3
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,471,908,258 | godot | Control in invisible container acquires focus and loses it immediately with custom neighbors | ### Tested versions
- Reproducible in v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Ubuntu 24.04 LTS 24.04 - Wayland - GLES3 (Compatibility)
### Issue description
By default (current and correct behavior), when a control is not visible, either because its `visible` property is set to `false` or because its parent is not visible, the control cannot acquire focus, and focus skips it.
When a control's neighbors, `focus_next`, or `focus_previous` are set explicitly, the behavior is the same whether the control's `visible` property is set to `true` or `false` (expected).
However, if the control's `visible` property is set to `true` but the control is effectively not visible because a parent is not visible, the control CAN acquire focus and then loses it immediately, resulting in no control having focus. This makes focus navigation between controls impossible when mixing invisible containers and explicitly set neighbors.
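A minimal sketch of the behavior I'd expect (hypothetical JavaScript, not engine code): a control should only accept focus when its *effective* visibility — its own flag AND all ancestors' — is true, regardless of whether its neighbors were set explicitly:

```javascript
// Hypothetical model of focus eligibility; not Godot source code.
function isVisibleInTree(node) {
  // A node is effectively visible only if every ancestor is visible too.
  for (let n = node; n; n = n.parent) {
    if (!n.visible) return false;
  }
  return true;
}

function canGrabFocus(node) {
  // Checking only `node.visible` reproduces the bug described above;
  // checking the whole ancestor chain gives the expected behavior.
  return isVisibleInTree(node);
}
```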
https://github.com/user-attachments/assets/09a2d88b-2ecb-426d-8979-99d1a4570577
### Steps to reproduce
- Open the MRP
- Check / uncheck the checkboxes, and navigate focus between controls with TAB and SHIFT + TAB, to test the configurations I mentioned.
### Minimal reproduction project (MRP)
[mrp-invisible-control-focus.zip](https://github.com/user-attachments/files/16648076/mrp-invisible-control-focus.zip)
| bug,topic:gui | low | Minor |
2,471,916,838 | tauri | [feat] support for non-http protocols on Windows (and Android) | ### Describe the problem
I'm able to use custom protocols in my app when built for Linux, but on Windows I get the error:
> "Failed to launch 'myprotocol://blah' because the scheme does not have a registered handler."
### Describe the solution you'd like
I'm not absolutely sure this isn't a bug, or just something I don't know how to do, but what I want is...
- the ability to implement a custom protocol handler (e.g. to handle URLs of the form `myprotocol://something`) on all platforms without the need to translate the URL 'manually' into a platform specific form (which on Windows I think would be `http://myprotocol.something`).
### Alternatives considered
I understand from [discussion on Discord](https://discord.com/channels/616186924390023171/1047150269156294677/threads/1270499751846219948) that the Tauri core API `convertFileSrc()` can be used to translate the URL before applying it, provided you have programmatic access to it. However, I don't believe this is a solution, because I am using an iframe to load content which has URLs embedded in it that may include the custom protocol, or may load other content that does so (e.g. if the user clicks a link). In those cases there's no way for me to translate the URLs programmatically.
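For reference, the kind of translation `convertFileSrc()` performs could be sketched like this (a hypothetical approximation — the exact URL shape Tauri produces on each platform may differ from the `http://myprotocol.localhost/...` form assumed here):

```javascript
// Hypothetical sketch of a scheme translation similar to convertFileSrc().
// The output format is an assumption, not Tauri's documented behavior.
function toPlatformUrl(url, platform) {
  const m = /^([a-z][a-z0-9+.-]*):\/\/(.*)$/i.exec(url);
  if (!m) return url; // not an absolute URL; leave untouched
  const [, scheme, rest] = m;
  if (scheme === "http" || scheme === "https") return url;
  if (platform === "windows" || platform === "android") {
    // Windows/Android webviews can't register arbitrary schemes,
    // so custom schemes would be served over http instead.
    return `http://${scheme}.localhost/${rest}`;
  }
  return url; // Linux/macOS can handle the custom scheme directly
}
```

The point is that any such translation must be applied to every URL, which is impossible for URLs embedded in third-party content loaded inside the iframe.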
### Additional context
I was told on the Discord that this issue should also apply to Android but I haven't confirmed this. However, some time ago I was trying an early version of my app in Android Studio emulator, and I thought that it had succeeded. I can't be sure though, and have found Android Studio unreliable so not easy for me to check.
The possibility that it was working on Android leads me to hope that there is some way to do this without needing `convertFileSrc()` or any other programmatic method, which is why I'm not sure whether this is an existing feature or a feature request.
Unfortunately the Discord topic isn't resolving so I decided to open an issue so it won't get lost. | type: feature request | low | Critical |
2,471,924,410 | bitcoin | bitcoind shouldn't be shutdown automatically despite wallet synchronisation error | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current behaviour
Bitcoind startup is terminated.
### Expected behaviour
bitcoind should complete startup and keep running successfully.
### Steps to reproduce
Step:
`$ ./bitcoind`
### Relevant log output
Result (logs):
Keys: 9 plaintext, 0 encrypted, 0 w/ metadata, 9 total. Unknown wallet records: 0
Wallet completed loading in 326ms
Error: Prune: last wallet synchronisation goes beyond pruned data. You need to -reindex (download the whole blockchain again in case of pruned node)
Error: Prune: last wallet synchronisation goes beyond pruned data. You need to -reindex (download the whole blockchain again in case of pruned node)
Shutdown: In progress...
scheduler thread exit
Shutdown: done
### How did you obtain Bitcoin Core
Compiled from source
### What version of Bitcoin Core are you using?
master
### Operating system and version
Linux
### Machine specifications
_No response_ | Wallet | low | Critical |
2,471,928,156 | rust | const_heap feature can be used to leak mutable memory into final value of constant | Consider this code:
```rust
#![feature(core_intrinsics)]
#![feature(const_heap)]
#![feature(const_mut_refs)]
use std::intrinsics;
const BAR: *mut i32 = unsafe { intrinsics::const_allocate(4, 4) as *mut i32 };
fn main() {}
```
This code is problematic because when `BAR` is used multiple times in the program, it will always point to the same global allocation, violating the idea that consts behave as-if their initializer is inlined everywhere they are used. Furthermore, under our current interning strategy this allocation will end up being immutable in the runtime phase of the program, so writing it is UB, which could be quite surprising.
We have a safety net in the interner that catches this problem, but the safety net has a big gaping hole around shared references with interior mutability, and can hence easily be circumvented:
```rust
#![feature(core_intrinsics)]
#![feature(const_heap)]
#![feature(const_mut_refs, const_refs_to_cell)]
use std::intrinsics;
use std::cell::Cell;
const BAR: *mut i32 = unsafe {
let launder = &*(intrinsics::const_allocate(4, 4) as *const Cell<i32>);
launder as *const _ as *mut i32
};
fn main() {}
```
The gaping hole in the safety net is needed due to https://github.com/rust-lang/rust/issues/121610 and https://github.com/rust-lang/unsafe-code-guidelines/issues/493; also see the description of https://github.com/rust-lang/rust/pull/128543 for more context.
This seems like a pretty major blocker for the const_heap feature, unless we want to just declare this UB "ex machina".
Cc @rust-lang/wg-const-eval
(Not something to be discussed any time soon, I am filing this because tidy forced me to have an issue number for the ICE that ensues from this mutable-ref-escape-prevention-bypass.) | P-low,A-const-eval,F-const_mut_refs,requires-incomplete-features,F-core_intrinsics,F-const_heap | low | Minor |
2,471,928,772 | pytorch | pytorch/aten/src/ATen/cuda/CUDABlas.cpp:1422:28: error | ### 🐛 Describe the bug
I am trying to compile pytorch on the PSC Bridges2 system and getting the following error using make in the build directory.
/jet/home/gcommuni/projects/AI-Proteins/pytorch/aten/src/ATen/cuda/CUDABlas.cpp:1422:28: error: 'CUBLASLT_MATMUL_DESC_A_SCALE_POINTER' was not declared in this scope; did you mean 'CUBLASLT_MATMUL_DESC_BIAS_POINTER'?
1422 | computeDesc.setAttribute(CUBLASLT_MATMUL_DESC_A_SCALE_POINTER, mat1_scale_ptr);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| CUBLASLT_MATMUL_DESC_BIAS_POINTER
/jet/home/gcommuni/projects/AI-Proteins/pytorch/aten/src/ATen/cuda/CUDABlas.cpp:1423:28: error: 'CUBLASLT_MATMUL_DESC_B_SCALE_POINTER' was not declared in this scope; did you mean 'CUBLASLT_MATMUL_DESC_BIAS_POINTER'?
1423 | computeDesc.setAttribute(CUBLASLT_MATMUL_DESC_B_SCALE_POINTER, mat2_scale_ptr);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| CUBLASLT_MATMUL_DESC_BIAS_POINTER
/jet/home/gcommuni/projects/AI-Proteins/pytorch/aten/src/ATen/cuda/CUDABlas.cpp:1424:28: error: 'CUBLASLT_MATMUL_DESC_D_SCALE_POINTER' was not declared in this scope; did you mean 'CUBLASLT_MATMUL_DESC_BIAS_POINTER'?
1424 | computeDesc.setAttribute(CUBLASLT_MATMUL_DESC_D_SCALE_POINTER, result_scale_ptr);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| CUBLASLT_MATMUL_DESC_BIAS_POINTER
/jet/home/gcommuni/projects/AI-Proteins/pytorch/aten/src/ATen/cuda/CUDABlas.cpp:1428:30: error: 'CUBLASLT_MATMUL_DESC_AMAX_D_POINTER' was not declared in this scope; did you mean 'CUBLASLT_MATMUL_DESC_BIAS_POINTER'?
1428 | computeDesc.setAttribute(CUBLASLT_MATMUL_DESC_AMAX_D_POINTER, amax_ptr);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| CUBLASLT_MATMUL_DESC_BIAS_POINTER
/jet/home/gcommuni/projects/AI-Proteins/pytorch/aten/src/ATen/cuda/CUDABlas.cpp:1432:28: error: 'CUBLASLT_MATMUL_DESC_FAST_ACCUM' was not declared in this scope; did you mean 'CUBLASLT_MATMUL_DESC_TRANSC'?
1432 | computeDesc.setAttribute(CUBLASLT_MATMUL_DESC_FAST_ACCUM, fastAccuMode);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| CUBLASLT_MATMUL_DESC_TRANSC
/jet/home/gcommuni/projects/AI-Proteins/pytorch/aten/src/ATen/cuda/CUDABlas.cpp:1446:30: error: 'CUBLASLT_MATMUL_DESC_BIAS_DATA_TYPE' was not declared in this scope; did you mean 'CUBLASLT_MATMUL_DESC_SCALE_TYPE'?
1446 | computeDesc.setAttribute(CUBLASLT_MATMUL_DESC_BIAS_DATA_TYPE, ScalarTypeToCudaDataType(bias_dtype));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| CUBLASLT_MATMUL_DESC_SCALE_TYPE
make[2]: *** [caffe2/CMakeFiles/torch_cuda.dir/build.make:6122: caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/cuda/CUDABlas.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:7882: caffe2/CMakeFiles/torch_cuda.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
### Error logs
[ 88%] Built target nccl_slim_external
Consolidate compiler generated dependencies of target torch_cuda
[ 88%] Building CXX object caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/cuda/CUDABlas.cpp.o
In file included from /jet/home/gcommuni/projects/AI-Proteins/pytorch/aten/src/ATen/core/IListRef.h:631,
from /jet/home/gcommuni/projects/AI-Proteins/pytorch/aten/src/ATen/DeviceGuard.h:3,
from /jet/home/gcommuni/projects/AI-Proteins/pytorch/aten/src/ATen/ATen.h:9,
from /jet/home/gcommuni/projects/AI-Proteins/pytorch/aten/src/ATen/cuda/CUDABlas.cpp:5:
/jet/home/gcommuni/projects/AI-Proteins/pytorch/aten/src/ATen/core/IListRef_inl.h: In static member function 'static c10::detail::IListRefConstRef<at::OptionalTensorRef> c10::detail::IListRefTagImpl<c10::IListRefTag::Boxed, at::OptionalTensorRef>::iterator_get(const c10::List<std::optional<at::Tensor> >::const_iterator&)':
/jet/home/gcommuni/projects/AI-Proteins/pytorch/aten/src/ATen/core/IListRef_inl.h:171:17: warning: possibly dangling reference to a temporary [-Wdangling-reference]
171 | const auto& ivalue = (*it).get();
| ^~~~~~
/jet/home/gcommuni/projects/AI-Proteins/pytorch/aten/src/ATen/core/IListRef_inl.h:171:35: note: the temporary was destroyed at the end of the full expression '(& it)->c10::impl::ListIterator<std::optional<at::Tensor>, __gnu_cxx::__normal_iterator<c10::IValue*, std::vector<c10::IValue> > >::operator*().c10::impl::ListElementReference<std::optional<at::Tensor>, __gnu_cxx::__normal_iterator<c10::IValue*, std::vector<c10::IValue> > >::get()'
171 | const auto& ivalue = (*it).get();
| ~~~~~~~~~^~
/jet/home/gcommuni/projects/AI-Proteins/pytorch/aten/src/ATen/cuda/CUDABlas.cpp: In function 'void at::cuda::blas::scaled_gemm(char, char, int64_t, int64_t, int64_t, const void*, const void*, int64_t, c10::ScalarType, const void*, const void*, int64_t, c10::ScalarType, const void*, c10::ScalarType, void*, const void*, int64_t, c10::ScalarType, void*, bool)':
/jet/home/gcommuni/projects/AI-Proteins/pytorch/aten/src/ATen/cuda/CUDABlas.cpp:1422:28: error: 'CUBLASLT_MATMUL_DESC_A_SCALE_POINTER' was not declared in this scope; did you mean 'CUBLASLT_MATMUL_DESC_BIAS_POINTER'?
1422 | computeDesc.setAttribute(CUBLASLT_MATMUL_DESC_A_SCALE_POINTER, mat1_scale_ptr);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| CUBLASLT_MATMUL_DESC_BIAS_POINTER
/jet/home/gcommuni/projects/AI-Proteins/pytorch/aten/src/ATen/cuda/CUDABlas.cpp:1423:28: error: 'CUBLASLT_MATMUL_DESC_B_SCALE_POINTER' was not declared in this scope; did you mean 'CUBLASLT_MATMUL_DESC_BIAS_POINTER'?
1423 | computeDesc.setAttribute(CUBLASLT_MATMUL_DESC_B_SCALE_POINTER, mat2_scale_ptr);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| CUBLASLT_MATMUL_DESC_BIAS_POINTER
/jet/home/gcommuni/projects/AI-Proteins/pytorch/aten/src/ATen/cuda/CUDABlas.cpp:1424:28: error: 'CUBLASLT_MATMUL_DESC_D_SCALE_POINTER' was not declared in this scope; did you mean 'CUBLASLT_MATMUL_DESC_BIAS_POINTER'?
1424 | computeDesc.setAttribute(CUBLASLT_MATMUL_DESC_D_SCALE_POINTER, result_scale_ptr);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| CUBLASLT_MATMUL_DESC_BIAS_POINTER
/jet/home/gcommuni/projects/AI-Proteins/pytorch/aten/src/ATen/cuda/CUDABlas.cpp:1428:30: error: 'CUBLASLT_MATMUL_DESC_AMAX_D_POINTER' was not declared in this scope; did you mean 'CUBLASLT_MATMUL_DESC_BIAS_POINTER'?
1428 | computeDesc.setAttribute(CUBLASLT_MATMUL_DESC_AMAX_D_POINTER, amax_ptr);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| CUBLASLT_MATMUL_DESC_BIAS_POINTER
/jet/home/gcommuni/projects/AI-Proteins/pytorch/aten/src/ATen/cuda/CUDABlas.cpp:1432:28: error: 'CUBLASLT_MATMUL_DESC_FAST_ACCUM' was not declared in this scope; did you mean 'CUBLASLT_MATMUL_DESC_TRANSC'?
1432 | computeDesc.setAttribute(CUBLASLT_MATMUL_DESC_FAST_ACCUM, fastAccuMode);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| CUBLASLT_MATMUL_DESC_TRANSC
/jet/home/gcommuni/projects/AI-Proteins/pytorch/aten/src/ATen/cuda/CUDABlas.cpp:1446:30: error: 'CUBLASLT_MATMUL_DESC_BIAS_DATA_TYPE' was not declared in this scope; did you mean 'CUBLASLT_MATMUL_DESC_SCALE_TYPE'?
1446 | computeDesc.setAttribute(CUBLASLT_MATMUL_DESC_BIAS_DATA_TYPE, ScalarTypeToCudaDataType(bias_dtype));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| CUBLASLT_MATMUL_DESC_SCALE_TYPE
make[2]: *** [caffe2/CMakeFiles/torch_cuda.dir/build.make:6122: caffe2/CMakeFiles/torch_cuda.dir/__/aten/src/ATen/cuda/CUDABlas.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:7882: caffe2/CMakeFiles/torch_cuda.dir/all] Error 2
make: *** [Makefile:146: all] Error 2
### Minified repro
env
SLURM_MPI_TYPE=none
CONDA_SHLVL=4
NVM_DIR=/jet/home/gcommuni/.nvm
LS_COLORS=rs=0:di=38;5;33:ln=38;5;51:mh=00:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=01;05;37;41:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;40:*.tar=38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lz4=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.dz=38;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.zst=38;5;9:*.tzst=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5;9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.wim=38;5;9:*.swm=38;5;9:*.dwm=38;5;9:*.esd=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.mjpg=38;5;13:*.mjpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.tga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13:*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5;13:*.webm=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=38;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac=38;5;45:*.m4a=38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.oga=38;5;45:*.opus=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:
LD_LIBRARY_PATH=/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/comm_libs/nvshmem/lib:/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/comm_libs/nccl/lib:/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/comm_libs/mpi/lib:/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/math_libs/lib64:/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/compilers/lib:/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/compilers/extras/qd/lib:/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/cuda/extras/CUPTI/lib64:/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/cuda/lib64:/opt/packages/gcc/v13.2.1-p20240113/b2gpu/lib64:/opt/packages/cuda/v12.4.0/lib64:/opt/packages/cuda/v12.4.0/nvvm/lib64:/opt/packages/cuda/v12.4.0/extras/CUPTI/lib64:/jet/home/gcommuni/apps/openbabel-3.0.0-pathced/lib:/opt/packages/gcc/v13.2.1-p20240113/b2rm/lib64::
CONDA_EXE=/ocean/projects/che070006p/gcommuni/AI-Proteins/bin/conda
SRUN_DEBUG=3
SLURM_STEP_ID=4294967290
SLURM_NODEID=0
SLURM_TASK_PID=102066
__LMOD_REF_COUNT_PATH=/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/compilers/extras/qd/bin:1;/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/comm_libs/mpi/bin:1;/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/compilers/bin:1;/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/cuda/bin:1;/opt/packages/gcc/v13.2.1-p20240113/b2gpu/bin:1;/opt/packages/cuda/v12.4.0/bin:1;/opt/packages/cuda/v12.4.0/libnvvp:1;/ocean/projects/che070006p/gcommuni/AI-Proteins/alphafoldenv/bin:1;/ocean/projects/che070006p/gcommuni/AI-Proteins/bin:1;/opt/packages/anaconda3-2022.10/condabin:2;/jet/home/gcommuni/apps/osra-2.1.3/bin:2;/jet/home/gcommuni/apps/GraphicsMagick-1.3.42/bin:2;/jet/home/gcommuni/apps/lzip-1.15/bin:2;/jet/home/gcommuni/.local/bin:4;/jet/home/gcommuni/bin:2;/jet/home/gcommuni/apps/firefox:2;/jet/home/gcommuni/apps/opt/google/chrome:2;/jet/home/gcommuni/apps:2;/opt/packages/psc.allocations.user/bin:1;/opt/packages/allocations/bin:1;/opt/packages/gcc/v13.2.1-p20240113/b2rm/bin:1;/opt/packages/anaconda3-2022.10/bin:1;/usr/local/bin:1;/usr/bin:1;/usr/local/sbin:1;/usr/sbin:1;/opt/packages/interact/bin:1;/opt/puppetlabs/bin:1
_ModuleTable002_=YTMvMjAyMi4xMCIsWyJsb2FkT3JkZXIiXT0zLHByb3BUPXt9LFsic3RhY2tEZXB0aCJdPTAsWyJzdGF0dXMiXT0iYWN0aXZlIixbInVzZXJOYW1lIl09ImFuYWNvbmRhMyIsfSxjdWRhPXtbImZuIl09Ii9vcHQvbW9kdWxlZmlsZXMvcHJvZHVjdGlvbi9jdWRhLzEyLjQuMC5sdWEiLFsiZnVsbE5hbWUiXT0iY3VkYS8xMi40LjAiLFsibG9hZE9yZGVyIl09NCxwcm9wVD17fSxbInN0YWNrRGVwdGgiXT0wLFsic3RhdHVzIl09ImFjdGl2ZSIsWyJ1c2VyTmFtZSJdPSJjdWRhLzEyLjQuMCIsfSxnY2M9e1siZm4iXT0iL29wdC9tb2R1bGVmaWxlcy9wcm9kdWN0aW9uL2djYy8xMy4yLjEtcDIwMjQwMTEzLmx1YSIsWyJmdWxsTmFtZSJdPSJnY2MvMTMuMi4xLXAyMDI0MDExMyIsWyJs
SSH_CONNECTION=50.239.217.194 59810 128.182.81.29 22
INCLUDE=/opt/packages/gcc/v13.2.1-p20240113/b2gpu/include:/opt/packages/cuda/v12.4.0/include:/opt/packages/anaconda3-2022.10/include
SLURM_PRIO_PROCESS=0
__LMOD_REF_COUNT_C_INCLUDE_PATH=/opt/packages/anaconda3-2022.10/include:1
SALLOC_PARTITION=GPU-shared
LANG=en_US.UTF-8
SLURM_SUBMIT_DIR=/ocean/projects/che070006p/gcommuni/AI-Proteins/pytorch/build
HISTCONTROL=ignoredups
DISPLAY=10.8.11.29:11.0
HOSTNAME=v008.ib.bridges2.psc.edu
NVHPC_ROOT=/opt/packages/nvidia/hpc_sdk//Linux_x86_64/22.9
OLDPWD=/jet/home/gcommuni/projects/AI-Proteins
__LMOD_REF_COUNT__LMFILES_=/opt/modulefiles/production/allocations/1.0.lua:1;/opt/modulefiles/production/psc.allocations.user/1.0.lua:1;/opt/modulefiles/production/anaconda3/2022.10.lua:1;/opt/modulefiles/production/cuda/12.4.0.lua:1;/opt/modulefiles/production/gcc/13.2.1-p20240113.lua:1;/opt/modulefiles/production/openmpi/4.0.5-nvhpc22.9:1
C_INCLUDE_PATH=/opt/packages/anaconda3-2022.10/include
SLURM_STEPID=4294967290
SLURM_SRUN_COMM_HOST=10.8.11.29
OMPI_MCA_btl_openib_warn_no_device_params_found=0
CUDA_PATH=/opt/packages/cuda/v12.4.0
ROCR_VISIBLE_DEVICES=0
__LMOD_REF_COUNT_CPLUS_INCLUDE_PATH=/opt/packages/anaconda3-2022.10/include:1
CONDA_PREFIX=/ocean/projects/che070006p/gcommuni/AI-Proteins
SLURM_PROCID=0
SLURM_JOB_GID=9482
__LMOD_REF_COUNT_INCLUDE=/opt/packages/gcc/v13.2.1-p20240113/b2gpu/include:1;/opt/packages/cuda/v12.4.0/include:1;/opt/packages/anaconda3-2022.10/include:1
__LMOD_REF_COUNT_LD_LIBRARY_PATH=/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/comm_libs/nvshmem/lib:1;/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/comm_libs/nccl/lib:1;/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/comm_libs/mpi/lib:1;/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/math_libs/lib64:1;/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/compilers/lib:1;/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/compilers/extras/qd/lib:1;/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/cuda/extras/CUPTI/lib64:1;/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/cuda/lib64:1;/opt/packages/gcc/v13.2.1-p20240113/b2gpu/lib64:1;/opt/packages/cuda/v12.4.0/lib64:1;/opt/packages/cuda/v12.4.0/nvvm/lib64:1;/opt/packages/cuda/v12.4.0/extras/CUPTI/lib64:1;/jet/home/gcommuni/apps/openbabel-3.0.0-pathced/lib:2;/opt/packages/gcc/v13.2.1-p20240113/b2rm/lib64:1
DEFCUDAVERSION=11.7
__LMOD_REF_COUNT_PKG_CONFIG_PATH=/opt/packages/gcc/v13.2.1-p20240113/b2gpu/lib64/pkgconfig:1;/opt/packages/cuda/v12.4.0/lib64/pkgconfig:1;/opt/packages/gcc/v13.2.1-p20240113/b2rm/lib64/pkgconfig:1;/opt/packages/anaconda3-2022.10/lib/pkgconfig:1
SLURMD_NODENAME=v008
_ModuleTable004_=WyJmdWxsTmFtZSJdPSJwc2MuYWxsb2NhdGlvbnMudXNlci8xLjAiLFsibG9hZE9yZGVyIl09Mixwcm9wVD17fSxbInN0YWNrRGVwdGgiXT0wLFsic3RhdHVzIl09ImFjdGl2ZSIsWyJ1c2VyTmFtZSJdPSJwc2MuYWxsb2NhdGlvbnMudXNlciIsfSx9LG1wYXRoQT17Ii9ldGMvc2NsL21vZHVsZWZpbGVzIiwiL29wdC9tb2R1bGVmaWxlcy9wcm9kdWN0aW9uIiwiL29wdC9tb2R1bGVmaWxlcy9wcmVwcm9kdWN0aW9uIiwiL29wdC9tb2R1bGVmaWxlcy9kZXByZWNhdGVkIiwiL3Vzci9zaGFyZS9tb2R1bGVmaWxlcyIsIi91c3Ivc2hhcmUvbW9kdWxlZmlsZXMvTGludXgiLCIvdXNyL3NoYXJlL21vZHVsZWZpbGVzL0NvcmUiLCIvdXNyL3NoYXJlL2xtb2QvbG1vZC9tb2R1bGVmaWxl
VIRTUAL_ENV=/ocean/projects/che070006p/gcommuni/AI-Proteins/alphafoldenv
LOCAL=/local
SLURM_TASKS_PER_NODE=5
__LMOD_REF_COUNT_PYTHONPATH=/opt/packages/gcc/v13.2.1-p20240113/b2gpu/share/gcc-13.2.1/python:1;/opt/packages/gcc/v13.2.1-p20240113/b2rm/share/gcc-13.2.1/python:1;/jet/home/gcommuni/apps/SMILES_Vikrant/pyQRC:2
_CE_M=
which_declare=declare -f
CC=/opt/packages/nvidia/hpc_sdk//Linux_x86_64/22.9/compilers/bin/nvc
XDG_SESSION_ID=149318
USER=gcommuni
SLURM_NNODES=1
SLURM_LAUNCH_NODE_IPADDR=10.8.11.29
CONDA_PREFIX_1=/opt/packages/anaconda3-2022.10
CONDA_PREFIX_3=/opt/packages/anaconda3-2022.10
CONDA_PREFIX_2=/ocean/projects/che070006p/gcommuni/AI-Proteins
CONDA_PKGS_DIRS=$HOME/.conda/pkgs/
EMACSLOADPATH=/opt/packages/gcc/v13.2.1-p20240113/b2gpu/share/emacs/site-lisp:/opt/packages/gcc/v13.2.1-p20240113/b2rm/share/emacs/site-lisp::
SLURM_STEP_TASKS_PER_NODE=1
__LMOD_REF_COUNT_MODULEPATH=/etc/scl/modulefiles:2;/opt/modulefiles/production:1;/opt/modulefiles/preproduction:1;/opt/modulefiles/deprecated:1;/usr/share/modulefiles:1;/usr/share/modulefiles/Linux:1;/usr/share/modulefiles/Core:1;/usr/share/lmod/lmod/modulefiles/Core:1
__LMOD_REF_COUNT_LOADEDMODULES=allocations/1.0:1;psc.allocations.user/1.0:1;anaconda3/2022.10:1;cuda/12.4.0:1;gcc/13.2.1-p20240113:1;openmpi/4.0.5-nvhpc22.9:1
PWD=/jet/home/gcommuni/projects/AI-Proteins/pytorch/build
RAMDISK=/dev/shm
SLURM_JOB_NODELIST=v008
HOME=/jet/home/gcommuni
SLURM_CLUSTER_NAME=bridges2
CONDA_PYTHON_EXE=/ocean/projects/che070006p/gcommuni/AI-Proteins/bin/python
CMAKE_PREFIX_PATH=/opt/packages/gcc/v13.2.1-p20240113/b2gpu:/opt/packages/gcc/v13.2.1-p20240113/b2rm:/opt/packages/anaconda3-2022.10
SLURM_NODELIST=v008
SLURM_GPUS_ON_NODE=1
SSH_CLIENT=50.239.217.194 59810 22
LMOD_VERSION=8.2.7
CUDA_HOME=/opt/packages/cuda/v12.4.0
OPAL_PREFIX=/opt/packages/nvidia/hpc_sdk//Linux_x86_64/22.9/comm_libs/mpi
CPATH=/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/compilers/extras/qd/include/qd:/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/comm_libs/nvshmem/include:/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/comm_libs/nccl/include:/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/comm_libs/mpi/include:/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/math_libs/include:/opt/packages/gcc/v13.2.1-p20240113/b2gpu/include:/opt/packages/cuda/v12.4.0/include:/opt/packages/gcc/v13.2.1-p20240113/b2rm/include
__LMOD_REF_COUNT_INFOPATH=/opt/packages/gcc/v13.2.1-p20240113/b2gpu/share/info:1;/opt/packages/gcc/v13.2.1-p20240113/b2rm/share/info:1
F77=/opt/packages/nvidia/hpc_sdk//Linux_x86_64/22.9/compilers/bin/nvfortran
SLURM_JOB_CPUS_PER_NODE=5
BASH_ENV=/usr/share/lmod/lmod/init/bash
NVM_IOJS_ORG_MIRROR=https://iojs.org/dist
SLURM_TOPOLOGY_ADDR=v008
__LMOD_REF_COUNT_EMACSLOADPATH=/opt/packages/gcc/v13.2.1-p20240113/b2gpu/share/emacs/site-lisp:1;/opt/packages/gcc/v13.2.1-p20240113/b2rm/share/emacs/site-lisp:1
PROJECT=/ocean/projects/che070006p/gcommuni
_CE_CONDA=
SLURM_WORKING_CLUSTER=bridges2:br003:6814:9728:109
__LMOD_REF_COUNT_LIBRARY_PATH=/opt/packages/gcc/v13.2.1-p20240113/b2gpu/lib64:1;/opt/packages/cuda/v12.4.0/lib64:1;/opt/packages/cuda/v12.4.0/nvvm/lib64:1;/opt/packages/cuda/v12.4.0/extras/CUPTI/lib64:1;/opt/packages/gcc/v13.2.1-p20240113/b2rm/lib64:1
SLURM_STEP_NODELIST=v008
SALLOC_ACCOUNT=che070006p
SLURM_JOB_NAME=Interact
SLURM_SRUN_COMM_PORT=33595
TMPDIR=/var/tmp
LIBRARY_PATH=/opt/packages/gcc/v13.2.1-p20240113/b2gpu/lib64:/opt/packages/cuda/v12.4.0/lib64:/opt/packages/cuda/v12.4.0/nvvm/lib64:/opt/packages/cuda/v12.4.0/extras/CUPTI/lib64:/opt/packages/gcc/v13.2.1-p20240113/b2rm/lib64
SLURM_JOB_GPUS=6
SALLOC_JOB_NUM_NODES=1
__LMOD_REF_COUNT_CMAKE_PREFIX_PATH=/opt/packages/gcc/v13.2.1-p20240113/b2gpu:1;/opt/packages/gcc/v13.2.1-p20240113/b2rm:1;/opt/packages/anaconda3-2022.10:1
LMOD_sys=Linux
SLURM_JOBID=25100639
_ModuleTable001_=X01vZHVsZVRhYmxlXz17WyJNVHZlcnNpb24iXT0zLFsiY19yZWJ1aWxkVGltZSJdPWZhbHNlLFsiY19zaG9ydFRpbWUiXT1mYWxzZSxkZXB0aFQ9e30sZmFtaWx5PXt9LG1UPXthbGxvY2F0aW9ucz17WyJmbiJdPSIvb3B0L21vZHVsZWZpbGVzL3Byb2R1Y3Rpb24vYWxsb2NhdGlvbnMvMS4wLmx1YSIsWyJmdWxsTmFtZSJdPSJhbGxvY2F0aW9ucy8xLjAiLFsibG9hZE9yZGVyIl09MSxwcm9wVD17fSxbInN0YWNrRGVwdGgiXT0wLFsic3RhdHVzIl09ImFjdGl2ZSIsWyJ1c2VyTmFtZSJdPSJhbGxvY2F0aW9ucyIsfSxhbmFjb25kYTM9e1siZm4iXT0iL29wdC9tb2R1bGVmaWxlcy9wcm9kdWN0aW9uL2FuYWNvbmRhMy8yMDIyLjEwLmx1YSIsWyJmdWxsTmFtZSJdPSJhbmFjb25k
SLURM_CONF=/var/spool/slurm/d/conf-cache/slurm.conf
LOADEDMODULES=allocations/1.0:psc.allocations.user/1.0:anaconda3/2022.10:cuda/12.4.0:gcc/13.2.1-p20240113:openmpi/4.0.5-nvhpc22.9
FC=/opt/packages/nvidia/hpc_sdk//Linux_x86_64/22.9/compilers/bin/nvfortran
__LMOD_REF_COUNT_ACLOCAL_PATH=/opt/packages/anaconda3-2022.10/share/aclocal:1
__LMOD_REF_COUNT_MANPATH=/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/compilers/man:1;/opt/packages/gcc/v13.2.1-p20240113/b2gpu/share/man:1;/opt/packages/cuda/v12.4.0/gds/man:1;/opt/packages/gcc/v13.2.1-p20240113/b2rm/share/man:1;/usr/share/lmod/lmod/share/man:1;/opt/puppetlabs/puppet/share/man:1;/opt/packages/anaconda3-2022.10/man:1;/opt/packages/anaconda3-2022.10/share/man:1
_ModuleTable003_=b2FkT3JkZXIiXT01LHByb3BUPXt9LFsic3RhY2tEZXB0aCJdPTAsWyJzdGF0dXMiXT0iYWN0aXZlIixbInVzZXJOYW1lIl09ImdjYyIsfSxvcGVubXBpPXtbImZuIl09Ii9vcHQvbW9kdWxlZmlsZXMvcHJvZHVjdGlvbi9vcGVubXBpLzQuMC41LW52aHBjMjIuOSIsWyJmdWxsTmFtZSJdPSJvcGVubXBpLzQuMC41LW52aHBjMjIuOSIsWyJsb2FkT3JkZXIiXT02LHByb3BUPXt9LFsic3RhY2tEZXB0aCJdPTAsWyJzdGF0dXMiXT0iYWN0aXZlIixbInVzZXJOYW1lIl09Im9wZW5tcGkvNC4wLjUtbnZocGMyMi45Iix9LFsicHNjLmFsbG9jYXRpb25zLnVzZXIiXT17WyJmbiJdPSIvb3B0L21vZHVsZWZpbGVzL3Byb2R1Y3Rpb24vcHNjLmFsbG9jYXRpb25zLnVzZXIvMS4wLmx1YSIs
SLURM_NODE_ALIASES=(null)
LMOD_ROOT=/usr/share/lmod
SLURM_JOB_QOS=gpuinteract
SLURM_TOPOLOGY_ADDR_PATTERN=node
CONDA_PROMPT_MODIFIER=(base)
__LMOD_Priority_EMACSLOADPATH= :100
SSH_TTY=/dev/pts/6
LOCPATH=/opt/packages/gcc/v13.2.1-p20240113/b2gpu/share/locale:/opt/packages/gcc/v13.2.1-p20240113/b2rm/share/locale
OMPI_MCA_pml=^ucx
SUDO_PROMPT=Password for %u@psc.edu:
MAIL=/var/spool/mail/gcommuni
ZE_AFFINITY_MASK=0
SLURM_CPUS_ON_NODE=5
CXX=/opt/packages/nvidia/hpc_sdk//Linux_x86_64/22.9/compilers/bin/nvc++
SLURM_JOB_NUM_NODES=1
NVM_RC_VERSION=
SHELL=/bin/bash
TERM=xterm-256color
__LMOD_STACK_CONDA_PKGS_DIRS=JEhPTUUvLmNvbmRhL3BrZ3Mv
OMPI_MCA_opal_warn_on_missing_libcuda=0
SLURM_JOB_UID=22808
_ModuleTable_Sz_=5
__LMOD_REF_COUNT_CPATH=/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/compilers/extras/qd/include/qd:1;/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/comm_libs/nvshmem/include:1;/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/comm_libs/nccl/include:1;/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/comm_libs/mpi/include:1;/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/math_libs/include:1;/opt/packages/gcc/v13.2.1-p20240113/b2gpu/include:1;/opt/packages/cuda/v12.4.0/include:1;/opt/packages/gcc/v13.2.1-p20240113/b2rm/include:1
SLURM_JOB_PARTITION=GPU-shared
SLURM_SCRIPT_CONTEXT=prolog_task
SLURM_PTY_WIN_ROW=32
NVHPC=/opt/packages/nvidia/hpc_sdk/
TMOUT=1800
NVM_NODEJS_ORG_MIRROR=https://nodejs.org/dist
SLURM_JOB_USER=gcommuni
CUDA_VISIBLE_DEVICES=0
SLURM_PTY_WIN_COL=156
SHLVL=4
SLURM_SUBMIT_HOST=br011.ib.bridges2.psc.edu
PYTHONPATH=/opt/packages/gcc/v13.2.1-p20240113/b2gpu/share/gcc-13.2.1/python:/opt/packages/gcc/v13.2.1-p20240113/b2rm/share/gcc-13.2.1/python::/jet/home/gcommuni/apps/SMILES_Vikrant/pyQRC
SLURM_JOB_ACCOUNT=che070006p
ACLOCAL_PATH=/opt/packages/anaconda3-2022.10/share/aclocal
MANPATH=/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/compilers/man:/opt/packages/gcc/v13.2.1-p20240113/b2gpu/share/man:/opt/packages/cuda/v12.4.0/gds/man:/opt/packages/gcc/v13.2.1-p20240113/b2rm/share/man:/usr/share/lmod/lmod/share/man::/opt/puppetlabs/puppet/share/man:/opt/packages/anaconda3-2022.10/man:/opt/packages/anaconda3-2022.10/share/man
SLURM_EXPORT_ENV=ALL
SLURM_STEP_LAUNCHER_PORT=33595
F90=/opt/packages/nvidia/hpc_sdk//Linux_x86_64/22.9/compilers/bin/nvfortran
MODULEPATH=/etc/scl/modulefiles:/opt/modulefiles/production:/opt/modulefiles/preproduction:/opt/modulefiles/deprecated:/usr/share/modulefiles:/usr/share/modulefiles/Linux:/usr/share/modulefiles/Core:/usr/share/lmod/lmod/modulefiles/Core
GDK_BACKEND=x11
SLURM_PTY_PORT=41351
SLURM_GTIDS=0
LOGNAME=gcommuni
DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-KSFLM9s3js,guid=297b251e3e95e55f8929f84666c1cd62
XDG_RUNTIME_DIR=/run/user/22808
CPLUS_INCLUDE_PATH=/opt/packages/anaconda3-2022.10/include
MODULEPATH_ROOT=/usr/share/modulefiles
PATH=/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/compilers/extras/qd/bin:/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/comm_libs/mpi/bin:/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/compilers/bin:/opt/packages/nvidia/hpc_sdk/Linux_x86_64/22.9/cuda/bin:/opt/packages/gcc/v13.2.1-p20240113/b2gpu/bin:/opt/packages/cuda/v12.4.0/bin:/opt/packages/cuda/v12.4.0/libnvvp:/ocean/projects/che070006p/gcommuni/AI-Proteins/alphafoldenv/bin:/ocean/projects/che070006p/gcommuni/AI-Proteins/bin:/opt/packages/anaconda3-2022.10/condabin:/jet/home/gcommuni/apps/osra-2.1.3/bin:/jet/home/gcommuni/apps/GraphicsMagick-1.3.42/bin:/jet/home/gcommuni/apps/lzip-1.15/bin:/jet/home/gcommuni/.local/bin:/jet/home/gcommuni/bin:/jet/home/gcommuni/apps/firefox:/jet/home/gcommuni/apps/opt/google/chrome:/jet/home/gcommuni/apps:/opt/packages/psc.allocations.user/bin:/opt/packages/allocations/bin:/opt/packages/gcc/v13.2.1-p20240113/b2rm/bin:/opt/packages/anaconda3-2022.10/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/packages/interact/bin:/opt/puppetlabs/bin
SLURM_JOB_ID=25100639
_LMFILES_=/opt/modulefiles/production/allocations/1.0.lua:/opt/modulefiles/production/psc.allocations.user/1.0.lua:/opt/modulefiles/production/anaconda3/2022.10.lua:/opt/modulefiles/production/cuda/12.4.0.lua:/opt/modulefiles/production/gcc/13.2.1-p20240113.lua:/opt/modulefiles/production/openmpi/4.0.5-nvhpc22.9
PS1=(base) (alphafoldenv) [\u@\h \W]\$
SLURM_STEP_NUM_TASKS=1
MODULESHOME=/usr/share/lmod/lmod
LMOD_SETTARG_FULL_SUPPORT=no
PKG_CONFIG_PATH=/opt/packages/gcc/v13.2.1-p20240113/b2gpu/lib64/pkgconfig:/opt/packages/cuda/v12.4.0/lib64/pkgconfig:/opt/packages/gcc/v13.2.1-p20240113/b2rm/lib64/pkgconfig:/opt/packages/anaconda3-2022.10/lib/pkgconfig
CONDA_DEFAULT_ENV=base
INFOPATH=/opt/packages/gcc/v13.2.1-p20240113/b2gpu/share/info:/opt/packages/gcc/v13.2.1-p20240113/b2rm/share/info
SALLOC_TIMELIMIT=60
HISTSIZE=1000
SLURM_JOB_CPUS_PER_NODE_PACK_GROUP_0=5
LMOD_PKG=/usr/share/lmod/lmod
SBATCH_EXPORT=NONE
SLURM_STEP_NUM_NODES=1
CPP=cpp
__LMOD_REF_COUNT_LOCPATH=/opt/packages/gcc/v13.2.1-p20240113/b2gpu/share/locale:1;/opt/packages/gcc/v13.2.1-p20240113/b2rm/share/locale:1
XML_CATALOG_FILES=file:///ocean/projects/che070006p/gcommuni/AI-Proteins/etc/xml/catalog file:///etc/xml/catalog
LMOD_CMD=/usr/share/lmod/lmod/libexec/lmod
_ModuleTable005_=cy9Db3JlIix9LFsic3lzdGVtQmFzZU1QQVRIIl09Ii9ldGMvc2NsL21vZHVsZWZpbGVzOi9vcHQvbW9kdWxlZmlsZXMvcHJvZHVjdGlvbjovb3B0L21vZHVsZWZpbGVzL3ByZXByb2R1Y3Rpb246L29wdC9tb2R1bGVmaWxlcy9kZXByZWNhdGVkOi91c3Ivc2hhcmUvbW9kdWxlZmlsZXM6L3Vzci9zaGFyZS9tb2R1bGVmaWxlcy9MaW51eDovdXNyL3NoYXJlL21vZHVsZWZpbGVzL0NvcmU6L3Vzci9zaGFyZS9sbW9kL2xtb2QvbW9kdWxlZmlsZXMvQ29yZSIsfQ==
SLURM_LOCALID=0
SALLOC_JOB_NAME=Interact
GPU_DEVICE_ORDINAL=0
SLURM_WHOLE=1
LESSOPEN=||/usr/bin/lesspipe.sh %s
LMOD_DIR=/usr/share/lmod/lmod/libexec
BASH_FUNC_which%%=() { ( alias;
eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot $@
}
BASH_FUNC_module%%=() { eval $($LMOD_CMD bash "$@") && eval $(${LMOD_SETTARG_CMD:-:} -s sh);
ret=$?;
/usr/share/lmod/8.2.7/libexec/log_modules "$@";
return $ret
}
BASH_FUNC_scl%%=() { if [ "$1" = "load" -o "$1" = "unload" ]; then
eval "module $@";
else
/usr/bin/scl "$@";
fi
}
BASH_FUNC_ml%%=() { eval $($LMOD_DIR/ml_cmd "$@")
}
_=/usr/bin/env
### Versions
python3 collect_env.py
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux release 8.8 (Ootpa) (x86_64)
GCC version: (GCC) 13.2.1 20240113
Clang version: 15.0.7 (Red Hat 15.0.7-1.module+el8.8.0+17939+b58878af)
CMake version: version 3.20.2
Libc version: glibc-2.28
Python version: 3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-477.27.1.el8_8.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla V100-SXM2-32GB
Nvidia driver version: 545.23.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 40
On-line CPU(s) list: 0-39
Thread(s) per core: 1
Core(s) per socket: 20
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 3200.000
BogoMIPS: 5000.00
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 28160K
NUMA node0 CPU(s): 0-19
NUMA node1 CPU(s): 20-39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] optree==0.12.1
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] cudatoolkit 11.8.0 h4ba93d1_13 conda-forge
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] numpy 1.26.4 py310hb13e2d6_0 conda-forge
[conda] torch 2.4.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @malfet @seemethere @ptrblck @msaroufim @ezyang @chauhang @penguinwu | needs reproduction,module: build,module: cuda,triaged | low | Critical |
2,471,940,217 | excalidraw | Auto Scroll for handwriting quickly | https://twitter.com/hata6502/status/1604471663462330368
[](https://gyazo.com/571b5e987c6c7cdb100c9b3007701ee3)
Demo app built from scratch (touch devices only)
https://hata6502.github.io/handled/
When using a tablet for **study purposes**, we want to take handwritten notes as quickly as **with pen and paper**.
However, given a tablet's limited width, alternating between writing and manually scrolling is cumbersome.
Scrolling automatically and smoothly while handwriting makes note-taking a little easier.
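The desired behavior can be sketched as a small function that ramps the scroll speed up as the pen approaches the screen edge (a hypothetical sketch, not Excalidraw's actual API; all names and defaults are assumptions):

```javascript
// Compute how fast the canvas should auto-scroll, given the pen's x
// position. `width` is the visible canvas width, `threshold` the edge
// zone (px) that triggers scrolling, `maxSpeed` the scroll speed
// (px per frame) when the pen is at the very edge.
function autoScrollSpeed(x, width, threshold = 100, maxSpeed = 20) {
  const distanceFromEdge = width - x;
  if (distanceFromEdge >= threshold) return 0; // pen is far from the edge
  const t = 1 - Math.max(distanceFromEdge, 0) / threshold; // 0..1 ramp
  return Math.round(maxSpeed * t); // speed grows toward the edge
}

console.log(autoScrollSpeed(950, 1000)); // halfway into the edge zone → 10
```

A pointer-move handler could call this each frame and add the result to the scroll offset, so the page drifts left smoothly while writing instead of requiring a manual swipe.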
| UX/UI,tablet | low | Minor |
2,471,962,620 | godot | All BoneAttachment3D stays in Skeleton3D's origin until pose is updated | ### Tested versions
- Reproducible on v4.3.stable.official [77dcf97d8]
### System information
Win11 - v4.3.stable.official [77dcf97d8] - Renderer independent
### Issue description
BoneAttachment3D does not initially update its pose until the very first bone pose update, and stays at the `Skeleton3D`'s origin until then. However, the `.glb` preview shows it correctly; refer to the project & demonstration video in the MRP below.
### Steps to reproduce
Either:
- Load a glTF exported from Blender that has an armature and a bone-parented object
- Add a BoneAttachment3D at runtime
### Minimal reproduction project (MRP)
[armaturetest.zip](https://github.com/user-attachments/files/16648655/armaturetest.zip)
Demonstration: https://www.youtube.com/watch?v=yAywnXm9cEw
| bug,topic:animation | low | Minor |
2,471,962,684 | ui | [bug]: Select Component Value Not Persisting After Page Reload | ### Describe the bug
The Select component is not maintaining its selected value after a page reload, even when the value is being saved to and retrieved from localStorage.
### Affected component/components
Select
### How to reproduce
1- Create a new component using the Select component from shadcn/ui.
2- Implement localStorage to save and retrieve the selected value.
3- Select a value from the dropdown.
4- Reload the page.
**Expected Behavior:**
The Select component should display the previously selected value after page reload.
**Actual Behavior:**
The Select component resets to its default state (showing the placeholder) after page reload, even though the correct value is being retrieved from localStorage.
Code Example:
```js
"use client";
import { useEffect, useState } from "react";
import {
Select,
SelectContent,
SelectItem,
SelectTrigger,
SelectValue,
} from "@/components/ui/select";
import { Button } from "@/components/ui/button";
const TestComp = () => {
const [mode, setMode] = useState<string | undefined>(undefined);
useEffect(() => {
const keyMode = localStorage.getItem("keyMode");
if (keyMode) {
setMode(keyMode);
}
}, []);
useEffect(() => {
if (mode !== undefined) {
localStorage.setItem("keyMode", mode);
}
}, [mode]);
  return (
    <div>
      <form>
        <Select value={mode} onValueChange={(value) => setMode(value)}>
          <SelectTrigger className="w-[180px]">
            <SelectValue placeholder="Select mode" />
          </SelectTrigger>
          <SelectContent>
            <SelectItem value="fast">Fast ⚡</SelectItem>
            <SelectItem value="quality">Quality 💎</SelectItem>
          </SelectContent>
        </Select>
        <div>Mode: {mode || "Not selected"}</div>
        <Button type="submit">Submit</Button>
      </form>
    </div>
  );
};
export default TestComp;
```
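The localStorage round-trip itself can be checked in isolation with an in-memory stand-in (a hypothetical sketch; it only shows that the stored string survives a simulated reload, i.e. the bug is in how the restored value reaches the Select, not in storage):

```javascript
// In-memory stand-in for window.localStorage (string-only, like the real API).
const storage = new Map();
const localStorageStub = {
  getItem: (k) => (storage.has(k) ? storage.get(k) : null),
  setItem: (k, v) => storage.set(k, String(v)),
};

// First session: the second useEffect persists the user's choice.
localStorageStub.setItem("keyMode", "quality");

// Simulated reload: the mount effect reads the key back.
const restored = localStorageStub.getItem("keyMode");
console.log(restored); // "quality" — the value survives the "reload"
```

Since the value does come back, the report's claim holds: the state is restored correctly, but the Select still renders its placeholder.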
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Next.js version: 14.2.3
React version: 18
Browser: Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,471,965,624 | godot | Visual shader curve node displays different result while dragging | ### Tested versions
4.3
### System information
Godot v4.3.stable - Nobara Linux 40 (GNOME Edition) - Wayland - Vulkan (Mobile) - dedicated NVIDIA GeForce RTX 3050 Ti Laptop GPU - 11th Gen Intel(R) Core(TM) i7-11370H @ 3.30GHz (8 Threads)
### Issue description
I've tested this in various projects in both Linux and Windows, and I see similar behavior. While you are dragging in the curve node, a different result is displayed compared to when you finally release the mouse button/trackpad.
https://github.com/user-attachments/assets/33565fdb-0d96-451c-ac6d-18176590a982
https://github.com/user-attachments/assets/69e41896-ea93-40bf-89ef-0bf7b66d28b7
I hope it's obvious that the 3D viewport looks quite different depending on when I'm moving the mouse, vs when I'm releasing the mouse.
Also, the first time I drag the curve point into the corner I get the weird light colors, but the second time I drag it into the corner I get the "good" colors, so that's a workaround (though it shouldn't be necessary).
### Steps to reproduce
1. Open the example project
2. Drag around a bit in the graph node and observe the behavior
### Minimal reproduction project (MRP)
[CurveTest.zip](https://github.com/user-attachments/files/16648809/CurveTest.zip)
| bug,topic:gui,topic:2d | low | Minor |
2,471,973,988 | rust | `-Zthreads` is slower the more cores I add | In my crates I haven't actually gotten a speedup from more threads. In fact, quite the opposite!
The more threads I add, the slower compilation becomes, with `-Zthreads=1` being the fastest.
```
rustc --version
rustc 1.82.0-nightly (91376f416 2024-08-12)
```
| WG-compiler-performance,S-needs-repro | low | Major |
2,471,980,499 | rust | `std::fs::canonicalize` failed to process path mounted with cppcryptfs/dokan under windows | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
```rust
use std::fs::canonicalize;
pub fn main() {
println!("{}", canonicalize("F:\\gofs").unwrap().display());
// Where F:\gofs is a path mounted with cppcryptfs which use Dokan library.
}
```
I expected to see this happen:
It should print `\\?\F:\gofs`.
Instead, this happened:
```
thread 'main' panicked at src/main.rs:5:45:
called `Result::unwrap()` on an `Err` value: Os { code: 2, kind: NotFound, message: "系统找不到指定的文件。" }
```
(The error message is Windows error 2: "The system cannot find the file specified.")
I have double-checked that `F:\gofs` exists and is accessible.
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.80.1 (3f5fd8dd4 2024-08-06)
binary: rustc
commit-hash: 3f5fd8dd41153bc5fdca9427e9e05be2c767ba23
commit-date: 2024-08-06
host: x86_64-pc-windows-msvc
release: 1.80.1
LLVM version: 18.1.7
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary>Backtrace</summary>
<p>
```
stack backtrace:
0: std::panicking::begin_panic_handler
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library\std\src\panicking.rs:652
1: core::panicking::panic_fmt
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library\core\src\panicking.rs:72
2: core::result::unwrap_failed
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23/library\core\src\result.rs:1679
3: enum2$<core::result::Result<std::path::PathBuf,std::io::error::Error> >::unwrap
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23\library\core\src\result.rs:1102
4: test_dunce::main
at .\src\main.rs:5
5: core::ops::function::FnOnce::call_once<void (*)(),tuple$<> >
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23\library\core\src\ops\function.rs:250
6: core::hint::black_box
at /rustc/3f5fd8dd41153bc5fdca9427e9e05be2c767ba23\library\core\src\hint.rs:338
```
</p>
</details>
| O-windows,C-bug,T-libs,A-io | low | Critical |
2,471,999,115 | stable-diffusion-webui | [Bug]: ImportError: DLL load failed while importing onnx_cpp2py_export | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [x] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
```
venv "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing requirements
CUDA 12.1
Launching Web UI with arguments: --xformers --api
CHv1.8.11: Get Custom Model Folder
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.8.0, num models: 10
ControlNet preprocessor location: C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2024-08-18 11:21:27,174 - ControlNet - INFO - ControlNet v1.1.455
*** Error loading script: console_log_patch.py
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\console_log_patch.py", line 4, in <module>
import insightface
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\__init__.py", line 16, in <module>
from . import model_zoo
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\__init__.py", line 1, in <module>
from .model_zoo import get_model
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 11, in <module>
from .arcface_onnx import *
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\arcface_onnx.py", line 10, in <module>
import onnx
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\onnx\__init__.py", line 77, in <module>
from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: Error en una rutina de inicialización de biblioteca de vínculos dinámicos (DLL).
---
*** (The same "ImportError: DLL load failed while importing onnx_cpp2py_export" traceback, whose Spanish message translates to "a dynamic link library (DLL) initialization routine failed", repeats for reactor_api.py, reactor_faceswap.py, reactor_helpers.py, reactor_logger.py, reactor_swapper.py, reactor_version.py, and reactor_xyz.py)
---
Loading weights [a3f5346925] from C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\models\Stable-diffusion\1.5\Pony\fastPhotoPony_v40WithT5xxl_2.safetensors
CHv1.8.11: Set Proxy:
2024-08-18 11:21:49,221 - ControlNet - INFO - ControlNet UI callback registered.
Creating model from config: C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:1150: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Startup time: 43.4s (prepare environment: 9.7s, import torch: 5.0s, import gradio: 0.8s, setup paths: 1.2s, initialize shared: 0.3s, other imports: 0.5s, load scripts: 23.8s, create ui: 0.7s, gradio launch: 0.5s, add APIs: 0.6s).
Loading VAE weights specified in settings: C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: xformers... done.
Model loaded in 7.0s (load weights from disk: 0.8s, create model: 0.5s, apply weights to model: 5.1s, load VAE: 0.1s, calculate empty prompt: 0.2s).
```
### Steps to reproduce the problem
Start the app.
### What should have happened?
No errors should be shown on startup.
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
[sysinfo-2024-08-18-14-23.json](https://github.com/user-attachments/files/16649048/sysinfo-2024-08-18-14-23.json)
### Console logs
```Shell
venv "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing requirements
CUDA 12.1
Launching Web UI with arguments: --xformers --api
CHv1.8.11: Get Custom Model Folder
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.8.0, num models: 10
ControlNet preprocessor location: C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2024-08-18 11:21:27,174 - ControlNet - INFO - ControlNet v1.1.455
*** Error loading script: console_log_patch.py
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\console_log_patch.py", line 4, in <module>
import insightface
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\__init__.py", line 16, in <module>
from . import model_zoo
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\__init__.py", line 1, in <module>
from .model_zoo import get_model
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 11, in <module>
from .arcface_onnx import *
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\arcface_onnx.py", line 10, in <module>
import onnx
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\onnx\__init__.py", line 77, in <module>
from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: Error en una rutina de inicialización de biblioteca de vínculos dinámicos (DLL).
---
*** Error loading script: reactor_api.py
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_api.py", line 28, in <module>
from scripts.reactor_swapper import EnhancementOptions, swap_face, DetectionOptions
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 11, in <module>
import insightface
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\__init__.py", line 16, in <module>
from . import model_zoo
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\__init__.py", line 1, in <module>
from .model_zoo import get_model
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 11, in <module>
from .arcface_onnx import *
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\arcface_onnx.py", line 10, in <module>
import onnx
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\onnx\__init__.py", line 77, in <module>
from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: Error en una rutina de inicialización de biblioteca de vínculos dinámicos (DLL).
---
*** Error loading script: reactor_faceswap.py
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_faceswap.py", line 18, in <module>
from reactor_ui import (
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\reactor_ui\__init__.py", line 2, in <module>
import reactor_ui.reactor_tools_ui as ui_tools
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\reactor_ui\reactor_tools_ui.py", line 2, in <module>
from scripts.reactor_swapper import build_face_model, blend_faces
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 11, in <module>
import insightface
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\__init__.py", line 16, in <module>
from . import model_zoo
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\__init__.py", line 1, in <module>
from .model_zoo import get_model
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 11, in <module>
from .arcface_onnx import *
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\arcface_onnx.py", line 10, in <module>
import onnx
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\onnx\__init__.py", line 77, in <module>
from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: Error en una rutina de inicialización de biblioteca de vínculos dinámicos (DLL).
---
*** Error loading script: reactor_helpers.py
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_helpers.py", line 10, in <module>
from insightface.app.common import Face
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\__init__.py", line 16, in <module>
from . import model_zoo
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\__init__.py", line 1, in <module>
from .model_zoo import get_model
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 11, in <module>
from .arcface_onnx import *
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\arcface_onnx.py", line 10, in <module>
import onnx
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\onnx\__init__.py", line 77, in <module>
from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: Error en una rutina de inicialización de biblioteca de vínculos dinámicos (DLL).
---
*** Error loading script: reactor_logger.py
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_logger.py", line 7, in <module>
from scripts.reactor_helpers import addLoggingLevel
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_helpers.py", line 10, in <module>
from insightface.app.common import Face
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\__init__.py", line 16, in <module>
from . import model_zoo
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\__init__.py", line 1, in <module>
from .model_zoo import get_model
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 11, in <module>
from .arcface_onnx import *
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\arcface_onnx.py", line 10, in <module>
import onnx
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\onnx\__init__.py", line 77, in <module>
from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: Error en una rutina de inicialización de biblioteca de vínculos dinámicos (DLL).
---
*** Error loading script: reactor_swapper.py
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_swapper.py", line 11, in <module>
import insightface
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\__init__.py", line 16, in <module>
from . import model_zoo
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\__init__.py", line 1, in <module>
from .model_zoo import get_model
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 11, in <module>
from .arcface_onnx import *
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\arcface_onnx.py", line 10, in <module>
import onnx
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\onnx\__init__.py", line 77, in <module>
from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: Error en una rutina de inicialización de biblioteca de vínculos dinámicos (DLL).
---
*** Error loading script: reactor_version.py
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_version.py", line 4, in <module>
from scripts.reactor_logger import logger, get_Run, set_Run
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_logger.py", line 7, in <module>
from scripts.reactor_helpers import addLoggingLevel
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_helpers.py", line 10, in <module>
from insightface.app.common import Face
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\__init__.py", line 16, in <module>
from . import model_zoo
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\__init__.py", line 1, in <module>
from .model_zoo import get_model
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 11, in <module>
from .arcface_onnx import *
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\arcface_onnx.py", line 10, in <module>
import onnx
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\onnx\__init__.py", line 77, in <module>
from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: Error en una rutina de inicialización de biblioteca de vínculos dinámicos (DLL).
---
*** Error loading script: reactor_xyz.py
Traceback (most recent call last):
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\modules\script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_xyz.py", line 8, in <module>
from scripts.reactor_helpers import (
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\extensions\sd-webui-reactor\scripts\reactor_helpers.py", line 10, in <module>
from insightface.app.common import Face
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\__init__.py", line 16, in <module>
from . import model_zoo
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\__init__.py", line 1, in <module>
from .model_zoo import get_model
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 11, in <module>
from .arcface_onnx import *
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\insightface\model_zoo\arcface_onnx.py", line 10, in <module>
import onnx
File "C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\onnx\__init__.py", line 77, in <module>
from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: DLL load failed while importing onnx_cpp2py_export: Error en una rutina de inicialización de biblioteca de vínculos dinámicos (DLL).
---
Loading weights [a3f5346925] from C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\models\Stable-diffusion\1.5\Pony\fastPhotoPony_v40WithT5xxl_2.safetensors
CHv1.8.11: Set Proxy:
2024-08-18 11:21:49,221 - ControlNet - INFO - ControlNet UI callback registered.
Creating model from config: C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:1150: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Startup time: 43.4s (prepare environment: 9.7s, import torch: 5.0s, import gradio: 0.8s, setup paths: 1.2s, initialize shared: 0.3s, other imports: 0.5s, load scripts: 23.8s, create ui: 0.7s, gradio launch: 0.5s, add APIs: 0.6s).
Loading VAE weights specified in settings: C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: xformers... done.
Model loaded in 7.0s (load weights from disk: 0.8s, create model: 0.5s, apply weights to model: 5.1s, load VAE: 0.1s, calculate empty prompt: 0.2s).
```
### Additional information
(venv) C:\Users\ZeroCool22\Desktop\AutoSDXL\stable-diffusion-webui\venv\Scripts>pip show onnx
Name: onnx
Version: 1.16.2
Summary: Open Neural Network Exchange
Home-page: https://onnx.ai/
Author:
Author-email: ONNX Contributors <onnx-technical-discuss@lists.lfaidata.foundation>
License: Apache License v2.0
Location: c:\users\zerocool22\desktop\autosdxl\stable-diffusion-webui\venv\lib\site-packages
Requires: numpy, protobuf
Required-by: insightface
C:\Users\ZeroCool22>nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Fri_Jun_14_16:44:19_Pacific_Daylight_Time_2024
Cuda compilation tools, release 12.6, V12.6.20
Build cuda_12.6.r12.6/compiler.34431801_0
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.81 Driver Version: 560.81 CUDA Version: 12.6 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Driver-Model | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 4070 ... WDDM | 00000000:06:00.0 On | N/A |
| 40% 25C P8 8W / 285W | 7822MiB / 16376MiB | 2% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+ | bug-report | low | Critical |
2,472,002,593 | godot | Cannot download export template from macOS editor | ### Tested versions
Godot 4.3 stable
### System information
Godot v4.3.stable - macOS 14.6.1 - Vulkan (Mobile) - integrated Apple M1 - Apple M1 (8 Threads)
### Issue description
I am trying to download the Android build template for Godot but get an error saying "Can't connect to the mirror".
### Steps to reproduce
Create a Godot Project. Click Project => Install Android Build Template => Click Manage Templates => Click Download & Install
### Minimal reproduction project (MRP)
https://github.com/user-attachments/assets/f4eea114-d009-4d11-98b4-1348cb844ddf
| bug,platform:macos,needs testing,topic:network | low | Critical |
2,472,013,216 | langchain | VectorStoreRetriever can't correctly handle keyword parameters from invoke() | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
from typing import TYPE_CHECKING
from uuid import uuid4
from dotenv import load_dotenv
from langchain_chroma import Chroma
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
if TYPE_CHECKING:
from langchain_core.vectorstores.base import VectorStoreRetriever
load_dotenv()
def init_db(vs: Chroma) -> None:
document_1 = Document(
page_content="I had chocalate chip pancakes and scrambled eggs for breakfast this morning.",
metadata={"source": "tweet"},
id=1,
)
document_2 = Document(
page_content="The weather forecast for tomorrow is cloudy and overcast, with a high of 62 degrees.",
metadata={"source": "news"},
id=2,
)
documents = [document_1, document_2]
uuids = [str(uuid4()) for _ in range(len(documents))]
vs.add_documents(documents=documents, ids=uuids)
def main(*, first_run_init_db: bool = False) -> None:
vector_store = Chroma(
collection_name="example_collection",
embedding_function=OpenAIEmbeddings(model="text-embedding-3-small"),
persist_directory="./chroma_langchain_db",
)
if first_run_init_db:
init_db(vector_store)
retriever: VectorStoreRetriever = vector_store.as_retriever(
search_type="mmr", search_kwargs={"k": 1, "fetch_k": 2}
)
print(
retriever.invoke("Stealing from the bank is a crime", filter={"source": "news"})
)
# output:
# [Document(metadata={'source': 'tweet'}, page_content='I had chocalate chip pancakes and scrambled eggs for breakfast this morning.')]
# The filter didn't perform as expected.
if __name__ == "__main__":
main(first_run_init_db=True)
```
### Error Message and Stack Trace (if applicable)
stdout:
```shell
[Document(metadata={'source': 'tweet'}, page_content='I had chocalate chip pancakes and scrambled eggs for breakfast this morning.')]
```
### Description
This code example follows the [Chroma documentation](https://python.langchain.com/v0.2/docs/integrations/vectorstores/chroma/#query-by-turning-into-retriever) and keeps only two of its documents.
Surprisingly, `filter={"source": "news"}` did not work as expected, and I received results outside of that filter.
I read the source code and found that the docstring of `invoke` indeed allows this behavior (parameters can be passed through to the retriever from `invoke`). ⬇️

The function indeed passes `**kwargs` on to `_get_relevant_documents`.
However, `_get_relevant_documents` does not accept `**kwargs`. ⬇️

The actual implementation of `_get_relevant_documents` in the `VectorStoreRetriever` doesn't accept `**kwargs`, which ultimately leads to the unexpected results.
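To illustrate the pitfall, here is a minimal, self-contained sketch (hypothetical names, **not** LangChain's actual implementation) of how a wrapper that accepts `**kwargs` but never forwards them makes a per-call `filter` silently ineffective:

```python
# Hypothetical sketch of the pitfall described above -- NOT LangChain's
# actual code. A wrapper accepts **kwargs but never forwards them, so a
# per-call filter silently has no effect.

def _get_relevant_documents(query, *, search_kwargs):
    # Only the kwargs fixed at as_retriever() time ever reach the search.
    return {"query": query, **search_kwargs}

def invoke(query, search_kwargs, **kwargs):
    # kwargs such as filter={"source": "news"} are accepted here
    # but dropped: they are not passed through to the search below.
    return _get_relevant_documents(query, search_kwargs=search_kwargs)

result = invoke("Stealing from the bank is a crime", {"k": 1},
                filter={"source": "news"})
print(result)  # {'query': 'Stealing from the bank is a crime', 'k': 1}
```

The per-call `filter` disappears exactly as in the report: no error is raised, the query just runs without it.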
### System Info
langchain==0.2.14
langchain-core==0.2.33
platform==windows
python-version==3.12.5 | Ɑ: vector store,🤖:bug,🔌: chroma,investigate,Ɑ: retriever,todo | low | Critical |
2,472,016,130 | pytorch | torch._dynamo.utils.istype type inference is broken | ### 🐛 Describe the bug
```
@overload
def istype(
obj: object, allowed_types: Type[T]
) -> TypeIs[T]: ...
@overload
def istype(
obj: object, allowed_types: Tuple[Type[Unpack[Ts]]]
) -> TypeIs[Union[Unpack[Ts]]]: ...
```
It should look more like the version above; currently, the types are not being narrowed correctly.
Every call resolves to the non-type-guarded overload (the one that takes an iterable of types and returns a plain `bool`), which unfortunately breaks narrowing everywhere.
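For context, here is a simplified runtime stand-in for `istype` (an assumption based on its documented behavior: an *exact* type match, unlike `isinstance`, which accepts subclasses) — this is the semantics the `TypeIs` overloads above are meant to describe:

```python
# Simplified stand-in for torch._dynamo.utils.istype (assumption: the real
# function performs an *exact* type match, with no subclass allowance).
def istype(obj, allowed_types):
    if isinstance(allowed_types, (tuple, list, set)):
        return type(obj) in allowed_types
    return type(obj) is allowed_types

class MyInt(int):
    pass

print(istype(1, int))             # True
print(istype(MyInt(1), int))      # False: subclasses are rejected...
print(isinstance(MyInt(1), int))  # ...unlike isinstance, which says True
print(istype(1, (int, str)))      # True: tuple form checks membership
```

Because the check is exact, a correct annotation needs `TypeIs` (not `TypeGuard` alone, and not a bare `bool`) so that type checkers can narrow in both the positive and negative branches.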
### Versions
master
cc @ezyang @malfet @xuzhao9 @gramster @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | module: typing,triaged,OSS contribution wanted,oncall: pt2,module: dynamo | low | Critical |
2,472,026,701 | pytorch | Warning when initialize `torch.nn.Linear(0, d)` | ### 🐛 Describe the bug
Sometimes we intend to create a layer like `torch.nn.Linear(0, d)`, which contains nothing but a bias. However, because the weight tensor has shape `(d, 0)` (zero elements), initializing it raises a warning:
```
UserWarning: Initializing zero-element tensors is a no-op
warnings.warn("Initializing zero-element tensors is a no-op")
```
I think `Linear` should check whether `in_features` is zero; if so, the weight tensor should not be created or initialized.
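A torch-free sketch of the proposed guard (hypothetical helper names; in PyTorch the warning is emitted by the init routines called from `Linear.reset_parameters`):

```python
import warnings

def fake_kaiming_uniform_(numel):
    # Stand-in for torch.nn.init.kaiming_uniform_, which warns on
    # zero-element tensors: "Initializing zero-element tensors is a no-op".
    if numel == 0:
        warnings.warn("Initializing zero-element tensors is a no-op")

def reset_parameters(in_features, out_features):
    weight_numel = out_features * in_features
    # Proposed fix: skip the no-op initialization entirely when the
    # weight tensor is empty, instead of emitting a warning.
    if weight_numel > 0:
        fake_kaiming_uniform_(weight_numel)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    reset_parameters(0, 4)  # like torch.nn.Linear(0, 4): weight shape (4, 0)
print(len(caught))  # 0 -- no warning with the guard in place
```

With the guard, `Linear(0, d)` would initialize silently while normal layers are unaffected, since their `weight_numel` is positive.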
### Versions
Collecting environment information...
PyTorch version: 2.3.1
Is debug build: False
CUDA used to build PyTorch: 12.5
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.2
Libc version: glibc-2.40
Python version: 3.12.4 (main, Jun 7 2024, 06:33:07) [GCC 14.1.1 20240522] (64-bit runtime)
Python platform: Linux-6.6.36-1-lts-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 555.58
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i5-12400F
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 5
CPU(s) scaling MHz: 60%
CPU max MHz: 4400.0000
CPU min MHz: 800.0000
BogoMIPS: 4993.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 288 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 7.5 MiB (6 instances)
L3 cache: 18 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.10.1
[pip3] mypy_extensions==1.0.0
[pip3] numpy==2.0.1
[pip3] torch==2.3.1
[conda] Could not collect
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,triaged | low | Critical |
2,472,049,093 | deno | Different behavior for fetch call between Node.js & Deno | Version: Deno 1.45.5
It seems Deno handles the second fetch call differently: the server (the router) considers the user unauthenticated, and the request ends up in a redirect loop.
Code to reproduce (requires a LiveBox router):
```ts
await login()
.then((urn) => getConnectedDevices(urn))
.catch((error) => {
console.error("An error occurred:", error);
});
async function login() {
const bodyParams = new URLSearchParams({
GO: "status.htm",
pws: "md5 password here",
usr: "admin",
ui_pws: "password here",
login: "acceso",
});
return fetch("http://livebox.home" + "/login.cgi", {
method: "POST",
body: bodyParams.toString(),
})
.then((response) => {
console.log("#1", response);
return response.text();
})
.then((responseText) => {
return responseText.split("'")[1];
})
.catch((error) => {
console.error("Login failed:", error);
});
}
async function getConnectedDevices(urn) {
const headers = {
Cookie: `urn=${urn}`,
};
return fetch("http://livebox.home" + "/cgi/cgi_network_connected.js", {
method: "GET",
headers,
})
.then((response) => {
console.log("#2", response);
})
.catch((error) => {
console.error("Failed to get connected devices:", error);
});
}
```
**Deno**
First fetch call:
```
{
body: ReadableStream { locked: false },
bodyUsed: false,
headers: Headers {
"cache-control": "no-cache,no-store,must-revalidate, post-check=0,pre-check=0",
connection: "close",
"content-language": "en",
"content-type": "text/html",
date: "Sun, 18 Aug 2024 16:14:12 GMT",
expires: "0",
pragma: "no-cache",
server: "httpd"
},
ok: true,
redirected: true,
status: 200,
statusText: "OK",
url: "http://livebox.home/status.htm"
}
```
Second fetch call:
> Failed to get connected devices: TypeError: Fetch failed: Maximum number of redirects (20) reached
> at ext:deno_fetch/26_fetch.js:370:25
> at eventLoopTick (ext:core/01_core.js:168:7)
> at async fetch (ext:deno_fetch/26_fetch.js:391:7)
> at async file:///C:/Users/migue/Documents/GitHub/livebox-reporter/test.mjs:1:1
**Node.js**
```
{
status: 200,
statusText: 'Ok',
headers: Headers {
server: 'httpd',
date: 'Sun, 18 Aug 2024 16:51:38 GMT',
pragma: 'no-cache',
'cache-control': 'no-cache,no-store,must-revalidate, post-check=0,pre-check=0',
expires: '0',
'content-type': 'text/html',
'content-language': 'en',
connection: 'close'
},
body: ReadableStream { locked: false, state: 'readable', supportsBYOB: true },
bodyUsed: false,
ok: true,
redirected: true,
type: 'basic',
url: 'http://livebox.home/status.htm'
}
{
status: 200,
statusText: 'Ok',
headers: Headers {
server: 'httpd',
date: 'Sun, 18 Aug 2024 16:51:38 GMT',
pragma: 'no-cache',
'cache-control': 'no-cache,no-store,must-revalidate, post-check=0,pre-check=0',
expires: '0',
'content-type': 'text/javascript',
cookie: 'urn=2f355162420fe882',
'set-cookie': 'PATH=/; HttpOnly;',
'content-language': 'en',
connection: 'close'
},
body: ReadableStream { locked: false, state: 'readable', supportsBYOB: true },
bodyUsed: false,
ok: true,
redirected: false,
type: 'basic',
url: 'http://livebox.home/cgi/cgi_network_connected.js'
}
```
Browser:

Thanks. | needs info | low | Critical |
2,472,049,848 | node | server.listen's bind argument does not accept [::] for ipv6 | ### Version
22, 20
### Platform
```text
Darwin the-beast-kitten.local 23.6.0 Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:46 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6031 arm64
```
### Subsystem
net or dns
### What steps will reproduce the bug?
```js
require('http').createServer((req, res) => {
res.statusCode = 200;
res.end("OK");
}).listen(4001, '[::]');
```
Outputs:
```
node:events:497
throw er; // Unhandled 'error' event
^
Error: getaddrinfo ENOTFOUND [::]
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:109:26)
Emitted 'error' event on Server instance at:
at GetAddrInfoReqWrap.doListen [as callback] (node:net:2132:12)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:109:17) {
errno: -3008,
code: 'ENOTFOUND',
syscall: 'getaddrinfo',
hostname: '[::]'
}
Node.js v20.16.0
```
(also reproducible on 22.2.0, same error)
### How often does it reproduce? Is there a required condition?
Always
### What is the expected behavior? Why is that the expected behavior?
Should listen on all interfaces for ipv6.
### What do you see instead?
Server fails to start with an error.
### Additional information
Using `bind` of `::` works, so the following succeeds:
```
require('http').createServer((req, res) => {
res.statusCode = 200;
res.end("OK");
}).listen(4001, '::');
```
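Until/unless the bracketed form is accepted by `server.listen`, a small normalization shim (hypothetical helper name) lets a single BIND value serve both Node.js and tools that expect the bracketed form:

```javascript
// Hypothetical helper: strip URI-style brackets so server.listen() gets
// the bare IPv6 literal it expects; other values pass through untouched.
function normalizeBindHost(host) {
  const match = /^\[(.*)\]$/.exec(host);
  return match ? match[1] : host;
}

console.log(normalizeBindHost("[::]")); // "::"
console.log(normalizeBindHost("0.0.0.0")); // unchanged
```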
This causes compatibility issues if you have configuration with a BIND parameter that needs to be passed to both Node.js and another process, such as the Ruby on Rails built-in server, which doesn't accept a `bind` of `::` but does accept `[::]`. | dns,net | low | Critical |
2,472,051,768 | tauri | [docs] WebDriver (Selenium and WebdriverIO) e2e testing does not work. | I'm trying to setup end-to-end tests for Tauri. I've followed the [official documentation](https://tauri.app/v1/guides/testing/webdriver/example/selenium) but when I try to start my tests I get the following error:
```
** (pulsar:12894): WARNING **: 06:38:47.744: Disabled hardware acceleration because GTK failed to initialize GL: Unable to create a GL context.
** (pulsar:12894): WARNING **: 06:38:47.751: webkit_settings_set_enable_offline_web_application_cache is deprecated and does nothing.
1) "before all" hook in "{root}"
2) "after all" hook in "{root}"
0 passing (741ms)
2 failing
1) "before all" hook in "{root}":
SessionNotCreatedError: Failed to match capabilities
at Object.throwDecodedError (node_modules/selenium-webdriver/lib/error.js:521:15)
at parseHttpResponse (node_modules/selenium-webdriver/lib/http.js:514:13)
at Executor.execute (node_modules/selenium-webdriver/lib/http.js:446:28)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
2) "after all" hook in "{root}":
TypeError: Cannot read properties of undefined (reading 'quit')
at Context.<anonymous> (test/test.js:47:16)
at process.processImmediate (node:internal/timers:483:21)
```
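For comparison, this is roughly the capabilities payload `tauri-driver` matches against per the v1 docs — shown here as a plain object with a hypothetical binary path; a wrong `browserName` or a bad `application` path can produce exactly this "Failed to match capabilities" error:

```javascript
// Sketch only — the shape of the payload, not a working test harness.
const application = "../../target/release/my-app"; // hypothetical path to the built binary
const capabilities = {
  browserName: "wry",                // tauri-driver advertises the "wry" browser
  "tauri:options": { application },  // path passed through to the app under test
};
console.log(JSON.stringify(capabilities, null, 2));
```

It may also be worth double-checking that `tauri-driver` itself is running and that `application` points at a binary built for the same target.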
The application starts correctly when I launch it manually, but fails without a reason in the Selenium test. Any idea what could be wrong, or how I can get a more descriptive error? Are the docs up-to-date? Migration to v2 is not possible for us at the moment; we just need to get the tests running. Any help is much appreciated. | type: documentation,scope: webdriver | low | Critical |
2,472,082,264 | terminal | Incorrect `DECRQM` for Win32InputMode | ### Windows Terminal version
Commit 9b21b78feea9ff0a0b9ff2c8fa4b4aa5602d3889
### Windows build number
10.0.19045.4780
### Other Software
_No response_
### Steps to reproduce
1. Checkout and build a recent commit of Windows Terminal (I'm assuming it needs to be 450eec48de252a3a8d270bade847755ecbeb5a75 or later).
2. Open a tab with a WSL bash shell.
3. Toggle Win32 input mode with `printf "\e[?9001h"; sleep 1; printf "\e[?9001l"; read`
4. You'll likely see the keyup event for the <kbd>Enter</kbd> key that you just pressed.
5. Wait a second and press <kbd>Enter</kbd> again to return to the prompt.
6. Execute `showkey -a`
7. Try pressing <kbd>Esc</kbd> or <kbd>Alt</kbd>+<kbd>[</kbd>.
### Expected Behavior
You should see the VT codes for those keys like this:
```
^[ 27 0033 0x1b
^[[ 27 0033 0x1b
91 0133 0x5b
```
### Actual Behavior
Neither of those keys are picked up. And I think the problem is the heuristic we're using here:
https://github.com/microsoft/terminal/blob/9b21b78feea9ff0a0b9ff2c8fa4b4aa5602d3889/src/terminal/parser/stateMachine.cpp#L2055-L2056
That works fine if win32 input mode is always on, or always off, but when it's toggled the `_encounteredWin32InputModeSequence` flag is never reset, so that branch is always skipped.
I don't think this was an issue prior to the new VT passthrough because we never forwarded win32 input mode changes to conpty, so it should have been permanently enabled (assuming it was supported at all). | Product-Conpty,Area-Input,Area-VT,Issue-Bug | low | Minor |
2,472,094,115 | pytorch | Issues compiling `torch` with `mkl` | I am trying to debug some build/compilation issues of `pytorch` using `mkl` as the `lapack`/`blas` provider.
See https://github.com/NixOS/nixpkgs/issues/269271 for downstream issue.
Attempting to build `pytorch` with `mkl` produces the following errors:
```
CMake Error at caffe2/cmake_install.cmake:115 (file):
file RPATH_CHANGE could not write new RPATH:
$ORIGIN:/lib:/lib/intel64:/lib/intel64_win:/lib/win-x64
to the file:
/build/source/torch/lib/libtorch_cuda_linalg.so
The current RUNPATH is:
/nix/store/3l73glzx595bnya3kfk6z6x2d8v5wlhs-python3.12-torch-2.3.1-lib/lib:/nix/store/3ah4c8n0bprsl5bw1hlm78pvjgmk1zqm-mkl-2023.1.0.46342/lib:/nix/store/23j56hv7plgkgmhj8l2aj4mgjk32529h-cuda_cudart-12.2.140-lib/lib:/nix/store/8071nd4ikv569wfqwrqlxkrnrcksdnmy-cuda_nvtx-12.2.140-lib/lib:/nix/store/rnyc2acy5c45pi905ic9cb2iybn35crz-libcublas-12.2.5.6-lib/lib:/nix/store/y3khb4k396kn3pzlj4i5vbjksmlqp04g-libcusolver-11.5.2.141-lib/lib:/nix/store/8s3nkibydv4iqyj5sp1bq0j4ww9xn7mq-libcusparse-12.1.2.141-lib/lib:/nix/store/0wydilnf1c9vznywsvxqnaing4wraaxp-glibc-2.39-52/lib:/nix/store/kgmfgzb90h658xg0i7mxh9wgyx0nrqac-gcc-13.3.0-lib/lib
which does not contain:
/lib:/lib/intel64:/lib/intel64_win:/lib/win-x64:/build/source/build/lib:
as was expected.
```
I was able to narrow the cause of this error down to these lines in `pytorch/cmake/public/mkl.cmake`:
https://github.com/pytorch/pytorch/blob/47ed5f57b09a280cdc32b5c0f39b811749e341aa/cmake/public/mkl.cmake#L19-L23
The problem is that the `MKL_ROOT` variable is not set, so this line ends up adding `/lib`, `/lib/intel64`, `/lib/intel64_win` and `/lib/win-x64` to `INTERFACE_LINK_DIRECTORIES` for `caffe2::mkl` (on a somewhat related note, I am not sure if this line makes sense even if `MKL_ROOT` **was** set to the correct value since the `intel64`, `intel64_win` and `win-x64` subdirectories don't seem to be present in most distributions of MKL).
I am honestly not sure why this line expects `MKL_ROOT` to be set. The `find_package(MKL QUIET)` call earlier in the file ends up calling [`cmake/Modules/FindMKL.cmake`](https://github.com/pytorch/pytorch/blob/0d4cedaa47c7ee22042eb24e87eb3cfe95502404/cmake/Modules/FindMKL.cmake) provided by `pytorch`, which doesn't mention `MKL_ROOT` anywhere. The only place where I could find mentions of `MKL_ROOT` is in the `lib/cmake/mkl/MKLConfig.cmake` file provided by `mkl`, but this file doesn't seem to be called at any point during the build.
---
I am not sure if this `set_property` "hack" is even needed at this point in time. Here is an abbreviated blame history of `MKL_ROOT` usage in `cmake/public/mkl.cmake`:
1) Originally introduced in commit https://github.com/pytorch/pytorch/commit/7b0d577c226fae78f377b26feab4122c4203ad59 (pytorch/pytorch#89359),
mentions issue pytorch/pytorch#73008
https://github.com/pytorch/pytorch/blob/7b0d577c226fae78f377b26feab4122c4203ad59/cmake/public/mkl.cmake#L13-L17
2) Replaced with a loop over `MKL_LIBRARIES` in commit https://github.com/pytorch/pytorch/commit/dc2b7aa95554188155a4e2e087412f06f2f3b642 (pytorch/pytorch#94924)
https://github.com/pytorch/pytorch/blob/dc2b7aa95554188155a4e2e087412f06f2f3b642/cmake/public/mkl.cmake#L10-L19
3) For loop removed, `MKL_ROOT` added back in commit https://github.com/pytorch/pytorch/commit/3226ad21cfdea4a35f13bafd746c3b84a60e5643 (this was a revert of the previous PR)
https://github.com/pytorch/pytorch/blob/3226ad21cfdea4a35f13bafd746c3b84a60e5643/cmake/public/mkl.cmake#L13-L17
4) `intel64`, `intel64_win` and `win-x64` subdirectories added in commit https://github.com/pytorch/pytorch/commit/b5d3d58497c4926c34f6ae6880fb2c03268439c2 (pytorch/pytorch#105525)
https://github.com/pytorch/pytorch/blob/b5d3d58497c4926c34f6ae6880fb2c03268439c2/cmake/public/mkl.cmake#L13-L17
5) The loop over `MKL_LIBRARIES` from step 2 was reintroduced in commit https://github.com/pytorch/pytorch/commit/0cc2f06aec16fe9f41c9c1602c2b8b318cc3ce6c (pytorch/pytorch#104224), however this time the `MKL_ROOT` line was not removed for some reason.
https://github.com/pytorch/pytorch/blob/0cc2f06aec16fe9f41c9c1602c2b8b318cc3ce6c/cmake/public/mkl.cmake#L9-L22
So it's possible that this line is no longer needed after pytorch/pytorch#104224. I was able to successfully build torch with MKL on NixOS after removing the `set_property` call.
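As a sketch of what a guard could look like (untested, written for illustration only — not a submitted patch):

```cmake
# Hypothetical fix sketch: only append MKL_ROOT-derived directories when it is set,
# so an unset MKL_ROOT no longer injects bare /lib, /lib/intel64, etc.
if(DEFINED MKL_ROOT)
  set_property(
    TARGET caffe2::mkl APPEND PROPERTY INTERFACE_LINK_DIRECTORIES
    ${MKL_ROOT}/lib
    ${MKL_ROOT}/lib/intel64
    ${MKL_ROOT}/lib/intel64_win
    ${MKL_ROOT}/lib/win-x64)
endif()
```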
cc @malfet @seemethere @frank-wei @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 | module: build,triaged,module: mkl,module: intel | low | Critical |
2,472,100,531 | material-ui | [joy-ui][Select] Export SelectListbox | ### Summary
Could you export SelectListbox component like AutocompleteListbox component?
### Examples
_No response_
### Motivation
_No response_
**Search keywords**: SelectListbox | on hold,component: select,package: joy-ui,enhancement | low | Minor |
2,472,106,565 | tauri | [bug] [v2] [iOS] Manual Xcode build fails with `cargo: command not found` | ### Describe the bug
I generate an Xcode project through `cargo tauri`. When I then open it and try to build in Xcode manually, the script execution phase fails with the following error (long line, scroll to the end):
```
/Users/stijn/Library/Developer/Xcode/DerivedData/temp-epvdqgocduaasecmmqocxpzwtttl/Build/Intermediates.noindex/temp.build/debug-iphoneos/temp_iOS.build/Script-6BC118688969AAE7C46FEBA4.sh: line 2: cargo: command not found
Command PhaseScriptExecution failed with a nonzero exit code
```
I looked at the script it tries to execute, which is only this:
```sh
#!/bin/sh
cargo tauri ios xcode-script -v --platform ${PLATFORM_DISPLAY_NAME:?} --sdk-root ${SDKROOT:?} --framework-search-paths "${FRAMEWORK_SEARCH_PATHS:?}" --header-search-paths "${HEADER_SEARCH_PATHS:?}" --gcc-preprocessor-definitions "${GCC_PREPROCESSOR_DEFINITIONS:-}" --configuration ${CONFIGURATION:?} ${FORCE_COLOR} ${ARCHS:?}
```
That's weird, because `cargo` is obviously installed on my system. The shell listed is `/bin/sh`, and when I manually open a terminal by running `/bin/sh`, I can run e.g. `cargo --version` just fine.
Replacing `cargo` in the script with `/Users/stijn/.cargo/bin/cargo` does the trick, so there's probably something wrong with the `PATH`.
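As a stopgap, the generated build-phase script can prepend cargo's default install directory before the invocation (a hedged sketch; adjust the path if rustup was installed elsewhere):

```shell
#!/bin/sh
# Hedged workaround sketch: Xcode run-script phases start with a minimal PATH
# that typically lacks ~/.cargo/bin, so prepend cargo's default install dir.
export PATH="$HOME/.cargo/bin:$PATH"
echo "${PATH%%:*}"  # first PATH entry is now $HOME/.cargo/bin
# ...then the original `cargo tauri ios xcode-script ...` line follows unchanged.
```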
### Reproduction
1. Create a new project. I used `pnpm` + Svelte + TS, but I guess this fails with any combination.
1. I also ran `pnpm install` for good measure
1. `cargo tauri ios init`
1. `cargo tauri ios dev --open --host`
1. When Xcode has opened, hit Cmd + B and see the failure
### Expected behavior
I expect the Xcode build to finish
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 14.6.1 X64
✔ Xcode Command Line Tools: installed
✔ rustc: 1.80.1 (3f5fd8dd4 2024-08-06)
✔ cargo: 1.80.1 (376290515 2024-07-16)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-apple-darwin (environment override by RUSTUP_TOOLCHAIN)
- node: 20.14.0
- pnpm: 9.6.0
- yarn: 1.22.22
- npm: 10.7.0
[-] Packages
- tauri [RUST]: 2.0.0-rc (no lockfile)
- tauri-build [RUST]: no manifest (no lockfile)
- wry [RUST]: no manifest (no lockfile)
- tao [RUST]: no manifest (no lockfile)
- tauri-cli [RUST]: 2.0.0-rc.4
- @tauri-apps/api [NPM]: 2.0.0-rc.1
- @tauri-apps/cli [NPM]: 2.0.0-rc.4
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../build
- devUrl: http://localhost:1420/
- framework: Svelte
- bundler: Vite
[-] iOS
- Developer Teams: <redacted>
```
```
### Stack trace
```text
N.A.
```
### Additional context
N.A. | type: bug,status: needs triage,platform: iOS | low | Critical |
2,472,114,825 | rust | Refining generic bounds in trait method | This issue documents an aspect of #100706 (refined trait implementations) that to my knowledge has not been previously discussed or documented: refining generic bounds on trait methods.
It also documents the current rustc behavior, which seems to be quite surprising based on conversations I've had with a half dozen very experienced Rustaceans.
## Code example
Consider the following setup in a fictional `upstream` crate:
```rust
pub trait Super {}
mod private {
pub trait Marker: super::Super {}
}
pub trait Subject {
fn method<IM: private::Marker>(&self);
}
```
In this case, rustc allows both of the following generic type refinements with no errors or warnings:
```rust
struct First;
impl upstream::Subject for First {
fn method<IM: upstream::Super>(&self) {}
}
struct Second;
impl upstream::Subject for Second {
fn method<IM>(&self) {}
}
```
[playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=61560cc45f224196a0437365d69e3f9a)
## Surprising current behavior
In the above code, `First` refines the generic bound from `IM: upstream::private::Marker` to `IM: upstream::Super`, a supertrait of `Marker`. `Second` refines the bound away completely, allowing any type.
It's quite surprising that a bare `<IM>` generic is accepted! It's completely unused and trivially satisfied, and unbounded generics are usually not accepted elsewhere in Rust. Consider for example that `PhantomData` marker fields must be added if a type declares, but does not use, a generic type.
## Relying on the refinement fails at point of use, not declaration
Attempting to rely on either refinement when calling the refined function fail with `the trait bound '<type>: Marker' is not satisfied` errors:
```rust
fn use_first() {
<First as upstream::Subject>::method::<()>(&First);
}
impl upstream::Super for () {}
fn use_second() {
<Second as upstream::Subject>::method::<()>(&Second);
}
```
[playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=aa0ba03dc6f247630d47b21ad43f20c7)
A warning similar to the one for refined return types (`impl trait in impl method signature does not match trait method signature`) would be good. An example of that warning can be seen in [this playground link](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=1d364a825ccd532bd76177ad71fe71bc).
Related to #121718, #100706
@rustbot label +F-refine +T-lang | A-trait-system,T-lang,C-discussion,T-types | low | Critical |
2,472,125,315 | rust | Unsized parameters are not flagged in trait definitions | ### Code
```Rust
trait Trait<T: ?Sized> {
fn method(t: T);
}
```
### Current output
```Shell
<none>
```
### Desired output
```Shell
A warning that the `?Sized` constraint is useless
```
### Rationale and extra context
Any concrete choice of `T` that's unsized will result in [E0277](https://doc.rust-lang.org/stable/error_codes/E0277.html) when the trait is implemented, so marking it as unsized isn't allowing anything that wasn't already valid. This exact error will appear if `method` has a default implementation.
Similar redundant constructions like comparing unsigned values to 0 are flagged, so I'd expect this to be too.
### Other cases
_No response_
### Rust Version
```Shell
stable 1.80.1
```
### Anything else?
_No response_ | A-diagnostics,T-compiler,T-types | low | Critical |
2,472,129,394 | go | runtime: consider using batched shuffle in selectgo | This is a minor optimization opportunity. [runtime.selectgo](https://github.com/golang/go/blob/27093581b2828a2752a6d2711def09517eb2513b/src/runtime/select.go#L121) typically does a shuffle of a fairly small number of channels. https://lemire.me/blog/2024/08/17/faster-random-integer-generation-with-batching/ introduces an optimization for such shuffles. It might be worth incorporating the technique into selectgo. (math/rand/v2 is probably not possible due to backwards compat.) | Performance,help wanted,NeedsInvestigation,compiler/runtime | low | Minor |
2,472,129,526 | rust | E0382: Add help for fixing a partially moved struct by manual assignment | ### Code
```Rust
#[derive(Debug)]
struct A {
a: String,
b: String,
}
fn main(){
let mut a: A = A { a: "test1".to_string(), b: "test2".to_string()};
std::mem::drop(a.b);
// A is now partially moved
// a.b = "fix".to_string(); // fixing the partial move
dbg!(a); // no error
}
```
### Current output
```Shell
error[E0382]: use of partially moved value: `a`
--> src/main.rs:13:5
|
10 | std::mem::drop(a.b);
| --- value partially moved here
...
13 | dbg!(a); // no error
| ^^^^^^^ value used here after partial move
|
= note: partial move occurs because `a.b` has type `String`, which does not implement the `Copy` trait
```
### Desired output
```Shell
error[E0382]: use of partially moved value: `a`
--> src/main.rs:13:5
|
10 | std::mem::drop(a.b);
| --- value partially moved here
...
13 | dbg!(a); // no error
| ^^^^^^^ value used here after partial move
|
= note: partial move occurs because `a.b` has type `String`, which does not implement the `Copy` trait
help: reassign a.b to a new value
12 | a.b = String::default();
| ++++++++++++++++++++++++ assigning to a new value returns the struct to a useable state
```
### Rationale and extra context
I was not aware for a long time that this was possible. It is not explained in the user-facing docs, where it would be nice to have, and it is often sufficient: frequently only the other fields are needed, or the new owner hands back a value that can be assigned into the moved-out field.
It's a more natural way of resolving this compared to using mem::replace before the value is moved.
### Other cases
_No response_
### Rust Version
```Shell
rustc 1.80.1 (3f5fd8dd4 2024-08-06)
binary: rustc
commit-hash: 3f5fd8dd41153bc5fdca9427e9e05be2c767ba23
commit-date: 2024-08-06
host: x86_64-pc-windows-msvc
release: 1.80.1
LLVM version: 18.1.7
```
### Anything else?
I'm not at all sure that "help:" is the correct way of phrasing this, as usually "help:" is used for cases where only one fix exists. Theres many ways to work around this, but i can't recall whether there is any existing phrasing for "have you considered this option?" | A-diagnostics,T-compiler | low | Critical |
2,472,130,618 | stable-diffusion-webui | [Feature Request]: Applying Hires fix should keep the original image in the batch gallery | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
I don't have "Always save all generated images" enabled. I generate a batch of images, select some of them to hires fix and only save the ones I like. The hires fixed image replaces the original one in the batch gallery, making it impossible to apply hires fix with different settings on the original one. I want to keep the original image in the batch gallery so that it can be hires fixed multiple times. Hires fixed copies should be appended to the list of images in the batch gallery.
### Proposed workflow
1. Don't have "Always save all generated images" option enabled
2. Generate an image
3. Hires fix it using the upscale button in the toolbar under the batch gallery
4. Upscaled image is appended to the list of images in the batch gallery
5. Run the hires fix again on the original image
6. Go to 4.
### Additional information
There is a setting "Save a copy of image before applying highres fix."
1. It only saves the images to disk, and only if "Always save all generated images" is enabled
2. "highres fix" is misspelled | enhancement | low | Minor |
2,472,133,542 | godot | D3D12: Incorrect OmniLight3D dual paraboloid shadows | ### Tested versions
- Reproducible in v4.3:stable
### System information
Godot v4.3.stable - Windows 10.0.19045 - d3d12 (Forward+) - dedicated NVIDIA GeForce RTX 2060 SUPER (NVIDIA; 31.0.15.4601) - AMD Ryzen 5 3600 6-Core Processor (12 Threads)
### Issue description
In the D3D12 renderer, OmniLight3D dual paraboloid shadows seem to be broken: one side of the shadow disappears. Vulkan renders both sides of the shadow.
D3D12 video
https://github.com/user-attachments/assets/30cba27f-3b8d-49b7-8f88-540db0980373
Vulkan video
https://github.com/user-attachments/assets/2331cacd-28fc-401e-a8a7-dda9250d265a
### Steps to reproduce
* Set d3d12 rendering driver in Project Settings
* Create OmniLight3D with dual paraboloid mode and enable shadows
* Check both sides of OmniLight3D
### Minimal reproduction project (MRP)
[d3d12_omni_dualp_shadow.zip](https://github.com/user-attachments/files/16651923/d3d12_omni_dualp_shadow.zip)
| bug,platform:windows,topic:rendering,topic:3d | low | Critical |
2,472,139,649 | three.js | [Feature] USDZ and USD loader using TinyUSDZ | ### Description
I'd like to share the status of TinyUSDZLoader for three.js. It's a USD loader module in wasm.

https://x.com/syoyo/status/1825270359211548719
https://github.com/lighttransport/tinyusdz/tree/dev/sandbox/threejs
## Current status
A mesh with a texture possible.
## Advantage
* (Mostly) full-featured USDZ support (handles both USDA ASCII and USDC binary). By contrast, the current USDZLoader.js in three.js https://github.com/mrdoob/three.js/blob/dev/examples/jsm/loaders/USDZLoader.js is limited to USDA only.
* Better material/texture support (e.g. loading a broader range of EXR images is possible thanks to TinyEXR, which is embedded in the TinyUSDZ module).
* Portable. No need for SharedArrayBuffer and Atomics (which are requirements of the wasm build of OpenUSD).
## Disadvantage
* Larger WASM size. Roughly 2MB uncompressed, 600kb when compressed.
We still need to implement more features (scene graphs, better reconstruction of materials/textures suited for three.js, skinning/animations, etc.), but I hope a USDZLoader based on TinyUSDZ is a promising way to load USD/USDZ in three.js.
### Solution
N/A
### Alternatives
N/A
### Additional context
_No response_ | Loaders | low | Minor |
2,472,142,079 | godot | Projects using the "compatibility" rendering engine take a few seconds to run now | ### Tested versions
- Reproducible in: v4.3.stable.official [77dcf97d8]
- Not Reproducible in: v4.2.stable.official [46dc277], v4.2.2.stable.official [15073af]
### System information
Godot v4.3.stable - Windows 10.0.22631 - GLES3 (Compatibility) - NVIDIA GeForce RTX 4050 Laptop GPU (NVIDIA; 32.0.15.5599) - 13th Gen Intel(R) Core(TM) i7-13650HX (20 Threads)
### Issue description
When you click the "Run Project" button, or the "Run Current Scene" button, or the "Run Specific Scene" button, then the window for the project will take a few seconds to open. But that's only if the rendering engine is set to compatibility. This makes it slower to playtest, since before 4.3 the project would run almost instantly. And this happens no matter how big or small the project/scene is.
### Steps to reproduce
- Create a new project with "compatibility" as the rendering engine, or take an existing project and switch it to compatibility
- If needed, create an empty scene for the project (it doesn't have to be empty, the scene can be anything)
- Run the project
### Minimal reproduction project (MRP)
[new-game-project.zip](https://github.com/user-attachments/files/16651992/new-game-project.zip)
| bug,platform:windows,topic:rendering,needs testing,regression,performance | medium | Major |
2,472,145,817 | TypeScript | Add `--strictObjectIterables` (bikeshed): exclude primitive (`string`) from `Iterable` types | ### 🔍 Search Terms
"iterable object", "string"
### ✅ Viability Checklist
- [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [X] This wouldn't change the runtime behavior of existing JavaScript code
- [X] This could be implemented without emitting different JS based on the types of the expressions
- [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [X] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [X] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
At the TC39 meeting in 2024-07, it was decided that `Iterable` expects objects.
> ## Reject primitives in iterable-taking positions
>
> Any time an iterable or async-iterable value (a value that has a `Symbol.iterator` or `Symbol.asyncIterator` method) is expected, primitives should be treated as if they were not iterable. Usually, this will mean throwing a `TypeError`. If the user provides a primitive wrapper Object such as a String Object, however, it should be treated like any other Object.
>
> Although primitive Strings are default iterable (`String.prototype` has a `Symbol.iterator` method which enumerates code points), it is now considered a mistake to iterate a String without specifying whether the String is providing an abstraction over code units, code points, grapheme clusters, or something else.
>
> NB: This convention is new as of 2024, and most earlier parts of the language do not follow it. In particular, positional destructuring (both binding and assignment), array spread, argument spread, for-of loops, `yield *`, the `Set` and `AggregateError` constructors, `Object.groupBy`, `Map.groupBy`, `Promise.all`, `Promise.allSettled`, `Promise.any`, `Promise.race`, `Array.from`, the static `from` methods on typed array constructors, and `Iterator.from` (Stage 3 at time of writing) all accept primitives where iterables are expected.
https://github.com/tc39/how-we-work/blob/main/normative-conventions.md#reject-primitives-in-iterable-taking-positions
To follow this decision, the `Iterable` and `string` types should be separated, and APIs that accept both types should be required to explicitly specify `Iterable<any> | string`. Since this would be a breaking change, how about adding a new `--strictObjectIterables` (bikeshed) option?
In practice, the upcoming `ReadableStream.from` will accept `Iterable<any>` and `AsyncIterable<any>`, but will be restricted to objects. https://github.com/whatwg/streams/pull/1310
### 📃 Motivating Example
Enabling the `--strictObjectIterables` option raises a type error in the following example:
```ts
const iterable: Iterable<any> = "string";
```
### 💻 Use Cases
1. What do you want to use this for?
Used for `ReadableStream.from` and other APIs to be added in the future.
2. What shortcomings exist with current approaches?
`Iterable` object and `string` are not distinguished by default.
3. What workarounds are you using in the meantime?
Maybe `Iterable<any> & object` can reject `string`. | Suggestion,Needs Proposal | low | Critical |
2,472,150,203 | godot | [TRACKER] Direct3D 12 rendering driver issues | This is a tracker for issues related to the Direct3D 12 (D3D12) rendering driver.
This issue does **not** track issues specific to ANGLE (the `opengl3_angle` rendering driver, which means OpenGL running over Direct3D).
- See also https://github.com/godotengine/godot/issues/95919.
```[tasklist]
### Bugs
- [ ] https://github.com/godotengine/godot/issues/94251
- [ ] https://github.com/godotengine/godot/issues/95053
- [ ] https://github.com/godotengine/godot/issues/95619
- [ ] https://github.com/godotengine/godot/issues/95767
- [ ] https://github.com/godotengine/godot/issues/98158
- [ ] https://github.com/godotengine/godot/issues/98207
- [ ] https://github.com/godotengine/godot/issues/98527
- [ ] https://github.com/godotengine/godot/issues/101845
```
```[tasklist]
### Enhancements
- [ ] https://github.com/godotengine/godot-proposals/issues/10078
```
| bug,enhancement,platform:windows,topic:rendering,tracker | low | Critical |
2,472,162,862 | tensorflow | HLO for TopK oddly casts uint8 input to uint32 before passing to radix sort. | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
TF 2 (HEAD of internal repo)
### Custom code
Yes
### OS platform and distribution
Google-Internal Environment
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
V100 (also reproducible on other GPUs)
### Current behavior?
An Alphabet model invokes `tf.math.top_k` with a tensor of dtype uint8 and shape (1, 1, 32768).

For this call, XLA ends up calling radix sort. However, the radix sort is sub-optimal because TensorFlow casts the inputs to uint32 (instead of using original dtype uint8). Of course, radix sort is faster across smaller dtypes (with fewer bytes).
> @@(u32[1,32768]{1,0}, s32[1,32768]{1,0}, u8[271871]{0}) custom-call(u32[1,32768]{1,0}, s32[1,32768]{1,0}), custom_call_target="__cub$DeviceRadixSort
I would expect HLO text more like this, where the uint8 inputs are passed directly:
> (u8[1,32768]{1,0}, s32[1,32768]{1,0}, u8[310527]{0}) custom-call(u8[1,32768]{1,0}, s32[1,32768]{1,0}), custom_call_target="__cub$DeviceRadixSort"
We actually use TF indirectly via the jax2tf bridge, and I see this comment in the code hinting that uint8 is incompatible with `tf.math.top_k`:
https://github.com/google/jax/blob/0b87bf48f97ace10c7aee19c8f980788891a2df7/jax/experimental/jax2tf/jax2tf.py#L3167
However, recently, @dimitar-asenov (on XLA GPU) has made some changes to XLA sorting logic that provides support for radix-sorting uint8.
Could `tf.math.top_k` lower to HLO that avoids this up-cast to uint32 before radix sort? I believe this would halve the latency of radix sort for uint8.
### Standalone code to reproduce the issue
```shell
To repro, collect an XProf trace for the following snippet. See attached screenshot for sample trace.
import tensorflow as tf
data = tf.random.uniform(
shape=(1, 32768),
minval=0,
maxval=256,
dtype=tf.int32,
)
data = tf.cast(data, tf.uint8)
_, box_indices = tf.math.top_k(data, k=1024)
Or feel free to just find and run my internal experimental `benchmark_tf_top_k_uint8` binary on a machine with a GPU.
```
### Relevant log output
_No response_ | stat:awaiting tensorflower,type:bug,comp:ops,2.17 | medium | Critical |
2,472,175,535 | rust | `single_use_lifetimes`: false positive (suggests unstable anonymous lifetimes in `impl Trait`) | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
Consider this code ([playground](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=9ec3f4e9344318db60186b47cd48e5f0)):
```rust
#[warn(single_use_lifetimes)]
// warning: lifetime parameter `'a` only used once
fn foo<'a>(x: impl IntoIterator<Item = &'a i32>) -> Vec<i32> {
x.into_iter().copied().collect()
}
fn main() {
foo(&[1, 2, 3]);
}
```
The warning is issued: ``warning: lifetime parameter `'a` only used once``. However, removing the lifetime as suggested makes the code fail to compile:
```rust
#[warn(single_use_lifetimes)]
// error[E0658]: anonymous lifetimes in `impl Trait` are unstable
fn foo(x: impl IntoIterator<Item = &i32>) -> Vec<i32> {
x.into_iter().copied().collect()
}
fn main() {
foo(&[1, 2, 3]);
}
```
So the compiler contradicts itself.
I suspect `single_use_lifetimes` does not correctly verify whether the compiler feature is available. My understanding is that it will be stabilized in the 2024 edition (https://github.com/rust-lang/rust/issues/117587).
### Meta
> If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
I tried in the playground by setting "nightly" and both 2021 and 2024 editions, and the problem is currently still there. | A-lints,C-bug,D-incorrect,F-lint-single_use_lifetimes | low | Critical |
2,472,193,018 | godot | Invisible nodes when pasting nodes from unsaved scene. | ### Tested versions
Reproducible in Godot 4.2.2 (Stable)
### System information
Godot v4.2.2.stable - macOS 14.5.0 - GLES3 (Compatibility) - Intel(R) Iris(TM) Plus Graphics OpenGL Engine (1x6x8 (fused) LP - Intel(R) Core(TM) i3-1000NG4 CPU @ 1.10GHz (4 Threads)
### Issue description
When pasting nodes from a newly created and unsaved scene, all nodes except for the root node will be invisible (maybe deleted). When re-adding the same nodes with the same name to the scene, the name will become [name]2 or more.
### Steps to reproduce
1) Click the + on the scene bar to add a new scene.
2) Choose "2D Scene". Works with other nodes as well.
3) Use Command-A to add a new Sprite2D, or use the + button to add a child Sprite2D to the root. Some other nodes work.
4) With only the Sprite2D selected, hold shift and click the root Node2D to select all nodes.
5) Copy them with Command+C
6) Without saving the scene, create a new scene like you did in step 1.
7) In the new scene, use Command+V to paste the nodes.
8. Like you did in step 3, add a new Sprite2D (or another node of the same kind added in step 3). The name will be Sprite2D2. Changing the name to Sprite2D and hitting enter will rename it to Sprite2D2.
https://github.com/user-attachments/assets/56a4855a-90f8-47c0-9c15-a2ac7abecae3
### Minimal reproduction project (MRP)
Just create a new project and follow the steps above. | bug,topic:editor | low | Minor |
2,472,214,723 | vscode | VS Code crash/freeze my PC [Ubuntu] |
Does this issue occur when all extensions are disabled?: It is very random, I'm using Platformio and Ionic extensions.
- VS Code Version: 1.92.2
- OS Version: Ubuntu 24.04 LTS
Steps to Reproduce :
1. It usually happens when I try to open a project on a disk that is not mounted.
2. I mount the disk, then reopen the project from "Open Folder", and while it is loading my PC sometimes freezes. When this happens, the mouse stops responding, and if I am playing a video, for example on YouTube in the Brave browser, the sound keeps looping a very small fragment, I estimate a few ms.
3. When I reset the PC and try to mount the disk where I have the projects, I can't because it says it has errors and I have to repair it to be able to mount it again.

4. This has been happening to me for quite some time, about more than a year, with versions prior to the one reported here and with different PCs. On both PCs I have Ubuntu and I use nVidia video cards with the corresponding proprietary driver.
5. Attached the log from the .config/Code/logs folder.
[20240818T203627.zip](https://github.com/user-attachments/files/16653357/20240818T203627.zip)
6. I have tried forcing the bug under exactly the same conditions (forgetting to mount the disk), once with the extensions disabled and once with them enabled, but I couldn't reproduce it again.
I will post any extra information I can get. | freeze-slow-crash-leak | low | Critical |
2,472,218,848 | TypeScript | `getTextOfJSDocComment` introduces a space in JSDoc comments | ### 🔎 Search Terms
getJSDocTags
getTextOfJSDocComment
### 🕗 Version & Regression Information
Tested on 5.5.4, also existed on previous versions.
### ⏯ Playground Link
https://stackblitz.com/edit/gettextofjsdoccomment-dbm6rj?file=main.ts,package-lock.json
### 💻 Code
```ts
import ts from 'typescript';
const filename = 'example.ts';
const code = `
/**
*
* @see {@link entry()}
*/
class Foo {};
`;
const sourceFile = ts.createSourceFile(
filename,
code,
ts.ScriptTarget.ESNext,
true
);
const visitNode = (node: ts.Node) => {
ts.forEachChild(node, visitNode);
const tags = ts.getJSDocTags(node);
if (tags.length) {
tags.forEach((tag) => {
const comment = tag.comment!.slice(1) as any;
console.log(
tag.tagName.getText(),
'> ',
ts.getTextOfJSDocComment(comment)
);
//console.log(comment);
});
}
};
ts.forEachChild(sourceFile, visitNode);
```
### 🙁 Actual behavior
`getTextOfJSDocComment` returns `{@link entry ()}`. Notice the space in between.
### 🙂 Expected behavior
`getTextOfJSDocComment` returns `{@link entry()}`. Notice there is no space in between.
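A hedged sketch of the suspected failure mode (a Python analogy, not the actual TypeScript implementation): if the `{@link ...}` body is parsed into a target name plus trailing text and then rejoined with a separator, the space gets reintroduced:

```python
# Hypothesis sketch: the parser splits the {@link ...} body into a
# target name and trailing text, and reconstruction joins them with a
# space even when the original had none -- turning `entry()` into
# `entry ()`.
name, rest = "entry", "()"   # as parsed from "{@link entry()}"
naive = "{@link " + name + " " + rest + "}"   # space reintroduced
faithful = "{@link " + name + rest + "}"      # byte-faithful join
print(naive)     # {@link entry ()}
print(faithful)  # {@link entry()}
```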
### Additional information about the issue
This is a variation of #58584.
Also noticed it happened with `{@link core/inject}` becoming `{@link core /inject}` | Bug,Help Wanted,Domain: JSDoc | low | Minor |
2,472,248,570 | pytorch | DISABLED test_device_mode_ops_sparse_mm_reduce_cpu_float16 (__main__.TestDeviceUtilsCPU) | Platforms: asan, linux, mac, macos, win, windows
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_device_mode_ops_sparse_mm_reduce_cpu_float16&suite=TestDeviceUtilsCPU&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/28915224686).
Over the past 3 hours, it has been determined flaky in 30 workflow(s) with 90 failures and 30 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_device_mode_ops_sparse_mm_reduce_cpu_float16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "C:\actions-runner\_work\pytorch\pytorch\test\test_utils.py", line 1164, in test_device_mode_ops
for sample in samples:
File "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\testing\_internal\common_utils.py", line 388, in __next__
input_idx, input_val = next(self.child_iter)
File "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\utils\_contextlib.py", line 36, in generator_context
response = gen.send(None)
File "C:\actions-runner\_work\pytorch\pytorch\build\win_tmp\build\torch\testing\_internal\common_methods_invocations.py", line 1227, in sample_inputs_sparse_mm_reduce
torch.eye(m, m)
RuntimeError: Please register PrivateUse1HooksInterface by `RegisterPrivateUse1HooksInterface` first.
To execute this test, run the following from the base repo dir:
python test\test_utils.py TestDeviceUtilsCPU.test_device_mode_ops_sparse_mm_reduce_cpu_float16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_utils.py`
cc @clee2000 | triaged,module: flaky-tests,skipped,module: unknown | medium | Critical |
2,472,285,599 | godot | `StreamPeerBuffer` fails to return only a part of the string previously stored on it | ### Tested versions
- Reproducible in: 4.3.stable
### System information
Godot v4.3.stable - macOS 14.6.1 - GLES3 (Compatibility) - Intel(R) Iris(TM) Plus Graphics OpenGL Engine - Intel(R) Core(TM) i7-1068NG7 CPU @ 2.30GHz (8 Threads)
### Issue description
If you put a string or a UTF-8 string into a `StreamPeerBuffer` and then try to get just a part of it, you will get an empty or corrupted string.
### Steps to reproduce
Here is a small script to reproduce the issue.
```gdscript
extends Node
# Called when the node enters the scene tree for the first time.
func _ready() -> void:
var buff = StreamPeerBuffer.new()
buff.put_string("Hello, World!!!")
buff.seek(0)
print("->" + buff.get_string(5) + "<-")
buff.clear()
buff.put_utf8_string("Hello✩, World✩!!!")
buff.seek(0)
print("->" + buff.get_utf8_string(8) + "<-")
```
This script will print:
```
-><-
-><-
```
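A plausible explanation — an assumption based on the documented wire format, not verified against the engine source — is that the 32-bit length prefix written by `put_string` is being consumed as string data when `get_string` is called with an explicit byte count. A Python simulation of the first 5 bytes of the stream:

```python
import struct

# Hypothesis: put_string() prepends a little-endian u32 length, while
# get_string(n) with an explicit n reads n raw bytes with no prefix
# handling, so the length header is consumed as string data.
payload = "Hello, World!!!".encode("ascii")
stream = struct.pack("<I", len(payload)) + payload

first_five = stream[:5]          # b'\x0f\x00\x00\x00H'
# Read as a NUL-terminated C string, everything after the first zero
# byte is dropped, leaving a single unprintable 0x0f character that
# renders as nothing between the '->' and '<-' markers.
visible = first_five.split(b"\x00")[0]
print(visible)   # b'\x0f'
```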
### Minimal reproduction project (MRP)
[streampeerbuffergetpartialstringbug.zip](https://github.com/user-attachments/files/16654388/streampeerbuffergetpartialstringbug.zip)
| discussion,topic:core | low | Critical |
2,472,335,532 | tauri | [bug] does not work when run `tauri ios init` from a globally installed CLI. | ### Describe the bug
Running `tauri ios init` from a globally installed CLI does not work. The generated preBuildScripts entry at `gen/apple/project.yml` is:
```
preBuildScripts:
- script: node tauri ios xcode-script -v --platform
```
The run-script error is `Error: Cannot find module '[PATH]/gen/apple/tauri'`.
There seems to be a problem here:
https://github.com/tauri-apps/tauri/blob/dev/tooling/cli/src/mobile/init.rs#L147-L174
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 14.4.1 X64
✔ Xcode Command Line Tools: installed
✔ rustc: 1.80.1 (3f5fd8dd4 2024-08-06)
✔ cargo: 1.80.1 (376290515 2024-07-16)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 20.12.2
- pnpm: 8.9.0
- yarn: 1.17.3
- npm: 10.5.0
[-] Packages
- tauri [RUST]: 2.0.0-rc.2 (no lockfile)
- tauri-build [RUST]: no manifest (no lockfile)
- wry [RUST]: no manifest (no lockfile)
- tao [RUST]: no manifest (no lockfile)
- tauri-cli [RUST]: 2.0.0-alpha.7
- @tauri-apps/api [NPM]: 2.0.0-rc.1
- @tauri-apps/cli [NPM]: 2.0.0-rc.3
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- bundler: Rollup
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage,platform: iOS | low | Critical |
2,472,342,785 | tauri | [bug] target not recognized: Unrecognized operating system: ios13.0.0 | ### Describe the bug
I tried deleting `gen/apple` and re-running `cargo tauri ios init`, but this error keeps coming up when I use `cargo tauri ios dev`.
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 14.6.1 X64
✔ Xcode Command Line Tools: installed
✔ rustc: 1.82.0-nightly (506052d49 2024-08-16)
✔ cargo: 1.82.0-nightly (2f738d617 2024-08-13)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: nightly-aarch64-apple-darwin (environment override by RUSTUP_TOOLCHAIN)
- node: 22.6.0
- npm: 10.8.2
- bun: 1.0.36
[-] Packages
- tauri [RUST]: 2.0.0-rc.3
- tauri-build [RUST]: 2.0.0-rc.3
- wry [RUST]: 0.42.0
- tao [RUST]: 0.29.0
- tauri-cli [RUST]: 2.0.0-rc.4
[-] App
- build-type: bundle
- CSP: default-src 'self'; img-src 'self' data:;; connect-src ipc: http://ipc.localhost
- frontendDist: ../dist
- devUrl: http://localhost:5173/
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage,platform: iOS | low | Critical |
2,472,376,168 | next.js | generateStaticParams globalThis (global) singleton problem | ### Link to the code that reproduces this issue
https://github.com/vaneenige/next-app-router-singleton
### To Reproduce
1. Set a new key in the global object
2. Try to get this key from the global object inside generateStaticParams
### Current vs. Expected behavior
I expected the global object to be the same across these environments
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6030
Available memory (MB): 18432
Available CPU cores: 12
Binaries:
Node: 20.11.1
npm: 10.2.4
Yarn: N/A
pnpm: 9.5.0
Relevant Packages:
next: 14.2.5 // Latest available version is detected (14.2.5).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
output: export
```
### Which area(s) are affected? (Select all that apply)
Performance, Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local)
### Additional context
For example, I declare a new key in the global object using instrumentation.ts (it doesn't really matter where you do it)
<img width="423" alt="1" src="https://github.com/user-attachments/assets/6d52ca58-c67f-4338-a333-b13bece1427b">
When I try to get it inside the generateStaticParams function, I get undefined
<img width="534" alt="2" src="https://github.com/user-attachments/assets/d0027aa1-6247-4dd6-94d4-eb3959001b05">
But after trying the same thing in the component, I got the expected result:
<img width="448" alt="3" src="https://github.com/user-attachments/assets/d0a7db3b-ba13-4459-9512-5ea920235244">
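A stdlib sketch of the suspected cause (module and variable names are hypothetical): each environment evaluates the module graph independently, so a value set on one copy of the global state is missing from the other:

```python
import types

SOURCE = "store = {}\n"   # stands in for a module holding global state

def load_module():
    # Each bundle evaluates the module source independently, which is
    # effectively what happens when generateStaticParams runs in its
    # own worker: same code, fresh module (and global) state.
    mod = types.ModuleType("app_state")
    exec(SOURCE, mod.__dict__)
    return mod

main_copy = load_module()     # e.g. the instrumentation environment
worker_copy = load_module()   # e.g. the generateStaticParams worker

main_copy.store["key"] = "value"
print(main_copy.store.get("key"), worker_copy.store.get("key"))  # value None
```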
I found similar issues and they seem to be related:
[https://github.com/vercel/next.js/issues/65350](https://github.com/vercel/next.js/issues/65350)
[https://github.com/vercel/next.js/issues/52165](https://github.com/vercel/next.js/issues/52165)
It seems to be running in a separate environment. But how can I solve this problem? | bug,Performance,Runtime | low | Major |
2,472,443,054 | flutter | App Exits Instead of Navigating Back with StatefullShellRoute in go_router on Android (flutter >= 3.24) | ### Steps to reproduce
1. Modify MainActivity.kt to use FlutterFragmentActivity instead of FlutterActivity.
2. Run the official example of StatefullShellRoute from the go_router package. The example can be found [here](https://github.com/flutter/packages/blob/main/packages/go_router/example/lib/stateful_shell_route.dart).
3. Tap the “View details” button in “Section A” to navigate to the “DetailsScreen - A” screen.
4. Switch to “Section B.”
5. Switch back to “Section A” and use the Android back button or swipe gesture to go back.
6. Issue: The app exits instead of returning to the “Root of section A.”
### Expected results
The app should navigate back to the “Root of section A” when the Android back button or swipe gesture is used.
### Actual results
The app exits instead of navigating back to the previous screen.
### Code sample
<details open><summary>Code sample</summary>
```kotlin
import io.flutter.embedding.android.FlutterFragmentActivity
class MainActivity: FlutterFragmentActivity()
```
```dart
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
final GlobalKey<NavigatorState> _rootNavigatorKey =
GlobalKey<NavigatorState>(debugLabel: 'root');
final GlobalKey<NavigatorState> _sectionANavigatorKey =
GlobalKey<NavigatorState>(debugLabel: 'sectionANav');
// This example demonstrates how to setup nested navigation using a
// BottomNavigationBar, where each bar item uses its own persistent navigator,
// i.e. navigation state is maintained separately for each item. This setup also
// enables deep linking into nested pages.
void main() {
runApp(NestedTabNavigationExampleApp());
}
/// An example demonstrating how to use nested navigators
class NestedTabNavigationExampleApp extends StatelessWidget {
/// Creates a NestedTabNavigationExampleApp
NestedTabNavigationExampleApp({super.key});
final GoRouter _router = GoRouter(
navigatorKey: _rootNavigatorKey,
initialLocation: '/a',
routes: <RouteBase>[
// #docregion configuration-builder
StatefulShellRoute.indexedStack(
builder: (BuildContext context, GoRouterState state,
StatefulNavigationShell navigationShell) {
// Return the widget that implements the custom shell (in this case
// using a BottomNavigationBar). The StatefulNavigationShell is passed
// to be able access the state of the shell and to navigate to other
// branches in a stateful way.
return ScaffoldWithNavBar(navigationShell: navigationShell);
},
// #enddocregion configuration-builder
// #docregion configuration-branches
branches: <StatefulShellBranch>[
// The route branch for the first tab of the bottom navigation bar.
StatefulShellBranch(
navigatorKey: _sectionANavigatorKey,
routes: <RouteBase>[
GoRoute(
// The screen to display as the root in the first tab of the
// bottom navigation bar.
path: '/a',
builder: (BuildContext context, GoRouterState state) =>
const RootScreen(label: 'A', detailsPath: '/a/details'),
routes: <RouteBase>[
// The details screen to display stacked on navigator of the
// first tab. This will cover screen A but not the application
// shell (bottom navigation bar).
GoRoute(
path: 'details',
builder: (BuildContext context, GoRouterState state) =>
const DetailsScreen(label: 'A'),
),
],
),
],
),
// #enddocregion configuration-branches
// The route branch for the second tab of the bottom navigation bar.
StatefulShellBranch(
// It's not necessary to provide a navigatorKey if it isn't also
// needed elsewhere. If not provided, a default key will be used.
routes: <RouteBase>[
GoRoute(
// The screen to display as the root in the second tab of the
// bottom navigation bar.
path: '/b',
builder: (BuildContext context, GoRouterState state) =>
const RootScreen(
label: 'B',
detailsPath: '/b/details/1',
secondDetailsPath: '/b/details/2',
),
routes: <RouteBase>[
GoRoute(
path: 'details/:param',
builder: (BuildContext context, GoRouterState state) =>
DetailsScreen(
label: 'B',
param: state.pathParameters['param'],
),
),
],
),
],
),
// The route branch for the third tab of the bottom navigation bar.
StatefulShellBranch(
routes: <RouteBase>[
GoRoute(
// The screen to display as the root in the third tab of the
// bottom navigation bar.
path: '/c',
builder: (BuildContext context, GoRouterState state) =>
const RootScreen(
label: 'C',
detailsPath: '/c/details',
),
routes: <RouteBase>[
GoRoute(
path: 'details',
builder: (BuildContext context, GoRouterState state) =>
DetailsScreen(
label: 'C',
extra: state.extra,
),
),
],
),
],
),
],
),
],
);
@override
Widget build(BuildContext context) {
return MaterialApp.router(
title: 'Flutter Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
routerConfig: _router,
);
}
}
/// Builds the "shell" for the app by building a Scaffold with a
/// BottomNavigationBar, where [child] is placed in the body of the Scaffold.
class ScaffoldWithNavBar extends StatelessWidget {
/// Constructs an [ScaffoldWithNavBar].
const ScaffoldWithNavBar({
required this.navigationShell,
Key? key,
}) : super(key: key ?? const ValueKey<String>('ScaffoldWithNavBar'));
/// The navigation shell and container for the branch Navigators.
final StatefulNavigationShell navigationShell;
// #docregion configuration-custom-shell
@override
Widget build(BuildContext context) {
return Scaffold(
// The StatefulNavigationShell from the associated StatefulShellRoute is
// directly passed as the body of the Scaffold.
body: navigationShell,
bottomNavigationBar: BottomNavigationBar(
// Here, the items of BottomNavigationBar are hard coded. In a real
// world scenario, the items would most likely be generated from the
// branches of the shell route, which can be fetched using
// `navigationShell.route.branches`.
items: const <BottomNavigationBarItem>[
BottomNavigationBarItem(icon: Icon(Icons.home), label: 'Section A'),
BottomNavigationBarItem(icon: Icon(Icons.work), label: 'Section B'),
BottomNavigationBarItem(icon: Icon(Icons.tab), label: 'Section C'),
],
currentIndex: navigationShell.currentIndex,
// Navigate to the current location of the branch at the provided index
// when tapping an item in the BottomNavigationBar.
onTap: (int index) => navigationShell.goBranch(index),
),
);
}
// #enddocregion configuration-custom-shell
/// NOTE: For a slightly more sophisticated branch switching, change the onTap
/// handler on the BottomNavigationBar above to the following:
/// `onTap: (int index) => _onTap(context, index),`
// ignore: unused_element
void _onTap(BuildContext context, int index) {
// When navigating to a new branch, it's recommended to use the goBranch
// method, as doing so makes sure the last navigation state of the
// Navigator for the branch is restored.
navigationShell.goBranch(
index,
// A common pattern when using bottom navigation bars is to support
// navigating to the initial location when tapping the item that is
// already active. This example demonstrates how to support this behavior,
// using the initialLocation parameter of goBranch.
initialLocation: index == navigationShell.currentIndex,
);
}
}
/// Widget for the root/initial pages in the bottom navigation bar.
class RootScreen extends StatelessWidget {
/// Creates a RootScreen
const RootScreen({
required this.label,
required this.detailsPath,
this.secondDetailsPath,
super.key,
});
/// The label
final String label;
/// The path to the detail page
final String detailsPath;
/// The path to another detail page
final String? secondDetailsPath;
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text('Root of section $label'),
),
body: Center(
child: Column(
mainAxisSize: MainAxisSize.min,
children: <Widget>[
Text('Screen $label',
style: Theme.of(context).textTheme.titleLarge),
const Padding(padding: EdgeInsets.all(4)),
TextButton(
onPressed: () {
GoRouter.of(context).go(detailsPath, extra: '$label-XYZ');
},
child: const Text('View details'),
),
const Padding(padding: EdgeInsets.all(4)),
if (secondDetailsPath != null)
TextButton(
onPressed: () {
GoRouter.of(context).go(secondDetailsPath!);
},
child: const Text('View more details'),
),
],
),
),
);
}
}
/// The details screen for either the A or B screen.
class DetailsScreen extends StatefulWidget {
/// Constructs a [DetailsScreen].
const DetailsScreen({
required this.label,
this.param,
this.extra,
this.withScaffold = true,
super.key,
});
/// The label to display in the center of the screen.
final String label;
/// Optional param
final String? param;
/// Optional extra object
final Object? extra;
/// Wrap in scaffold
final bool withScaffold;
@override
State<StatefulWidget> createState() => DetailsScreenState();
}
/// The state for DetailsScreen
class DetailsScreenState extends State<DetailsScreen> {
int _counter = 0;
@override
Widget build(BuildContext context) {
if (widget.withScaffold) {
return Scaffold(
appBar: AppBar(
title: Text('Details Screen - ${widget.label}'),
),
body: _build(context),
);
} else {
return ColoredBox(
color: Theme.of(context).scaffoldBackgroundColor,
child: _build(context),
);
}
}
Widget _build(BuildContext context) {
return Center(
child: Column(
mainAxisSize: MainAxisSize.min,
children: <Widget>[
Text('Details for ${widget.label} - Counter: $_counter',
style: Theme.of(context).textTheme.titleLarge),
const Padding(padding: EdgeInsets.all(4)),
TextButton(
onPressed: () {
setState(() {
_counter++;
});
},
child: const Text('Increment counter'),
),
const Padding(padding: EdgeInsets.all(8)),
if (widget.param != null)
Text('Parameter: ${widget.param!}',
style: Theme.of(context).textTheme.titleMedium),
const Padding(padding: EdgeInsets.all(8)),
if (widget.extra != null)
Text('Extra: ${widget.extra!}',
style: Theme.of(context).textTheme.titleMedium),
if (!widget.withScaffold) ...<Widget>[
const Padding(padding: EdgeInsets.all(16)),
TextButton(
onPressed: () {
GoRouter.of(context).pop();
},
child: const Text('< Back',
style: TextStyle(fontWeight: FontWeight.bold, fontSize: 18)),
),
]
],
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/b59a1e8d-0fd2-4a9a-a198-13c92fc0f463
</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.0, on macOS 14.5 23F79 darwin-arm64, locale ja-JP)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[!] Xcode - develop for iOS and macOS (Xcode 15.2)
✗ CocoaPods installed but not working.
You appear to have CocoaPods installed but it is not working.
This can happen if the version of Ruby that CocoaPods was installed with is different from the one being used to invoke
it.
This can usually be fixed by re-installing CocoaPods.
For re-installation instructions, see https://guides.cocoapods.org/using/getting-started.html#installation
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.1)
[✓] VS Code (version 1.92.2)
[✓] Connected device (6 available)
[✓] Network resources
! Doctor found issues in 1 category.
```
</details>
### Additional Information
- This issue does not occur in Flutter SDK version 3.22.3 but starts occurring from version 3.24.0.
- The issue only occurs when using FlutterFragmentActivity. It does not occur when using FlutterActivity.
| c: regression,package,has reproducible steps,P1,p: go_router,fyi-android,team-go_router,triaged-go_router,:hourglass_flowing_sand:,found in release: 3.24,found in release: 3.25 | medium | Critical |
2,472,479,615 | pytorch | Unexpected behavior of `CrossEntropyLoss()` | ### 🐛 Describe the bug
[CrossEntropyLoss()](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html) with a 1D tensor of size 1 and a 0D float tensor produces the error message shown below:
```python
import torch
from torch import nn
tensor1 = torch.tensor([7.2]) # Here
tensor2 = torch.tensor(1.5) # Here
cel = nn.CrossEntropyLoss()
cel(input=tensor1, target=tensor2) # Error
```
> RuntimeError: expected scalar type Long but found Float
So, I changed `1.5` to `1` for `tensor2`, but I get the error shown below:
```python
import torch
from torch import nn
tensor1 = torch.tensor([7.2])
tensor2 = torch.tensor(1) # Here
cel = nn.CrossEntropyLoss()
cel(input=tensor1, target=tensor2) # Error
```
> IndexError: Target 1 is out of bounds.
So, I changed `1` to `0` for `tensor2`, then it works as shown below:
```python
import torch
from torch import nn
tensor1 = torch.tensor([7.2])
tensor2 = torch.tensor(0) # Here
cel = nn.CrossEntropyLoss()
cel(input=tensor1, target=tensor2)
# tensor(0.)
```
Lastly, I changed `[7.2]` to `0` for `tensor1` and `0` to `[7.2]` for `tensor2`, but I get the error shown below:
```python
import torch
from torch import nn
tensor1 = torch.tensor(0) # Here
tensor2 = torch.tensor([7.2]) # Here
cel = nn.CrossEntropyLoss()
cel(input=tensor1, target=tensor2) # Error
```
> IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
In addition, `CrossEntropyLoss()` with a 1D tensor of size 3 and a 0D tensor works as shown below:
```python
import torch
from torch import nn
tensor1 = torch.tensor([7.2, 4.6, 9.3]) # Here
tensor2 = torch.tensor(1) # Here
cel = nn.CrossEntropyLoss()
cel(input=tensor1, target=tensor2)
cel = nn.CrossEntropyLoss()
cel(input=tensor1, target=tensor2)
# tensor(-0.)
```
### Versions
```python
import torch
torch.__version__ # 2.3.1+cu121
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,triaged | low | Critical |
2,472,484,893 | vscode | Error: `No view is registered with id` might be caused by a race condition | > I think `No view is registered with id: cspell.issuesViewByFile` might be caused by a race condition. I just saw it when I installed the latest version of [Code Spell Checker v4.0.6](https://marketplace.visualstudio.com/items?itemName=streetsidesoftware.code-spell-checker) in 1.92.0. Something about the way the Extension Host is restarted seems to have changed.
> The extension first starts the LSP client before registering some views because it needs access to the client. I'll try and change the order to register views as soon as possible. Even so, it is an async function, so there is always a chance of a race condition if the views are expected to exist before the `activate` resolves.
_Originally posted by @Jason3S in https://github.com/microsoft/vscode/issues/224590#issuecomment-2265674035_
I created this as a separate issue because I have seen this pop up with other extensions.
<img width="458" alt="image" src="https://github.com/user-attachments/assets/b483c5f0-72fe-47c7-b1bc-b54fe07348ec">
It happens after clicking `Restart Extensions` button:
<img width="146" alt="image" src="https://github.com/user-attachments/assets/c6d03b3c-3f56-41a9-ba90-a16c5e8a2c59">
I think the race condition happens when:
1. The extension is already installed and running.
2. It has views
3. Was updated but not yet restarted
4. Uses an Async activation function
```
export async function activate(context: ExtensionContext): Promise<ExtensionApi>
```
When the restart happens, the error appears.
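A minimal `asyncio` sketch of the suspected ordering (Python stands in for the extension host; the view id and timings are illustrative): anything registered after the first `await` inside `activate` is not yet visible if the host resolves views right after kicking off activation:

```python
import asyncio

registered_views = set()

async def activate():
    # Simulates an extension's async activate(): the long-running setup
    # (e.g. starting an LSP client) runs before views get registered.
    await asyncio.sleep(0.05)                 # stand-in for client startup
    registered_views.add("issuesViewByFile")  # registration after an await

async def host_restart():
    # The host kicks off activation and resolves the view shortly after,
    # without awaiting activate() -- mirroring the suspected race.
    task = asyncio.create_task(activate())
    await asyncio.sleep(0)   # activation started, but paused at its await
    missing = "issuesViewByFile" not in registered_views
    await task               # once activate() resolves, the view exists
    present = "issuesViewByFile" in registered_views
    return missing, present

missing, present = asyncio.run(host_restart())
print(missing, present)  # True True
```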
I was able to reproduce this with GitLens:
1. Extensions: GitLens -> Install Specific version (I chose 15.2.3).
<img width="598" alt="image" src="https://github.com/user-attachments/assets/67060682-d220-4f5e-b1c5-a953868d9e0e">
2. Restart Extensions:
<img width="295" alt="image" src="https://github.com/user-attachments/assets/198a11d1-a1e8-4561-b902-f158289d3722">
3. Notice: Error doesn't occur.
4. Choose Update then Restart
<img width="287" alt="image" src="https://github.com/user-attachments/assets/2da28dde-7332-4686-a97e-4e70681ad056">
<img width="295" alt="image" src="https://github.com/user-attachments/assets/0f1094e0-d12b-4991-b0b1-36fb8369eb86">
5. Notice: Error
<img width="468" alt="image" src="https://github.com/user-attachments/assets/0dfe96b4-295f-4fda-9933-595685a37f1e">
| bug,workbench-views | low | Critical |
2,472,517,224 | PowerToys | Awake - user setting of 'keep awake on interval' | ### Description of the new feature / enhancement
When selecting Awake from the taskbar and choosing 'keep awake on interval', there are only 30 min, 1 h and 2 h options.
In Settings you can define the length in 'Interval before returning to the previous awakeness state'. This setting is remembered after expiration, so I would like the last-used setting to be selectable in the taskbar menu,
as I often use 4 hours.
### Scenario when this would be used?
To rapidly set the preferred setting when using the PC for presenting in longer meetings.
### Supporting information
_No response_ | Idea-Enhancement,Product-Awake | low | Minor |
2,472,517,560 | ollama | for glm4-9b | Do you have any plans to support the tool-calling function of glm4-9b? | feature request | low | Minor |
2,472,517,667 | transformers | How to use 【examples/pytorch/contrastive-image-text】 to run inference | ### Feature request
I have reviewed the training code for CLIP and successfully executed it. Now, I want to use the obtained model for inference testing.
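While waiting for an official example, the core computation such an inference script would ultimately perform is a scaled cosine similarity between image and text embeddings. A minimal NumPy sketch of that scoring step (illustrative only; the embedding arrays below are random stand-ins for real model outputs, not the actual transformers API):

```python
import numpy as np

def clip_scores(image_embeds: np.ndarray, text_embeds: np.ndarray,
                logit_scale: float = 100.0) -> np.ndarray:
    """CLIP-style logits: L2-normalize both embedding matrices,
    then take a scaled dot product (cosine similarity)."""
    image_embeds = image_embeds / np.linalg.norm(image_embeds, axis=-1, keepdims=True)
    text_embeds = text_embeds / np.linalg.norm(text_embeds, axis=-1, keepdims=True)
    return logit_scale * image_embeds @ text_embeds.T

# Random stand-ins for real encoder outputs (1 image, 3 candidate captions).
rng = np.random.default_rng(0)
img = rng.normal(size=(1, 512))
txt = rng.normal(size=(3, 512))
logits = clip_scores(img, txt)  # shape (1, 3); a higher logit = better match
```

In a real script, `img` and `txt` would come from the trained model's image and text encoders rather than random arrays.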
### Motivation
I would like to test the performance of the model I have trained.
### Your contribution
I hope I can get an example script for inference testing, like the training script below:
```shell
python examples/pytorch/contrastive-image-text/run_clip.py \
    --output_dir ./clip-roberta-finetuned \
    --model_name_or_path ./clip-roberta \
    --data_dir $PWD/data \
    --dataset_name ydshieh/coco_dataset_script \
    --dataset_config_name=2017 \
    --image_column image_path \
    --caption_column caption \
    --remove_unused_columns=False \
    --do_train --do_eval \
    --per_device_train_batch_size="64" \
    --per_device_eval_batch_size="64" \
    --learning_rate="5e-5" --warmup_steps="0" --weight_decay 0.1 \
    --overwrite_output_dir \
    --push_to_hub
```
| Feature request | low | Major |
2,472,522,959 | react-native | Touchable is unresponsive during internal ScrollView scroll on iOS | ### Description
I can't press buttons while a carousel elsewhere in the ScrollView is scrolling. This happens only on iOS.
Also, if I touch a button down and keep my finger on it, the button gets unpressed as soon as scrolling starts.
### Steps to reproduce
Press the button during a scroll.
Or touch the button down, keep your finger on it, and watch what happens when scrolling starts.
### React Native Version
0.75.1, 0.74.5
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 14.6.1
CPU: (10) arm64 Apple M1 Max
Memory: 7.89 GB / 64.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 22.6.0
path: /opt/homebrew/bin/node
Yarn:
version: 4.4.0
path: /opt/homebrew/bin/yarn
npm:
version: 10.8.2
path: /opt/homebrew/bin/npm
Watchman:
version: 2024.08.12.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.12.1
path: /usr/local/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.5
- iOS 17.5
- macOS 14.5
- tvOS 17.5
- visionOS 1.2
- watchOS 10.5
Android SDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.18034.62.2411.12169540
Xcode:
version: 15.4/15F31d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.12
path: /usr/bin/javac
Ruby:
version: 2.6.10
path: /Users/bardiamist/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.75.1
wanted: 0.75.1
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: false
```
### Stacktrace or Logs
```text
No crash
```
### Reproducer
https://snack.expo.dev/@bardiamist/nested-auto-scroll?platform=ios
### Screenshots and Videos
https://github.com/user-attachments/assets/d61b9a9a-a1e0-453c-8c6b-b33d51cf825d
| Platform: iOS,Issue: Author Provided Repro,Component: ScrollView | low | Critical |
2,472,546,940 | next.js | New page is called when intercepting routes are terminated before the server action is completed | ### Link to the code that reproduces this issue
https://github.com/JoSuwon/intercepting-routes-with-server-action
### To Reproduce
1. Start the application in development or visit deploy application (https://intercepting-routes-with-server-action.vercel.app/)
2. Click the button to open the modal (this modal is an intercepting route and runs a server action on mount, which returns its result after 3 seconds)
3. Close the intercepting route and return to the main page before the server action finishes (before 3 seconds)
4. The app/page.tsx file is newly rendered on the server. (Check that the page-load time and random number have changed)
`When you go back before the server-action is complete`
https://github.com/user-attachments/assets/83931317-8543-43d5-8071-71e972248e62
`When you go back after the server-action is complete`
https://github.com/user-attachments/assets/7deebe11-78a1-4a5b-b698-de28df7ca7fb
### Current vs. Expected behavior
Even if the intercepting route is closed before the server action completes, I think app/page.tsx should be preserved without being re-rendered.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.0.0: Wed Aug 7 03:08:56 PDT 2024; root:xnu-11215.1.9~22/RELEASE_ARM64_T6020
Available memory (MB): 16384
Available CPU cores: 10
Binaries:
Node: 20.14.0
npm: 10.7.0
Yarn: 1.22.19
pnpm: 9.4.0
Relevant Packages:
next: 14.2.5 // Latest available version is detected (14.2.5).
eslint-config-next: 14.2.5
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Parallel & Intercepting Routes, Partial Prerendering (PPR)
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local), Vercel (Deployed)
### Additional context
Installing next@canary and running dev reproduced the same problem, but its build failed, so the app was deployed to Vercel with version 14.2.5.
deploy link : https://intercepting-routes-with-server-action.vercel.app/ | bug,Parallel & Intercepting Routes,Partial Prerendering (PPR) | low | Critical |
2,472,629,262 | godot | Custom Resources that worked in 4.2.2 do not in 4.3 (does not recognize as resource?) | ### Tested versions
No issue in v4.2.2.stable.official [15073afe3] and previous
First visual(?) errors in v4.3.rc1.official [e343dbbcc]
First game-ending exception occurs in v4.3.rc2.official [3978628c6] and later (rc3 and 4.3 tested)
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1080 (NVIDIA; 32.0.15.6081) - Intel(R) Core(TM) i5-7600 CPU @ 3.50GHz (4 Threads)
### Issue description
Short version, something's wacky with my custom resources since I updated to 4.3 from 4.2.2
I have a very simple resource which looks like this:
```
class_name Infusion
extends Resource
@export var air: int = 0
@export var earth: int = 0
@export var fire: int = 0
@export var ice: int = 0
```
Which is used by another custom resource (CombatCard) as an export variable, like so:
`@export var infusion:= Infusion.new()`
In 4.2.2, this worked as I expected, with CombatCards that did not have a custom Infusion set defaulting to a new Infusion instance with 0s across the board. Additionally, the autocomplete had that custom resource icon (the cube thing) for the resources, as shown.

This is not the case when I load that same project in 4.3.rc1 or later:

You'll notice that the Infusion resource (which extends Resource) has no symbol. Additionally, QuestInfoScreen now shows the icon for ColorRect, which is what that class extends, whereas previously it also had the cube icon. I don't know if that's intentional.
The project still seems to run fine despite this in 4.3.rc1. In 4.3.rc2, rc3, or 4.3, however, it causes the project to crash when run with the following error.


```
Invalid call. Nonexistent function 'new' in base 'GDScript'
at function: @implicit_new
```
### Steps to reproduce
I've been able to reproduce the visual portion of this issue in a brand new project (here, MyCustomResource is another dead simple class that extends Resource and has a single member variable of type int, not even exported)

I've not been able to reproduce the crash yet. I suspect it has something to do with loading or preloading of the parent CombatCard resource, or scenes which use that resource? I don't really know, feel free to ask if you want more info.
My next plan is to try removing the Infusion class and just storing those variables in the parent resource, and see whether anything else is broken. I should also have access to Linux Mint in the short term to check if it's OS related.
If you think an MRP would help, I can try to pare down my main project a bit and upload that. I'm open to suggestions, if there's anything I can do to help let me know, in the meantime I'll carry on development in 4.2.2
### Minimal reproduction project (MRP)
[95789MRP.zip](https://github.com/user-attachments/files/16669153/95789MRP.zip)
EDIT: Did some hacking and slashing on the project, and got an MRP. It probably won't run _well_, but the important thing is that it results in the issue for me (before any other issue, anyway)
Step 1: Open Project
Step 2: Run Project
Project will throw the error before showing the main menu | bug,topic:gdscript,regression | low | Critical |
2,472,668,163 | flutter | [web] Paint and Path memory is not reclaimed fast enough | ### Steps to reproduce
1. Create new Flutter project: `flutter create bug && cd bug`
2. Add a dependency on `pkg:vector_math`: `flutter pub add vector_math` (dependency on `vector_math` is almost certainly not needed but it makes the repro code cleaner)
3. Replace `lib/main.dart` with the code below (in code sample section).
4. Run in any non-web target (e.g. `flutter run -d macos` or `flutter build macos`) to ensure there is no memory leak (memory stays at a certain level):

5. Run on the web (e.g. `flutter build web`) in any configuration (release / profile / debug, wasm or not, etc.)
### Expected results
Used memory doesn't grow.
### Actual results
Memory used by the web app goes up linearly at several megabytes per second:

After a couple of minutes, the memory consumed reaches 1.0 GB and keeps growing:

I tried debugging this through Dart DevTools but Dart DevTools don't have access to memory info for web apps. All I can do is use Chrome DevTools. I can't say I understand what I'm looking at in Chrome DevTools, but here are some things I noticed:
1. Basically all the RAM is taken by some `JSArrayBufferData` object:

2. This object seems to have something in common with a `flutterCanvasKit` object, and specifically with something called `globalThis`.

Snapshot file from Chrome DevTools: [Heap-20240818T130707.heapsnapshot.zip](https://github.com/user-attachments/files/16657126/Heap-20240818T130707.heapsnapshot.zip)
I initially thought this had something to do with Flame but as you can see from the sample, the same issue appears on vanilla Flutter as well. // cc @zoeyfan
### Code sample
<details open><summary>Code sample</summary>
It's possible that the code could be made simpler but before I invest that kind of work, I'd like to make sure this is actually a bug and that I'm not overlooking something obvious or doing something stupid.
```dart
import 'dart:math';
import 'package:flutter/material.dart';
import 'package:flutter/scheduler.dart';
import 'package:vector_math/vector_math_64.dart' hide Colors;
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: const MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatelessWidget {
final String title;
const MyHomePage({super.key, required this.title});
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
backgroundColor: Theme.of(context).colorScheme.inversePrimary,
title: Text(title),
),
body: const Center(
child: MemoryPressureWidget(),
),
);
}
}
class MemoryPressureWidget extends StatefulWidget {
const MemoryPressureWidget({super.key});
@override
State<MemoryPressureWidget> createState() => _MemoryPressureWidgetState();
}
class _MemoryPressureWidgetState extends State<MemoryPressureWidget>
with SingleTickerProviderStateMixin {
late AnimationController _controller;
final List<PairedWanderer> wanderers = [];
@override
Widget build(BuildContext context) {
return Stack(
children: [
for (final wanderer in wanderers)
PairedWandererWidget(wanderer: wanderer),
],
);
}
@override
void dispose() {
_controller.dispose();
super.dispose();
}
@override
void initState() {
super.initState();
_createBatch(1000, const Size(500, 500));
_controller = AnimationController(vsync: this);
}
void _createBatch(int batchSize, Size worldSize) {
assert(batchSize.isEven);
final random = Random(42);
for (var i = 0; i < batchSize / 2; i++) {
final a = PairedWanderer(
velocity: (Vector2.random() - Vector2.all(0.5))..scale(100),
worldSize: worldSize,
position: Vector2(worldSize.width * random.nextDouble(),
worldSize.height * random.nextDouble()),
);
final b = PairedWanderer(
velocity: (Vector2.random() - Vector2.all(0.5))..scale(100),
worldSize: worldSize,
position: Vector2(worldSize.width * random.nextDouble(),
worldSize.height * random.nextDouble()),
);
a.otherWanderer = b;
b.otherWanderer = a;
wanderers.add(a);
wanderers.add(b);
}
}
}
class PairedWanderer {
PairedWanderer? otherWanderer;
final Vector2 position;
final Vector2 velocity;
final Size worldSize;
PairedWanderer({
required this.position,
required this.velocity,
required this.worldSize,
});
void update(double dt) {
position.addScaled(velocity, dt);
if (otherWanderer != null) {
position.addScaled(otherWanderer!.velocity, dt * 0.25);
}
if (position.x < 0 && velocity.x < 0) {
velocity.x = -velocity.x;
} else if (position.x > worldSize.width && velocity.x > 0) {
velocity.x = -velocity.x;
}
if (position.y < 0 && velocity.y < 0) {
velocity.y = -velocity.y;
} else if (position.y > worldSize.height && velocity.y > 0) {
velocity.y = -velocity.y;
}
}
}
class PairedWandererWidget extends StatefulWidget {
final PairedWanderer wanderer;
const PairedWandererWidget({required this.wanderer, super.key});
@override
State<PairedWandererWidget> createState() => _PairedWandererWidgetState();
}
class _PairedWandererWidgetState extends State<PairedWandererWidget>
with SingleTickerProviderStateMixin {
late Ticker _ticker;
Duration _lastElapsed = Duration.zero;
@override
void initState() {
super.initState();
_ticker = createTicker(_onTick);
_ticker.start();
}
@override
void dispose() {
_ticker.dispose();
super.dispose();
}
@override
Widget build(BuildContext context) {
return Positioned(
left: widget.wanderer.position.x - 128 / 4,
top: widget.wanderer.position.y - 128 / 4,
child: const SizedBox(
width: 8,
height: 8,
child: Placeholder(),
),
);
}
void _onTick(Duration elapsed) {
var dt = (elapsed - _lastElapsed).inMicroseconds / 1000000;
dt = min(dt, 1 / 60);
widget.wanderer.update(dt);
_lastElapsed = elapsed;
setState(() {});
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
The code sample is compatible with Dartpad.
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
flutter doctor -v
[✓] Flutter (Channel stable, 3.24.0, on macOS 14.4.1 23E224 darwin-arm64,
locale en-US)
• Flutter version 3.24.0 on channel stable at
/Users/filiph/fvm/versions/stable
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 80c2e84975 (3 weeks ago), 2024-07-30 23:06:49 +0700
• Engine revision b8800d88be
• Dart version 3.5.0
• DevTools version 2.37.2
[✓] Android toolchain - develop for Android devices (Android SDK version
35.0.0)
• Android SDK at /Users/filiph/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /Applications/Android
Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build
17.0.7+0-17.0.7b1000.6-10550314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.0
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2023.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build
17.0.7+0-17.0.7b1000.6-10550314)
[✓] IntelliJ IDEA Ultimate Edition (version 2023.3.2)
• IntelliJ at /Users/filiph/Applications/IntelliJ IDEA Ultimate.app
• Flutter plugin version 78.4.2
• Dart plugin version 233.15271
[✓] VS Code (version 1.87.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.84.0
[✓] Connected device (3 available)
• macOS (desktop) • macos • darwin-arm64
• macOS 14.4.1 23E224 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin
• macOS 14.4.1 23E224 darwin-arm64
• Chrome (web) • chrome • web-javascript
• Google Chrome 127.0.6533.120
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| c: performance,platform-web,perf: memory,has reproducible steps,P1,team-web,triaged-web,found in release: 3.24,found in release: 3.25 | medium | Critical |
2,472,691,528 | PowerToys | View desktop folders in a simplified view like a Start Menu Pinned folder | ### Description of the new feature / enhancement
I'd like a feature where you can set Desktop folders to open into a simplified window of shortcuts similar to Start Menu Pinned folders. Think of this as a crossover between the Start Menu Pinned folders and [Stardock's Fences](https://www.stardock.com/products/fences/).
### Scenario when this would be used?
- On the desktop, of course.
- This is important for me since Fences is shareware and a competing product called [Portals](https://portals-app.com/) is donationware, and I couldn't find any open-source alternatives to either (both are proprietary). I want an open-source variant.
- This is also useful if someone doesn't want to have File Explorer opened when trying to group Desktop shortcuts into folders.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,472,698,495 | ant-design | Date picker with more than one month in calendar | ### What problem does this feature solve?
It would be nice if the calendar dropdown had the option to show two months next to each other, similar to the range picker. This would allow the user to quickly choose a date in the next/previous month instead of using the buttons at the top to navigate.
### What does the proposed API look like?
Probably an extra property like displayMonths?
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive | low | Minor |
2,472,699,231 | pytorch | Pytorch Mobile - Different output on Android and PC with the same model | ### 🐛 Describe the bug
Hi!
I've trained EfficientNetB0 model from timm and converted it to .ptl with jit.trace. Here is my code:
```
dummy_input = torch.randn(1,3,256,256)
traced_script_module = torch.jit.trace(model, dummy_input)
traced_script_module_optimized = optimize_for_mobile(traced_script_module)
traced_script_module_optimized._save_for_lite_interpreter("./models/classifier_best.ptl")
```
Then I've loaded my model into "/assets" and tried to forward an imageBitmap to it. Here is the code:
```
try {
module = Module.load(assetFilePath(TestModelActivity.this, "classifier_best.ptl"));
} catch (IOException e) {
throw new RuntimeException(e);
}
try {
bitmap = BitmapFactory.decodeStream(getAssets().open("solara_cropped.jpg"));
} catch (IOException e) {
throw new RuntimeException(e);
}
Tensor inputTensor = TensorImageUtils.bitmapToFloat32Tensor(bitmap,
TensorImageUtils.TORCHVISION_NORM_MEAN_RGB, TensorImageUtils.TORCHVISION_NORM_STD_RGB,
MemoryFormat.CHANNELS_LAST);
Tensor outputTensor = module.forward(IValue.from(inputTensor)).toTensor();
float[] scores = outputTensor.getDataAsFloatArray();
```
The results are different. What am I doing wrong? Or is it a bug?
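One common source of divergence (assuming the weights themselves are identical) is preprocessing: as far as I know, `bitmapToFloat32Tensor` scales pixels to 0..1 and applies the given mean/std normalization, but does not resize, so if the PC side resizes to 256×256 while the bitmap has a different size, the inputs already differ. A minimal NumPy sketch of the normalization for comparing against a PC pipeline (an approximation for debugging, not the actual library code):

```python
import numpy as np

# TorchVision's ImageNet normalization constants, matching
# TensorImageUtils.TORCHVISION_NORM_MEAN_RGB / _STD_RGB.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def bitmap_to_float32(rgb_uint8: np.ndarray) -> np.ndarray:
    """Approximate bitmapToFloat32Tensor: scale 0..255 to 0..1, then
    normalize per channel. Input is HWC uint8, output is CHW float32."""
    x = rgb_uint8.astype(np.float32) / 255.0
    x = (x - MEAN) / STD
    return np.transpose(x, (2, 0, 1))  # HWC -> CHW

img = np.full((256, 256, 3), 128, dtype=np.uint8)  # dummy mid-gray image
tensor = bitmap_to_float32(img)
```

Feeding the same dummy array through both pipelines should make any resize or channel-order mismatch visible immediately.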
### Versions
collect_env.py does nothing but here are my versions
PC:
Python 3.11.9
Pytorch 2.0.1+cu117
timm 0.9.2
Android:
implementation("org.pytorch:pytorch_android_lite:1.13.0") // tried without lite, nothing changed
implementation("org.pytorch:pytorch_android_torchvision_lite:1.13.0") // tried without lite, nothing changed
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit,oncall: mobile | low | Critical |
2,472,722,322 | godot | Linux: When dragging files Godot can't track the mouse position | ### Tested versions
- Reproducible in 4.3-stable, might also affect other 4.x versions
### System information
Godot v4.3.stable - Arch Linux #1 SMP PREEMPT_DYNAMIC Thu, 15 Aug 2024 00:25:30 +0000 - Wayland - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3080 Laptop GPU - 11th Gen Intel(R) Core(TM) i9-11900K @ 3.50GHz (16 Threads)
### Issue description
So this is a bug I found, as I don't think this is intended behavior.
When dragging files on Linux (not tested on other OSes) any mouse-related signals on the Control class aren't emitted when they should be.
Note that Node2D.get_global_mouse_position() also doesn't update the position. Godot seems to be unable to track the mouse position when files are dragged.
From my testing, this affects both Wayland and X11.
My expectation is that these mouse-related signals are still emitted even when dragging files.
Why would I need this? To make essentially just an area where a file can be dropped, instead of the whole window. But since tracking the mouse doesn't work, that is much harder (still technically possible using a Window).
### Steps to reproduce
1. Create a project
2. Connect the signals "mouse_entered" and/or "mouse_exited" of any Control Node to a function
3. print() any feedback
4. Drag files from the system file manager and check the output
### Minimal reproduction project (MRP)
[drop-bug.zip](https://github.com/user-attachments/files/16657448/drop-bug.zip)
| bug,platform:linuxbsd,needs testing,topic:gui | low | Critical |
2,472,731,644 | TypeScript | [tsserver] Make "configure excludes" warning more debuggable | ### 🔍 Search Terms
To enable project-wide JavaScript/TypeScript language features
exclude large folders
configure excludes
### ✅ Viability Checklist
- [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [X] This wouldn't change the runtime behavior of existing JavaScript code
- [X] This could be implemented without emitting different JS based on the types of the expressions
- [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [X] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [X] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
Make "configure excludes" warning more debuggable
# Context
In some scenarios, one may get the following warning in VSCode from tsserver, together with "Configure excludes" button in the status bar in bottom-right:
> To enable project-wide JavaScript/TypeScript language features, exclude large folders with source files that you do not work on.
When this happens, "find references" and other VSCode features do not work.
# Problem
This warning is not very debuggable; the burden is on the user to guess possible causes. Maybe they checked in some file(s) recently that trigger it; maybe they did a local build which produced a lot of artifacts that are picked up by `include` but not covered by the `exclude` list, etc.
In a huge monorepo, finding this out is difficult.
Also it's not clear what is the root cause of this warning:
- is it about _large number_ of files?
- is it about _large size_ of files? (individual files size? sum?)
- is it about `.js` files only? `.ts` files? other types?
In my `tsserver.log` (which is 100k+ lines), I found some `largeFileReferenced` logs; I fixed those, but it didn't fix the issue. I also found some `Non TS file size exceeded limit` logs. But in any case these are just random guesses, and I'm not sure they are linked to the problem.
# Solution
- The warning in VSCode should be more actionable / debuggable. Perhaps it should link to a help article or a GH issue?
- There should be enough info in `tsserver.log` to make it debuggable.
- The help article should explain how to analyze tsserver.log to find relevant info.
### 📃 Motivating Example
Past discussions where people report the warning but don't provide repro (probably because they don't understand why the issue happens), example: https://github.com/microsoft/TypeScript/issues/53492
### 💻 Use Cases
1. What do you want to use this for?
When I get the warning about "configure excludes" I want to know exactly what to look for. Should I look for large files? (define how large?) Should I look for large folders? (define how large?)
2. What shortcomings exist with current approaches?
3. What workarounds are you using in the meantime?
Random guesses + `git bisect`.
| Suggestion,Awaiting More Feedback | low | Critical |
2,472,764,600 | ollama | Feature Request: Adding FalconMamba 7B Instruct in `ollama` | FalconMamba is being added in llama.cpp here: https://github.com/ggerganov/llama.cpp/pull/9074. It would be nice to have the first SSM-based LLM on ollama!
Instruct weights: https://huggingface.co/tiiuae/falcon-mamba-7b-instruct
GGUF weights: https://huggingface.co/collections/tiiuae/falconmamba-7b-66b9a580324dd1598b0f6d4a | model request | low | Major |
2,472,786,035 | bitcoin | TSAN/MSAN fails with vm.mmap_rnd_bits=32 even with llvm 18.1.3 | The Cirrus CI on my fork of the repo runs on Ubuntu 24.04 with kernel version 6.8.0-38. This has `vm.mmap_rnd_bits=32` set, which causes the TSAN and MSAN jobs to fail.
See:
TSAN: https://cirrus-ci.com/task/6619444124844032
```
FAIL: minisketch/test
=====================
ThreadSanitizer: CHECK failed: tsan_platform_linux.cpp:282 "((personality(old_personality | ADDR_NO_RANDOMIZE))) != ((-1))" (0xffffffffffffffff, 0xffffffffffffffff) (tid=42931)
FAIL minisketch/test (exit status: 139)
FAIL: univalue/test/object
==========================
ThreadSanitizer: CHECK failed: tsan_platform_linux.cpp:282 "((personality(old_personality | ADDR_NO_RANDOMIZE))) != ((-1))" (0xffffffffffffffff, 0xffffffffffffffff) (tid=42964)
FAIL univalue/test/object (exit status: 139)
FAIL: qt/test/test_bitcoin-qt
=============================
ThreadSanitizer: CHECK failed: tsan_platform_linux.cpp:282 "((personality(old_personality | ADDR_NO_RANDOMIZE))) != ((-1))" (0xffffffffffffffff, 0xffffffffffffffff) (tid=42994)
FAIL qt/test/test_bitcoin-qt (exit status: 139)
```
MSAN: https://cirrus-ci.com/task/4578750543691776
```
unning tests: base58_tests from test/base58_tests.cpp
Running tests: base64_tests from test/base64_tests.cpp
MemorySanitizer: CHECK failed: msan_linux.cpp:192 "((personality(old_personality | ADDR_NO_RANDOMIZE))) != ((-1))" (0xffffffffffffffff, 0xffffffffffffffff) (tid=22112)
<empty stack>
make[3]: *** [Makefile:22563: test/base32_tests.cpp.test] Error 1
make[3]: *** Waiting for unfinished jobs....
MemorySanitizer: CHECK failed: msan_linux.cpp:192 "((personality(old_personality | ADDR_NO_RANDOMIZE))) != ((-1))" (0xffffffffffffffff, 0xffffffffffffffff) (tid=22137)
<empty stack>
```
This job was from mid-July. Just in case, I reproduced it against today's master: https://github.com/Sjors/bitcoin/pull/57 / https://cirrus-ci.com/task/4886869396160512
My (limited) understanding is that the underlying issue should have been fixed and the fix has been backported to llvm 18.1.3: https://github.com/google/sanitizers/issues/1614#issuecomment-2010316781
Ubuntu 24.04 has shipped that version since early July: https://launchpad.net/ubuntu/noble/amd64/clang-18
I can see in the CI log this this version was indeed used:
```
Get:123 http://archive.ubuntu.com/ubuntu noble-updates/main amd64 libllvm18 amd64 1:18.1.3-1ubuntu1 [27.5 MB]
```
Although I can trivially work around the issue by setting `vm.mmap_rnd_bits=28`, perhaps there is a deeper issue worth investigating.
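For anyone hitting this in CI, a small pre-flight check along these lines can flag affected hosts before the test suite runs (a sketch; the 28-bit threshold is just the value that works around it for me, not a documented sanitizer limit):

```python
from pathlib import Path
from typing import Optional

def mmap_rnd_bits(path: str = "/proc/sys/vm/mmap_rnd_bits") -> Optional[int]:
    """Read the kernel's mmap ASLR entropy, or None when the sysctl is
    not exposed (non-Linux hosts, containers without /proc/sys)."""
    try:
        return int(Path(path).read_text().strip())
    except (OSError, ValueError):
        return None

def needs_sanitizer_workaround(bits: Optional[int], safe_max: int = 28) -> bool:
    """True when a TSAN/MSAN job on this host likely needs
    `sysctl -w vm.mmap_rnd_bits=28` before running the tests."""
    return bits is not None and bits > safe_max
```

On the Cirrus runner above this would report `needs_sanitizer_workaround(32) == True`, while a host already set to 28 (or one without the sysctl) would pass.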
Possibly related: https://github.com/ClickHouse/ClickHouse/issues/64086 (they also tried 18.1.3 and 18.1.6). | CI failed | low | Critical |