| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,482,995,239 | material-ui | [joy-ui] `--joy-spacing` CSS var no longer injected | ### Steps to reproduce
Link to live example (Any of the Joy UI Codesandbox):
https://codesandbox.io/s/pt8svk?file=/src/Demo.tsx
Steps:
1. Update to latest version of `@mui/joy`
2. `--joy-spacing` CSS var is no longer injected into styles
### Current behavior
`--joy-spacing` CSS var is not injected into styles
### Expected behavior
`--joy-spacing` CSS var is injected into styles
### Context
I used `var(--joy-spacing)` in quite a few of my custom styles rather than `theme -> theme.spacing(x)` to minimize JS and it worked nicely.
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
System:
OS: macOS 14.6.1
Binaries:
Node: 21.6.0 - /usr/local/bin/node
npm: 10.2.4 - /usr/local/bin/npm
pnpm: 9.6.0 - /usr/local/bin/pnpm
Browsers:
Chrome: 128.0.6613.84
Edge: 128.0.2739.42
Safari: 17.6
```
</details>
I used Chrome.
**Search keywords**: joy, spacing, theme | on hold,package: joy-ui,customization: theme | low | Minor |
2,483,020,960 | godot | Window resized on secondary monitor resets to default window size if moved to main display | ### Tested versions
Reproducible on 4.3.stable, and as far back as I can remember (at least 4.2), so I don't think it's a recent regression.
### System information
macOS 14.6.1
### Issue description
Moving a window to a secondary display by setting `window.position` to a position on that display and then changing `window.size` works as expected. However, if you then move this window back to the main display, the window is resized back to the project default window size (set small in the example to see the effect).
https://github.com/user-attachments/assets/dc80d385-5ac8-48e9-b05f-9dcc57a28e0d
If you do the resize on the main monitor you can move the window back and forth to the secondary monitor with no issue.
Same if you manually resize the window on the secondary display before moving it to the main display.
### Steps to reproduce
All that is required is to position and resize a window like so:
```gdscript
func _ready() -> void:
	get_window().position = Vector2(200, 200) # position on secondary display
	get_window().size = Vector2(1000, 1000) # something different from the project default window size
```
Then drag the window back to the main monitor.
However, if you set the size first and then move the window, the issue does not occur. So it's as if setting the window size while it's on a secondary monitor puts it in some invalid size state that is revealed when a move of the window to a different display ends.
### Minimal reproduction project (MRP)
As you can see in the video above only a couple of lines of code are required, apart from having two monitors to test with. I've seen this on macOS only, since it's the only multi-monitor system I have available to test on, so I'm not sure if it's macOS specific or not.
In my case both monitors are hiDPI monitors, but with the same display scale, so it shouldn't be an issue related to trying to switch between monitors with different dpi (which is often a cause for issues like this). | bug,platform:macos,topic:gui | low | Minor |
2,483,049,139 | next.js | `useSearchParams` and `usePathname` do not retain original values when an intercepting route is open | ### Link to the code that reproduces this issue
https://github.com/ali-idrizi/next-intercepting-routes-hooks-reproduction
### To Reproduce
1. `next dev`
2. Click the link in the homepage
3. Notice all values are correct:
```
Props params: {"slug":"some-exmaple-blog-page"}
useParams: {"slug":"some-exmaple-blog-page"}
usePathname: /blog/some-exmaple-blog-page
useSearchParams: {"query":"test"}
```
4. Click the link in the page which renders a `<dialog>` through intercepting routes
5. Under the dialog, notice correct values for props params and `useParams`, but not for `usePathname` and `useSearchParams`:
```
Props params: {"slug":"some-exmaple-blog-page"}
useParams: {"slug":"some-exmaple-blog-page"}
usePathname: /about
useSearchParams: {}
```
https://github.com/user-attachments/assets/6d5b74a5-3a3c-4fc1-9cc3-85edbeb58f8b
### Current vs. Expected behavior
Opening an intercepting route should not affect the router of the original page. Currently, only the `params` prop and the `useParams` hook correctly retain the values.
The values of `usePathname` and `useSearchParams` however are lost. If the page is running conditional logic (similar to the example), then opening an intercepting route changes the content under the dialog.
The issue is also present in the latest canary with React 19.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
Available memory (MB): 38098
Available CPU cores: 24
Binaries:
Node: 20.10.0
npm: 10.2.3
Yarn: N/A
pnpm: 8.15.1
Relevant Packages:
next: 14.2.6 // Latest available version is detected (14.2.6).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Navigation
### Which stage(s) are affected? (Select all that apply)
next dev (local), next start (local)
### Additional context
_No response_ | bug,Navigation | low | Minor |
2,483,078,242 | pytorch | Very large memory allocation by torch.linalg.qr | ### 🐛 Describe the bug
Working with LLMs, I got a strangely large CUDA OOM error. I was using `torch.svd_lowrank`, which in turn calls `torch._lowrank.get_approximate_basis`. Below I paste the minimal viable code for reproduction, which only calls `torch.linalg.qr`:
```python
import torch
device = torch.device("cuda", 0)
a = 2**13
b = 2**14 + 2**13 + 2**12
size = a * b // 4 # 58_720_256
A = torch.randn(size, 1).to(device)
q = 37
# from torch._lowrank.py: get_approximate_basis()
R = torch.randn(A.shape[-1], q, dtype=A.dtype, device=A.device)
X = A @ R
Q = torch.linalg.qr(X).Q # This causes OOM
```
Full traceback:
```
Traceback (most recent call last):
File "/home/local/hator/large_qr.py", line 15, in <module>
Q = torch.linalg.qr(X).Q
^^^^^^^^^^^^^^^^^^
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate more than 1EB memory.
```
1 EB = 10^9 GB. That is an insane number.
When using q=36, it uses at most around 25 GB on my GPU. For every larger q, the error is as above. Perhaps the number given by the traceback is wrong?
The reason for the A-tensor being shaped like (size, 1) was another bug where I forgot to reshape a flattened tensor, which is how I came across this. Using square tensors I do not get this error; for instance, A.shape = (2^16, 2^16) works fine.
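For reference, a quick back-of-the-envelope calculation (pure Python, no GPU needed) supports the suspicion that the reported number is wrong. Assuming float32 and ignoring any cuSOLVER workspace, the reduced-mode Q that `torch.linalg.qr` returns for this shape needs only a few GB, nowhere near 1 EB:

```python
# Rough memory estimate for the reduced-mode QR of X, where X.shape == (rows, q).
# Assumes float32 (4 bytes per element); numbers are illustrative only.
a = 2**13
b = 2**14 + 2**13 + 2**12
rows = a * b // 4  # 58_720_256, same as `size` in the snippet above
q = 37

bytes_per_float = 4
q_reduced_bytes = rows * q * bytes_per_float  # reduced Q has shape (rows, q)
one_exabyte = 10**18

print(f"reduced Q needs about {q_reduced_bytes / 1e9:.1f} GB")
print(f"1 EB is roughly {one_exabyte // q_reduced_bytes:,}x that")
```

So even accounting for the input, R, and solver scratch space, the requested allocation is about eight orders of magnitude larger than the output itself.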
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 20:50:58) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-117-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 PCIe
GPU 1: NVIDIA H100 PCIe
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6444Y
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 8
CPU max MHz: 4000.0000
CPU min MHz: 800.0000
BogoMIPS: 7200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 64 MiB (32 instances)
L3 cache: 90 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.0
[pip3] triton==3.0.0
[conda] numpy 1.26.4 py312h2809609_0
[conda] numpy-base 1.26.4 py312he1a6c75_0
[conda] torch 2.4.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano | triaged,module: linear algebra | low | Critical |
2,483,148,784 | PowerToys | PowerRename feature cannot be used | ### Microsoft PowerToys version
0.83.0
### Installation method
Microsoft Store
### Running as admin
None
### Area(s) with issue?
PowerRename
### Steps to reproduce
After selecting a file in File Explorer and right-clicking, the context menu that appears has no PowerRename option.

### ✔️ Expected Behavior
The rename feature should be usable as normal.
### ❌ Actual Behavior
It cannot be used.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,483,172,464 | transformers | llama3 position_ids error with left padding | ### Feature request
The LLaMA 3 implementation should generate default `position_ids` that take the `attention_mask` into account.
@ArthurZucker @younesbelkada
### Motivation
Is there a specific reason why the default `position_ids` generation doesn’t consider the `attention_mask`? A friend mentioned that this issue has persisted for almost half a year now.
https://github.com/huggingface/transformers/blob/adb91179b9e867b7278e0130c87558974056c7b4/src/transformers/models/llama/modeling_llama.py#L962
The problem arises when using left padding, as the default position_ids start from the first index in the sequence length rather than from the first non-zero index in the `attention_mask`.
As far as I know, this is handled correctly in the `generate` function, but I would expect consistency during training as well.
https://discuss.huggingface.co/t/llama-position-ids/75870
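To make the intended behavior concrete, here is a small pure-Python emulation (no torch required) of the cumsum-and-clamp computation proposed in this issue, applied to a hypothetical left-padded mask:

```python
# Emulates position_ids = (attention_mask.cumsum(-1) - 1).clamp(min=0),
# followed by masked_fill_ setting padded positions to 0, for one sequence.
def position_ids_from_mask(mask):
    positions = []
    running = 0
    for m in mask:
        running += m
        # Real tokens count from 0 at the first non-pad index; pads get 0.
        positions.append(max(running - 1, 0) if m else 0)
    return positions

# Left padding: two pad tokens, then three real tokens.
mask = [0, 0, 1, 1, 1]
print(position_ids_from_mask(mask))  # [0, 0, 0, 1, 2]
```

With the current default (`position_ids` counted from the start of the sequence), the real tokens would instead see positions 2, 3, 4, which is the inconsistency with `generate` described above.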
### Your contribution
I can submit a PR changing
https://github.com/huggingface/transformers/blob/adb91179b9e867b7278e0130c87558974056c7b4/src/transformers/models/llama/modeling_llama.py#L963
to
```python
position_ids = (attn_mask.cumsum(-1) - 1).clamp(min=0)
position_ids.masked_fill_(attn_mask.to(torch.bool) == 0, 0)
```
if no one has any objections on the correctness of that. Otherwise, let's discuss. | Feature request | low | Critical |
2,483,184,390 | deno | deno fmt (v1.46+): Ignore front matter in HTML files | Thank you very much for adding formatters for HTML, YAML, CSS etc. Fantastic!
If you work with static site generators (my own is based on deno) you can use front matter data (e.g. in YAML, TOML, JSON or JS format) in HTML files. The [11ty documentation](https://www.11ty.dev/docs/data-frontmatter/) provides examples.
Also, template files are often HTML files with a bit of template code ([vento](https://vento.js.org/), nunjucks or liquid for example). These template files can also contain front matter.
It would be excellent if Deno could just ignore front matter -- generally, or via a flag or language setting.
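For illustration, "ignoring front matter" could amount to splitting the file before formatting and re-attaching the block afterwards. A minimal sketch in Python (assuming the common `---` delimiter convention used for YAML front matter; TOML/JSON variants use other fences):

```python
# Split a leading front matter block (delimited by '---' lines) from the
# rest of the file, so a formatter can process only the body.
def split_front_matter(text):
    lines = text.split("\n")
    if lines and lines[0].strip() == "---":
        for i in range(1, len(lines)):
            if lines[i].strip() == "---":
                front = "\n".join(lines[: i + 1])
                body = "\n".join(lines[i + 1 :])
                return front, body
    return "", text  # no (or unterminated) front matter: leave text untouched

doc = "---\ntitle: Home\n---\n<p>hello</p>"
front, body = split_front_matter(doc)
print(repr(front))  # '---\ntitle: Home\n---'
print(repr(body))   # '<p>hello</p>'
```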
#19458 is kind of related.
Thanks a lot! | feat,deno fmt | low | Minor |
2,483,208,951 | pytorch | CMake Error: When installing PyTorch from source, CUDA not being detected. | ### 🐛 Describe the bug
Bug as shown in the title.
Solution: `export CUDA_PATH=~/anaconda3/envs/<env-name>:~/anaconda3/envs/<env-name>/targets/x86_64-linux`
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Anaconda gcc) 11.2.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.31
Python version: 3.11.8 | packaged by conda-forge | (main, Feb 16 2024, 20:53:32) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.4.0-187-generic-x86_64-with-glibc2.31
Is CUDA available: N/A
CUDA runtime version: 12.5.40
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA L40
GPU 1: NVIDIA L40
GPU 2: NVIDIA L40
GPU 3: NVIDIA L40
GPU 4: NVIDIA L40
GPU 5: NVIDIA L40
GPU 6: NVIDIA L40
GPU 7: NVIDIA L40
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn.so.8.9.6
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.6
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.6
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.6
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.6
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.6
/usr/local/cuda-12.2/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.6
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 112
On-line CPU(s) list: 0-111
Thread(s) per core: 2
Core(s) per socket: 28
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6348 CPU @ 2.60GHz
Stepping: 6
Frequency boost: enabled
CPU MHz: 2966.079
CPU max MHz: 2601.0000
CPU min MHz: 800.0000
BogoMIPS: 5200.00
Virtualization: VT-x
L1d cache: 2.6 MiB
L1i cache: 1.8 MiB
L2 cache: 70 MiB
L3 cache: 84 MiB
NUMA node0 CPU(s): 0-27,56-83
NUMA node1 CPU(s): 28-55,84-111
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Vulnerable, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==2.1.0
[pip3] optree==0.12.1
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl-include 2024.2.1 pypi_0 pypi
[conda] mkl-static 2024.2.1 pypi_0 pypi
[conda] numpy 2.1.0 pypi_0 pypi
[conda] optree 0.12.1 pypi_0 pypi
[conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi
cc @malfet @seemethere | needs reproduction,module: build,triaged | low | Critical |
2,483,230,954 | PowerToys | sticky or delayed keys | ### Microsoft PowerToys version
0.83.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
General
### Steps to reproduce
Lately the Ctrl or Alt keys have a lag. I notice it when I use them for shortcuts. For instance, I hit Ctrl+V and many times it types V, because it didn't get the Ctrl. But if I slowly repeat the shortcut, it works.
### ✔️ Expected Behavior
that it works properly.
### ❌ Actual Behavior
it doesn't.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Major |
2,483,251,786 | storybook | [Bug]: isTemplate prop on Meta component from @storybook/blocks is missing in TS types | ### Describe the bug
<img width="814" alt="Screenshot 2024-08-23 at 16 10 44" src="https://github.com/user-attachments/assets/b7272332-70e6-4000-ad10-9911d2e90125">
```ts
type MetaProps = BaseAnnotations & {
of?: ModuleExports;
title?: string;
};
```
The prop works at runtime and should be defined in the types, based on https://storybook.js.org/docs/api/doc-blocks/doc-block-meta#istemplate
### Reproduction link
https://github.com/storybookjs/storybook/blob/next/code/lib/blocks/src/blocks/Meta.tsx#L9
### Reproduction steps
_No response_
### System
```bash
Storybook Environment Info:
System:
OS: macOS 14.6.1
CPU: (8) arm64 Apple M1
Shell: 5.9 - /bin/zsh
Binaries:
Node: 22.7.0 - ~/.local/state/fnm_multishells/34287_1724422541447/bin/node
npm: 10.8.2 - ~/.local/state/fnm_multishells/34287_1724422541447/bin/npm <----- active
Browsers:
Chrome: 128.0.6613.85
Safari: 17.6
npmPackages:
@storybook/addon-docs: ^8.2.9 => 8.2.9
@storybook/addon-essentials: ^8.2.9 => 8.2.9
@storybook/addon-webpack5-compiler-babel: ^3.0.3 => 3.0.3
@storybook/blocks: ^8.2.9 => 8.2.9
@storybook/preview-api: ^8.2.9 => 8.2.9
@storybook/react: ^8.2.9 => 8.2.9
@storybook/react-webpack5: ^8.2.9 => 8.2.9
@storybook/test: ^8.2.9 => 8.2.9
storybook: ^8.2.9 => 8.2.9
```
### Additional context
_No response_ | bug,needs triage | low | Critical |
2,483,288,099 | react-native | Application Scene Delegates support in RN >= 0.74 | ### Description
When we added CarPlay support to our React Native app we needed to switch from [App Delegate to Application Scene Delegates](https://medium.com/@Ariobarxan/ios-application-scene-delegate-vs-app-delegate-a-talk-about-life-cycle-a2ecae9d507e).
Independently of whether the app was started on the phone (PhoneScene) or on the CarPlay-client (CarScene), the first code to run natively will always be the AppDelegates `application:didFinishLaunchingWithOptions:` method.
A React Native app usually calls the super-method in its AppDelegate, which is implemented in React Native's own `RCTAppDelegate`. The problem with this is that `RCTAppDelegate` assumes a phone usage and creates a `rootViewController` along with a window for the app to be displayed in. This leads to problems when launching the app on the CarPlay-client first, since CarPlay does not require a rootViewController or a window to display its views.
The key to solving this problem is to split the app initialization logic into PhoneScene and CarScene (which are both subclasses of `UIResponder`) and only run the code required to set up the React Native bridge in the AppDelegate. We can achieve this by not calling the super-method in `application:didFinishLaunchingWithOptions:` but instead creating and calling a custom init method.
Prior to React Native 0.74 this wasn't a problem, since all methods needed for setup were publicly exposed.
Starting with React Native 0.74, the root view is created via `RCTRootViewFactory` with no way of instantiating one from the custom initialization routine in App Delegate.
How do you plan to support Application Scene Delegates in the future?
Are there any options to create a `RCTRootViewFactory` without patching the header file as described [here](https://github.com/birkir/react-native-carplay/pull/158#issuecomment-2302350901)?
Would it be problematic to expose [createRCTRootViewFactory](https://github.com/facebook/react-native/blob/516428771d4e98e1f5d6fcebc1c9a0fcdbd0b3ad/packages/react-native/Libraries/AppDelegate/RCTAppDelegate.mm#L233) in the header, making it accessible from the App Delegate?
### Steps to reproduce
Try setting up a RN 0.74 or 0.75 app via [application scene delegates](https://medium.com/@Ariobarxan/ios-application-scene-delegate-vs-app-delegate-a-talk-about-life-cycle-a2ecae9d507e)
### React Native Version
0.74
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
irrelevant
```
### Stacktrace or Logs
```text
none
```
### Reproducer
none
### Screenshots and Videos
_No response_ | Resolution: Answered | low | Major |
2,483,307,122 | kubernetes | lifecycle hooks forbidden from specifying more than 1 handler | ### What happened?
We are running an application deployment with a `PreStop` hook that has two handlers, `httpGet` and `sleep`.
This is our lifecycle section in the deployment:
```yaml
lifecycle:
  preStop:
    httpGet:
      port: 8080
      path: /stop
    sleep:
      seconds: 5
```
Getting this error while creating a deployment.
`[spec.template.spec.containers[0].lifecycle.preStop.httpGet: Forbidden: may not specify more than 1 handler type, spec.template.spec.containers[0].lifecycle.preStop.sleep: Forbidden: may not specify more than 1 handler type]`
The deployment succeeds only when one handler is specified.
FYI - We are running on `1.30` version of kubernetes and can confirm the feature gate `PodLifecycleSleepAction` is enabled by default.
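For reference, the error matches the API treating a lifecycle hook as a union type where exactly one handler may be set. A rough Python sketch of that validation rule (a hypothetical illustration, not the actual apiserver code):

```python
# A lifecycle hook is a union type: exactly one handler may be set.
HANDLER_FIELDS = ("exec", "httpGet", "tcpSocket", "sleep")

def validate_hook(hook):
    set_handlers = [f for f in HANDLER_FIELDS if hook.get(f) is not None]
    if len(set_handlers) == 0:
        return ["must specify a handler type"]
    if len(set_handlers) > 1:
        return [f"{f}: Forbidden: may not specify more than 1 handler type"
                for f in set_handlers]
    return []

pre_stop = {"httpGet": {"port": 8080, "path": "/stop"}, "sleep": {"seconds": 5}}
print(validate_hook(pre_stop))  # both handlers flagged, as in the error above
```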
### What did you expect to happen?
It should accept the two handlers as part of `kind: Deployment`.
### How can we reproduce it (as minimally and precisely as possible)?
Adding the above mentioned `lifecycle` section to the `deployment` spec.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.31.0
Kustomize Version: v5.4.2
Server Version: v1.30.2-eks-db838b0
```
</details>
### Cloud provider
<details>
AWS
</details>
### OS version
<details>
Bottlerocket OS 1.20.5 (aws-k8s-1.30)
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
containerd://1.6.34+bottlerocket
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| priority/backlog,kind/documentation,sig/docs,needs-kind,triage/accepted | low | Critical |
2,483,320,438 | deno | Running "create-next-app" without permissions flags causes terminal to crash | Version: Deno 1.46.0
If I try to run
```
deno run npm:create-next-app@latest my-next-app
```
and walk through the permission prompts, the program hangs after the final prompt and crashes the terminal.
Note: `deno run -A npm:create-next-app@latest my-next-app` runs absolutely fine. | bug,permissions | low | Critical |
2,483,323,817 | PowerToys | Advanced Paste clipboard - line break (line folding) gets copied | ### Microsoft PowerToys version
0.83.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Advanced Paste
### Steps to reproduce
**Line breaks** get copied with the **Advanced Paste clipboard**. It would be great if this could be stopped. Thanks for the great work so far! You're awesome!!!
### ✔️ Expected Behavior
Line Breaks should not be copied
### ❌ Actual Behavior
Line Breaks are copied
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Product-Advanced Paste | low | Minor |
2,483,382,575 | ollama | Llama3.1 template doesn't work well with multi function calling as well as Environment: ipython mode | ### What is the issue?
## Tool descriptions
The current template checks if the final message is of Role "user" to decide whether to add the tool descriptions to it:
```go
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}<|start_header_id|>user<|end_header_id|>
{{- if and $.Tools $last }}
...
```
In a multi-function use case, however, where the last two messages are assistant and tool and we would want the assistant to continue and use another tool (instead of giving the final response), the tool descriptions aren't added anywhere, because the last message isn't a user message.
I get that this behavior is actually OK for a single function-calling use case, where the assistant doesn't need to know the tools for generating the final response, but for multi-function calling this makes it completely broken, as the assistant won't know what tools exist and how to use them for the second function call.
My proposed solution is:
```go
{{- $lastUserIdx := -1 }}
{{- range $i, $_ := .Messages }}
{{- with eq .Role "user" }}
{{- $lastUserIdx = $i }}{{ end }}
{{- end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}<|start_header_id|>user<|end_header_id|>
{{- if and $.Tools (eq $i $lastUserIdx) }}
...
```
This adds the descriptions to the last user message, which doesn't necessarily have to be the last message overall.
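The selection logic the proposal implements (find the index of the last user message, which need not be the last message overall) is easier to see in plain Python; this is a sketch for clarity, not Ollama code:

```python
# Find the index of the last message with role "user"; tool descriptions
# should be attached there even when assistant/tool messages follow it.
def last_user_index(messages):
    idx = -1
    for i, msg in enumerate(messages):
        if msg["role"] == "user":
            idx = i
    return idx

messages = [
    {"role": "user", "content": "What is the weather in Paris and Rome?"},
    {"role": "assistant", "content": "", "tool_calls": ["get_weather(Paris)"]},
    {"role": "tool", "content": "18C, sunny"},
]
print(last_user_index(messages))  # 0 -- not the last message, so the current
                                  # template never attaches the tools
```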
## Usage of `<|eom_id|>` token
According to the Meta documentation <https://llama.meta.com/docs/model-cards-and-prompt-formats/> (the Meta Llama docs are down at the time of writing), when using `Environment: ipython`, after the assistant calls a tool it is recommended to use an `<|eom_id|>` token instead of an `<|eot_id|>` token, in order to signify that a tool response is expected next. When the assistant generates the final response, it then needs to be `<|eot_id|>`. I have checked the `tokenizer_config.json` jinja2 template, and this is how it is done there, but the Ollama template doesn't do the same; instead, it doesn't add any token after the assistant tool call.
Parts of jinja2 template that does this:
```jinja2
...
{%- if builtin_tools is defined or tools is not none %}
{{- "Environment: ipython\n" }}
...
{%- for message in messages %}
...
{%- if builtin_tools is defined %}
{#- This means we're in ipython mode #}
{{- "<|eom_id|>" }}
{%- else %}
{{- "<|eot_id|>" }}
{%- endif %}
...
```
What needs to be added to the Ollama template (the `<|eom_id|>` at the end):
```go
{{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "parameters": {{ .Function.Arguments }}}{{ end }}<|eom_id|>
```
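The end-of-message token choice boils down to a simple rule: `<|eom_id|>` after an assistant turn that issues tool calls, `<|eot_id|>` otherwise. Sketched in Python for clarity (an illustration of the rule, not Ollama's renderer):

```python
# In ipython mode, an assistant message that issues tool calls should end
# with <|eom_id|> (a tool response is expected next); a final answer ends
# with <|eot_id|>.
def end_token(message, ipython=True):
    if ipython and message.get("tool_calls"):
        return "<|eom_id|>"
    return "<|eot_id|>"

call = {"role": "assistant", "tool_calls": [{"name": "get_weather"}]}
final = {"role": "assistant", "content": "It is sunny in Paris."}
print(end_token(call))   # <|eom_id|>
print(end_token(final))  # <|eot_id|>
```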
## Conclusion
These two things lead to poorer performance when using Llama 3.1 in Ollama for multi-function calling via the `chat` API. I believe they should be addressed and fixed in order to achieve the intended quality and results from Llama 3.1.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.5 | bug | low | Major |
2,483,442,014 | godot | Show a selected Bone Gizmo when Clicking on it keyframe | ### Tested versions
Godot v4.3.stable
### System information
Ubuntu 24.04 LTS 24.04 - Wayland - GLES3 (Compatibility) - Mesa Intel(R) HD Graphics 2000 (SNB GT1) - Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz (4 Threads)
### Issue description
[Screencast from 2024-08-23 16-28-05.webm](https://github.com/user-attachments/assets/736deec5-da14-4424-bfb3-d5095085ecc9)
Animating inside Godot is pretty annoying after the Skeleton3D loses focus:
1 - it disables edit mode
2 - clicking on a specific keyframe belonging to a bone rotation ... will do nothing (even if the Skeleton3D is in edit mode)
### Steps to reproduce

clicking on a keyframe should show a Gizmo for the selected bone
currently, it does nothing
### Minimal reproduction project (MRP)
N/A | feature proposal,topic:editor,topic:animation,topic:3d | low | Minor |
2,483,505,955 | vscode | Build: Should be able to run `Web` only builds | Several times this week in `Publish Build > Process Artifacts` i've seen this timeout error after ~3 hours:

```
...
Stages in progress: CompileCLI
Stages in progress: CompileCLI
Stages in progress: CompileCLI
Stages in progress: CompileCLI
Stages in progress: CompileCLI
Stages in progress: CompileCLI
Stages in progress: CompileCLI
Stages in progress: CompileCLI
##[error]The Operation will be canceled. The next steps may not contain expected logs.
##[error]The operation was canceled.
```
Build: https://dev.azure.com/monacotools/a6d41577-0fa3-498e-af22-257312ff0545/_build/results?buildId=289508
Changes: https://github.com/Microsoft/vscode/compare/00082d1...00082d1
I've seen this several times over the course of the week, in builds:
- https://dev.azure.com/monacotools/a6d41577-0fa3-498e-af22-257312ff0545/_build/results?buildId=288906
| feature-request,vscode-build | low | Critical |
2,483,522,623 | godot | Godot 4.3 Android export: Mismatching NDK/Platform API versions | ### Tested versions
Godot 4.3, as each Godot have its own Android export templates.
### System information
Linux ubuntu 24.04 - Godot 4.3 -
### Issue description
When configuring Android Studio (latest Android Studio Koala | 2024.1.1 Patch 1) with the required versions of the SDK/NDK, referring to the doc:
[https://docs.godotengine.org/en/4.3/tutorials/export/exporting_for_android.html](https://docs.godotengine.org/en/4.3/tutorials/export/exporting_for_android.html)
where we can read:
```
- Android SDK Platform-Tools version 34.0.0 or later
- Android SDK Build-Tools version 34.0.0
- Android SDK Platform 34
...
- NDK version r23c (23.2.8568313)
```
we get that error message:
```
AGPBI: {"kind":"warning","text":"C/C++: Platform version 33 is beyond 31, the maximum API level supported by this NDK. Using 31 instead.","sources":[{}]}
C/C++: Platform version 33 is beyond 31, the maximum API level supported by this NDK. Using 31 instead.
```
As far as I can tell while playing with GDExtension, I can't test my project on arm64 (arm32 is fine); I get the error below, and I suspect the platform version (31, while the Godot Android export is compiled against 33):
```
adb: failed to install /home/rtrave/.cache/godot/tmpexport.1724430902.apk: Failure [INSTALL_FAILED_NO_MATCHING_ABIS: Failed to extract native libraries, res=-113]
```
thanks for reading,
and for all the good work of the Godot Team.
### Steps to reproduce
Simply create a project in Android Studio, and then configure the SDK/NDK to the versions required by Godot.
Then get the message
```
AGPBI: {"kind":"warning","text":"C/C++: Platform version 33 is beyond 31, the maximum API level supported by this NDK. Using 31 instead.","sources":[{}]}
C/C++: Platform version 33 is beyond 31, the maximum API level supported by this NDK. Using 31 instead.
```
### Minimal reproduction project (MRP)
Not pertinent I think, but I can try to make a simple test if needed.
N/A | platform:android,needs testing,topic:export | low | Critical |
2,483,537,749 | vscode | Behavioral change in quick pick with selected items | @TylerLeonhardt I observe a behavioral change in the quick pick in the last milestone. I am not able to find out what caused this in the quick pick component. I can only explain the following:
- When a checkbox is changed, following function is called
https://github.com/microsoft/vscode/blob/5a0335dcf3ffe79466b90d7c24923f3158fac81a/src/vs/workbench/services/userDataProfile/browser/userDataProfileImportExportService.ts#L435-L449
- This function updates the description of items and call following `update` function
https://github.com/microsoft/vscode/blob/5a0335dcf3ffe79466b90d7c24923f3158fac81a/src/vs/workbench/services/userDataProfile/browser/userDataProfileImportExportService.ts#L414-L417
- This function sets the items in the quick pick to update the description.
```ts
quickPick.items = resources;
```
🐛 This triggers the `onDidChangeSelection` event on the quick pick with no items selected, which automatically deselects all items and causes the bug. Note that this event was not triggered before (`1.91`).
Please take a look and let me know if there is something I should adopt as a consumer.
_Originally posted by @sandy081 in https://github.com/microsoft/vscode/issues/224788#issuecomment-2295375822_
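Until this is fixed, one possible consumer-side mitigation (a sketch only, not an endorsed API pattern) is to snapshot the selection before swapping `items` and restore it afterwards, so the spurious empty-selection event cannot wipe the user's choices. The `QuickPickLike` interface and the `id` field below are hypothetical stand-ins for the real `QuickPick` API, used so the guard logic can be shown in isolation:

```typescript
// Minimal stand-in for the parts of the QuickPick API this sketch needs.
interface QuickPickLike<T> {
  items: readonly T[];
  selectedItems: readonly T[];
}

// Update the items while preserving the selection: snapshot the selected
// keys, swap the items, then restore the selection by matching on a
// stable key (here: a hypothetical `id` field on each item).
function updateItemsPreservingSelection<T extends { id: string }>(
  quickPick: QuickPickLike<T>,
  newItems: readonly T[],
): void {
  const selectedIds = new Set(quickPick.selectedItems.map((i) => i.id));
  quickPick.items = newItems;
  quickPick.selectedItems = newItems.filter((i) => selectedIds.has(i.id));
}
```

This only works around the symptom; the underlying event behavior would still differ from `1.91`.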
| bug,quick-pick | low | Critical |
2,483,537,948 | godot | Subsurface scattering breaks rendering when used on a Subviewport with transparent background enabled | ### Tested versions
Reproducible in 4.3.stable. I've had this issue since 4.1.stable; I don't know if it was already happening on prior versions (it probably was).
### System information
PopOS! 22.04 LTS - Godot v4.3.stable (77dcf97d8) - Freedesktop SDK 23.08 (Flatpak runtime) - X11 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1080 (nvidia) - AMD Ryzen 5 2600 Six-Core Processor (12 Threads)
### Issue description
When using a material with subsurface scattering enabled inside a Subviewport with transparent background enabled, only meshes whose material has Alpha enabled will be rendered. Materials with Alpha disabled won't be rendered at all.
### Steps to reproduce
- Setup a 3D scene with a Subviewport with `Transparent BG` enabled.
- Add two meshes of your preference as children of the subviewport.
- Add a `StandardMaterial3D` to both meshes. In one of them, enable `Subsurface Scattering` and set its strength to 1, and enable `Alpha`. Leave the other with the default values.
- Setup a `Camera3D` so you can view both meshes.
- Play the project or preview the camera/subviewport. Only the mesh with alpha enabled will be visible. This is true for any mesh; it doesn't have to be the same one that has subsurface scattering enabled. If you enable alpha for the other one, it will also be rendered normally.
### Minimal reproduction project (MRP)
[subviewport-alpha-repro.zip](https://github.com/user-attachments/files/16731742/subviewport-alpha-repro.zip)
| bug,topic:rendering | low | Minor |
2,483,541,023 | godot | Crash when debugging with neovim from a use-after-free with List | ### Tested versions
Reproducible in 4.4.dev master
Reproducible in 4.3.stable
Not reproducible in 4.2.2.stable
### System information
Linux (I'm writing this on mobile)
### Issue description
https://media.discordapp.net/attachments/669162571864997888/1276353485256003584/2024-08-23_03-30-07.mp4?ex=66c9e110&is=66c88f90&hm=907d27b127b0c4b40f176fb06384bb3dc19e90d2aeca98ecaa444890cce7c5c2&
https://media.discordapp.net/attachments/669162571864997888/1276341929654747206/image.png?ex=66c9d64d&is=66c884cd&hm=c9f6683e881bed98685672f9e10bc375d2b668a2362dd3e1fafb653847ab0c72&

https://media.discordapp.net/attachments/669162571864997888/1276341715594252390/image.png?ex=66c9d61a&is=66c8849a&hm=214b454c49b47884ebee0b95db7dce066bd256bcddfe8f3e151ff9d7b04c6a2c&
### Steps to reproduce
https://media.discordapp.net/attachments/669162571864997888/1276341391143993344/Screencast_from_2024-08-22_22-27-33.webm?ex=66c9d5cc&is=66c8844c&hm=9021406895e2be6d5c18408aac89a0a4c0d3d47fac5493db2a0575c1c3436e67&
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,needs work,topic:thirdparty,needs testing | low | Critical |
2,483,555,857 | terminal | ReadConsole returns TRUE after CancelIoEx | ### Windows Terminal version
Latest source
### Windows build number
10.0.19045.4780
### Other Software
No
### Steps to reproduce
1. Compile the code:
<details>
<summary>Code</summary>
```C++
#ifndef UNICODE
#define UNICODE
#endif
#ifndef _UNICODE
#define _UNICODE
#endif
#include <chrono>
#include <iostream>
#include <thread>
#include <string>
#include <windows.h>
using namespace std::literals;
int main()
{
const auto In = GetStdHandle(STD_INPUT_HANDLE);
std::thread Thread([&]
{
std::this_thread::sleep_for(3s);
CancelIoEx(In, {});
});
std::wcout << L"Do not touch anything for about 3s" << std::endl;
wchar_t Buffer[1024];
DWORD NumberOfCharsRead{};
const auto Result = ReadConsole(In, Buffer, ARRAYSIZE(Buffer), &NumberOfCharsRead, {});
const auto Error = GetLastError();
std::wcout << L"Result: " << Result << std::endl;
std::wcout << L"Error: " << Error << std::endl;
std::wcout << L"Chars: " << NumberOfCharsRead << std::endl;
std::wcout << L"Data: " << std::wstring(Buffer, NumberOfCharsRead) << std::endl;
Thread.join();
std::wcout << L"Now enter something: " << std::endl;
if (ReadConsole(In, Buffer, ARRAYSIZE(Buffer), &NumberOfCharsRead, {}))
{
std::wcout << L"You have entered: " << std::wstring(Buffer, NumberOfCharsRead) << std::endl;
}
}
```
</details>
2. Run it (anywhere: WT, OpenConsole, conhost)
3. Inspect the output
### Expected Behavior
The program reads the console input in a blocking way.
If there is no input, another thread issues CancelIoEx after 3 seconds.
Since the `ReadConsole` did not read anything, it should return FALSE:
- It is logical.
- Even Raymond Chen [says so](https://devblogs.microsoft.com/oldnewthing/20150323-00/?p=44413):
> If you had used ReadFile instead of fgets, the read would have failed with error code ERROR_OPERATION_ABORTED, as documented by CancelIoEx.
So the program should print `Result: 0`.
### Actual Behavior
`ReadConsole` does not read anything, does not update `NumberOfCharsRead`, but returns **TRUE**.
It also leaves the input in a somewhat inconsistent state, which you can see by typing something after the cancellation: the first input will be discarded. I think this was already mentioned here: https://github.com/microsoft/terminal/issues/12143#issuecomment-1895629003.
The incorrect return value is much worse though: it is a common pattern to leave `NumberOfCharsRead` uninitialized, because either `ReadConsole` succeeds and initializes it, or it fails and it makes no sense to look there anyway.
In the code above I initialized it, but if I didn't do so an uninitialized read would've occurred, from both `NumberOfCharsRead` and `Buffer`.
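Until the return value is fixed, a defensive calling pattern is to always initialize the out-parameter and treat a "successful" call that read zero characters while the last error is `ERROR_OPERATION_ABORTED` as a cancellation. The sketch below factors that decision into a portable helper; `kErrorOperationAborted` is hard-coded to the winerror.h value (995) so the logic can be shown without `windows.h`:

```cpp
#include <cstdint>

// Portable stand-in for ERROR_OPERATION_ABORTED (winerror.h value 995).
constexpr std::uint32_t kErrorOperationAborted = 995;

enum class ReadOutcome { Data, Cancelled, Failed };

// Classify a ReadConsole result defensively: a TRUE return with zero
// characters read and ERROR_OPERATION_ABORTED as the last error is
// really a cancellation (the behavior reported in this issue), not data.
ReadOutcome ClassifyReadConsole(bool apiResult,
                                std::uint32_t charsRead,
                                std::uint32_t lastError) {
  if (!apiResult) {
    return lastError == kErrorOperationAborted ? ReadOutcome::Cancelled
                                               : ReadOutcome::Failed;
  }
  if (charsRead == 0 && lastError == kErrorOperationAborted) {
    return ReadOutcome::Cancelled;
  }
  return ReadOutcome::Data;
}
```

This is only a caller-side workaround; the API itself should still return FALSE as documented.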
Notably the last error is correctly set to 995 - `ERROR_OPERATION_ABORTED` - "The I/O operation has been aborted because of either a thread exit or an application request.", but who checks the last error on successful calls? | Area-Server,Issue-Bug,Product-Meta,Priority-1,Tracking-External,zInbox-Bug | low | Critical |
2,483,588,327 | terminal | All nested commands get keybindings (`ctrl+t`) in command palette? | `main`, ~ e006f75f6
Something is WRONG with all nested commands:

You'll note, they seemingly all have `ctrl+t` as their keychord text.
This doesn't repro for me on `1.22.2334.0`
Looking now at my settings to see if there's something sus in there... | Issue-Bug,Area-Settings,Product-Terminal,Area-CmdPal | low | Minor |
2,483,601,313 | vscode | Cannot run dynamically-created shell tasks when a workspace folder is not opened | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: N/A
- VS Code Version: 1.92.2
- OS Version: Windows 11
Steps to Reproduce:
1. For ease of repro, we're going to use the [Docker extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-docker), go ahead and install it
1. In the terminal run `docker pull alpine:latest`
1. In the Docker extension's Images view, find that alpine:latest image, expand it and right click the tag, and do `Pull`, **this will execute a shell task using vscode.tasks.executeTask()**
1. The shell task fails with the following error (on Windows, so it shouldn't be looking for bash):
```
Executing task: docker image pull alpine:latest
* The terminal process failed to launch: Path to shell executable "\bin\bash" does not exist.
```
There is no output in the Tasks window. | bug,tasks | low | Critical |
2,483,637,503 | next.js | `npm run test` fails with-jest-app using --watch flag. | ### Verify canary release
- [X] I verified that the issue exists in the latest Next.js canary release
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #40~22.04.3-Ubuntu SMP PREEMPT_DYNAMIC Tue Jul 30 17:30:19 UTC 2
Available memory (MB): 64156
Available CPU cores: 32
Binaries:
Node: 20.9.0
npm: 10.2.4
Yarn: 1.22.22
pnpm: 9.8.0
Relevant Packages:
next: 14.2.6 // Latest available version is detected (14.2.6).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which example does this report relate to?
with-jest-app
### What browser are you using? (if relevant)
_No response_
### How are you deploying your application? (if relevant)
_No response_
### Describe the Bug
The default with-jest-app example fails to run the tests and produces an error with no message. Example:
```
Determining test suites to run...
● Test suite failed to run
thrown: [Error]
```
I changed package.json to use a different test command. The default is as follows:
```
"scripts": {
"test": "jest --watch",
},
```
I changed it to two alternative configurations, both of which ran the tests successfully:
1.
```
"scripts": {
"test": "jest",
},
```
2.
```
"scripts": {
"test": "jest --watchAll",
},
```
The example should work out of the box, so either the error should be figured out so --watch works, or --watchAll should be used, or the flag should be removed.
### Expected Behavior
```
npm run test
> test
> jest
PASS __tests__/index.test.tsx
PASS __tests__/snapshot.tsx
PASS app/counter.test.tsx
PASS app/page.test.tsx
PASS app/blog/[slug]/page.test.tsx
PASS app/utils/add.test.ts
Test Suites: 6 passed, 6 total
Tests: 6 passed, 6 total
Snapshots: 1 passed, 1 total
Time: 0.924 s, estimated 1 s
Ran all test suites.
```
### To Reproduce
Clone the with-jest-app example project and run `npm run test` | examples | low | Critical |
2,483,669,057 | godot | Double dot (`..`) operator causes misleading error in GDScript | ### Tested versions
v4.2.2.stable.official [15073afe3]
### System information
Godot v4.2.2.stable - Ubuntu 24.04 LTS 24.04 - Wayland - Vulkan (Forward+) - integrated Intel(R) UHD Graphics 620 (WHL GT2) () - Intel(R) Core(TM) i5-8365U CPU @ 1.60GHz (8 Threads)
### Issue description
When accidentally using a double dot (..) operator instead of a single dot (.) in an if condition in GDScript, the error message is misleading. It says "Expected ':' after 'if' condition" instead of pointing out the incorrect double dot usage.

### Steps to reproduce
1. Use the following code:
```
if not head_ledge_obj..is_in_group("ledge") and shoulder_ledge_obj.is_in_group("ledge"):
return true
```
2. Run the script.
Expected Behavior: The error message should indicate the incorrect use of the double dot operator.
Actual Behavior: The error message incorrectly suggests a missing colon instead of highlighting the syntax error.
### Minimal reproduction project (MRP)
1. Use the following code:
```
if not head_ledge_obj..is_in_group("ledge") and shoulder_ledge_obj.is_in_group("ledge"):
return true
```
2. Run the script.
Expected Behavior: The error message should indicate the incorrect use of the double dot operator.
Actual Behavior: The error message incorrectly suggests a missing colon instead of highlighting the syntax error. | bug,topic:gdscript,topic:editor | low | Critical |
2,483,669,919 | langchain | Ollama (Partner package) and cache integration not working correctly - missing filters / Community Package works | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from pathlib import Path

from langchain_community.cache import SQLiteCache
from langchain_ollama import OllamaLLM

model = "llama3"            # any locally pulled Ollama model
cache_dir = Path(".cache")  # any writable directory

llm = OllamaLLM(
    model=model,
    cache=SQLiteCache(str(cache_dir / f"ollama-{model}.db")),
    temperature=0.4,
    num_ctx=8192,
    num_predict=-1,
)
```
### Error Message and Stack Trace (if applicable)
There is no error stack as the problem is how the LLM message is being cached in SQLLite
### Description
Here is how the entries in SQLiteCache looks when langchain-ollama partner package is used

Whereas if the Ollama from langchain_community is used then the SQLLiteCache looks like

As you can see, the entries in the filter column do not include other properties like temperature, model name, etc., and hence when these parameters are changed, the old entries for a prompt (if present) are picked up instead of creating new ones.
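For context, the fix amounts to making the generation parameters part of the cache key. The general idea (a sketch of the principle, not the actual LangChain implementation) looks like this:

```python
import hashlib
import json


def make_cache_key(prompt: str, llm_params: dict) -> str:
    """Build a cache key that changes whenever any generation
    parameter (model, temperature, num_ctx, ...) changes."""
    # Sort keys so the same params always serialize identically.
    llm_string = json.dumps(llm_params, sort_keys=True)
    return hashlib.sha256((prompt + "|" + llm_string).encode()).hexdigest()
```

With the parameters folded into the key like this, changing `temperature` or `model` produces a different key, so stale cached completions are no longer returned.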
### System Info
langchain==0.2.12
langchain-chroma==0.1.2
langchain-community==0.2.11
langchain-core==0.2.28
langchain-ollama==0.1.1
langchain-openai==0.1.20
langchain-text-splitters==0.2.2 | 🤖:bug | low | Critical |
2,483,679,466 | kubernetes | [FG:InPlacePodVerticalScaling] Scheduling race condition | What is the desired behavior when a pod is resized in the middle of scheduling? The current behavior is that after a certain point the scheduler would not pick up the updated resources and proceed to schedule a pod to a node, even if it no longer fits. The kubelet will then reject the pod in admission (`OutOfCPU` or `OutOfMemory`).
I think we have 3 options:
1. Proceed with the current approach, and accept (document) this as a possible failure mode
2. Forbid resizing un-scheduled pods (use case is questionable)
3. Minimize (but don't eliminate) the risk by checking for changes before binding. There will always be a race condition here, since the bind request is not transactional.
I'm leaning towards the first option.
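For concreteness, option 3 could be sketched as a toy pre-bind check (illustrative only, not actual scheduler-framework code): capture the requests the scheduler filtered with, and compare them against the pod's current requests just before binding.

```go
package main

import "fmt"

// Requests is a toy stand-in for a pod's resource requests.
type Requests struct {
	MilliCPU int64
	MemBytes int64
}

// okToBind reports whether the requests the scheduler filtered with
// still match the pod's current requests. A mismatch means the pod was
// resized mid-scheduling and the placement decision may be stale.
// Note: this only narrows the race window; the bind request itself is
// still not transactional.
func okToBind(atFiltering, current Requests) bool {
	return atFiltering == current
}

func main() {
	snap := Requests{MilliCPU: 500, MemBytes: 1 << 28}
	resized := Requests{MilliCPU: 2000, MemBytes: 1 << 28}
	fmt.Println(okToBind(snap, snap), okToBind(snap, resized))
}
```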
/sig node
/priority important-longterm | sig/scheduling,sig/node,kind/feature,priority/important-longterm,triage/accepted | low | Critical |
2,483,765,794 | PowerToys | Showing what are overrides of Window / Common keys in OOBE/SCOOBE & Settings | ### Description of the new feature / enhancement
If a common key is overridden inside of PowerToys (for example, PowerToys Run uses Alt+Space), we should make that clear to the user.
This would need to be inside OOBE and SCOOBE, in case we add new functionality, as well as in the key shortcut control. A user may not be aware they are overriding a behavior.
As part of this work, it would basically be documenting in code https://github.com/microsoft/PowerToys/issues/179
### Scenario when this would be used?
Awareness of overrides that could have hidden impact
### Supporting information
Alt+Space is a keyboard shortcut people actively use. | Idea-Enhancement,Area-OOBE | low | Minor |
2,483,782,435 | pytorch | Improving inductor pattern matcher's replacement graph assumptions | The pattern matcher assumes by default that it is safe to run functional passes on the replacement graph. The ones it runs are DCE and remove_noop_ops. We should try to change it to be more automatic and determine if those passes are safe to run (or add invariants if this is too hard, see also https://github.com/pytorch/pytorch/issues/133250).
This is non-trivial:
- For DCE we can check if there are any mutable ops. If there are, then it's probably a bad idea to DCE
- For remove_noop_ops -- this is trickier. We cannot remove intermediate clones if they end up changing the alias relationship between the outputs and the inputs of the replacement graph.
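To make the DCE case concrete, a safety check over a toy graph representation might look like the sketch below. The node structure and the trailing-underscore convention for in-place ops are illustrative stand-ins, not the real `torch.fx` API:

```python
from dataclasses import dataclass


@dataclass
class Node:
    op: str  # e.g. "aten.add", "aten.copy_"


def is_mutable(node: Node) -> bool:
    # Illustrative convention: a trailing underscore marks an in-place op.
    return node.op.endswith("_")


def safe_to_dce(graph: list) -> bool:
    """Dead-code elimination is only safe when no node in the graph
    mutates its inputs; removing an "unused" mutable op would change
    program state."""
    return not any(is_mutable(n) for n in graph)
```

The remove_noop_ops case is harder because, as noted above, correctness depends on input/output alias relationships rather than a purely local property of each node.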
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,oncall: pt2,module: inductor | low | Minor |
2,483,814,068 | TypeScript | Google feedback on TS 5.6-beta | ### Acknowledgement
- [x] I acknowledge that issues using this template may be closed without further explanation at the maintainer's discretion.
### Comment
This GitHub issue contains feedback on the TS 5.6-beta release from the team that is responsible for keeping Google's internal software working with the latest version of TypeScript.
## Executive summary
* We do not expect to have significant difficulty in upgrading Google to TS 5.6.
* Some changes to our TypeScript code are required to make it compile with TS 5.6.
* We observed regressions in relation to generic type inference (https://github.com/microsoft/TypeScript/issues/59656)
* Detail sections below explain the changes to our code we expect to make to unblock the upgrade.
## Impact summary
Change description | Announced | Libraries affected
---------------------------------------------- | --------: | -----------------:
Disallowed Nullish and Truthy Checks | Yes | 0.070%
lib.d.ts Changes | Yes | 0.004%
Correct override Checks on Computed Properties | Yes | 0.004%
Co- vs. contra-variant inference improvements | No | 0.004%
The **Announced** column indicates whether we were able to connect the observed change with a section in the [TS5.6-beta announcement](https://devblogs.microsoft.com/typescript/announcing-typescript-5-6-beta/).
The following sections give more detailed explanations of the changes listed above.
## Announced Changes
This section reviews all announced changes from the [TS5.6-beta announcement](https://devblogs.microsoft.com/typescript/announcing-typescript-5-6-beta/), whether we support these changes at Google, how we will resolve pre-existing issues for these changes (if applicable), and other thoughts.
### Disallowed Nullish and Truthy Checks
We support this change. The check uncovers real unintentional runtime behavior.
As part of this migration, we will add `// @ts-ignore` suppressions to silence pre-existing errors with a note advising code authors to revisit the code and fix these genuine problems.
Includes the following errors: TS2869, TS2870, TS2871, TS2872, TS2873
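For illustration, the kind of genuine bug this check surfaces (a hypothetical example, not from our codebase) is testing a regex object directly, which is always truthy, where the author meant to run it:

```typescript
// Buggy (now an error under TS 5.6's truthy checks):
//   if (/^\d+$/) { ... }   // the regex literal itself is always truthy
// Intended code: actually test the input against the regex.
function isAllDigits(input: string): boolean {
  return /^\d+$/.test(input);
}
```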
### Iterator Helper Methods
We support this change.
As these are new, none of our codebase in TS utilizes these so nothing was impacted.
However, the large number of new unique iterator types added (e.g. `ArrayIterator`, `MapIterator`, `SetIterator`, `StringIterator`) is somewhat disruptive for our TS-to-JS interoperability infrastructure, but we have a fairly simple workaround for now.
### Strict Builtin Iterator Checks (and --strictBuiltinIteratorReturn)
We support this change. We typically interact with `Iterator` types via methods like `[...it]`, `Array.from(it)`, and `for (const val of it) {}` so this change is less interesting to us.
We did notice some new compiler errors in our copy of the vscode code base but I can see they have been resolved already so we'll update: https://github.com/microsoft/vscode/pull/222009
### Support for Arbitrary Module Identifiers
Presently, we do not plan on utilizing this feature in Google. We support TypeScript adding support for this though.
The changes to the TSC API broke a handful of code locations utilizing the TypeScript API that expected an identifier but now have to support string literals. That is, for the code that assumes `propertyName` in `ImportSpecifier` and `ExportSpecifier` is an `Identifier` will need to be updated to account for `ModuleExportName` which also accepts a `StringLiteral`. For now, we will resolve these instances by either throwing an exception for string literal values or emitting a diagnostic (where possible).
### The `--noUncheckedSideEffectImports` Option
We support this change.
Note: We already enforced this in our build infrastructure. The error simply shows up earlier now in the stack.
### The `--noCheck` Option
We support this change.
Note: This seems useful for incremental development scenarios where we want our edit-refresh cycle to be as short as possible.
### Allow `--build` with Intermediate Errors
We support this change.
Note: We do not utilize the `--build` flag.
### Region-Prioritized Diagnostics in Editors
We support this change.
Note: Editor-specific changes don't impact us.
### Search Ancestor Configuration Files for Project Ownership
We support this change.
Note: Editor-specific changes don't impact us though.
### lib.d.ts Changes
We support this change.
The changes here were pretty sweeping and sometimes at odds with other typings we already had in the codebase.
As part of this migration, we will add `// @ts-ignore` suppressions to silence pre-existing errors with a note advising code authors to update the code.
It should be noted that there is a lot of noise in the `lib.dom.d.ts` updates, including many dropped MDN comments as well as new ones added. The general order and structure of some types changed as well. This makes it somewhat difficult to identify the meaningful changes.
In addition, it would be helpful for the `lib.dom.d.ts` file to be directly checked into github so that updates are viewable in the repo itself to detect unwanted diffs. At the moment, this file appears to be generated as part of the TSC release process and so its changes are harder to detect.
### .tsbuildinfo is Always Written
We support this change.
Note: We do not utilize the `--build` flag.
### Respecting File Extensions and package.json from within node_modules
We support this change.
Note: We presently don't use `.mts` nor `.cts` file extensions though it's probable that we will support these as we start relying on Open Source projects that use them.
### Correct override Checks on Computed Properties
We support this change.
This change identified a number of code patterns in our codebase that were missing the `override` modifier.
We will resolve these by adding the missing overrides.
## Unannounced Changes
### Improved logic to choose co- vs. contra-variant inference
We support this change (provided that observed regressions are resolved).
https://github.com/microsoft/TypeScript/pull/57909 introduced improvements to the compiler that fix two issues.
We observed several new types of inference issues in our codebase that appear to have stemmed from this.
https://github.com/microsoft/TypeScript/issues/59656 captures one issue where I was able to make a reproduction.
Another class of issues surrounds some of our test infrastructure that uses generics to render templates. We've observed typing errors when passing in object literals containing parameters for the templates. These include optional parameters disappearing or becoming required in the parameter object bag type.
We're hoping that https://github.com/microsoft/TypeScript/pull/59709 resolves both of these classes of issues.
Once that is submitted, we will rebuild the failing targets and report back if the issue persists (hopefully with a simplified reproduction for the second issue). | Discussion | low | Critical |
2,483,824,836 | vscode | Windows 11: VScode crashed (reason: 'oom', code: '-536870904' | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.92.2
- OS Version: Windows 11 Pro, Version 23H2, OS build: 22631.4037
- System RAM: 64GB
- GPU: NVidia GeForce RTX 3080, Driver: 560.81
Steps to Reproduce:
1. Start writing code.
Attempted Fixes:
1. Verified that VScode folders are not being uploaded to OneDrive (or any other cloud backup providers)
2. Deleted all VScode and Project cache files
3. Deleted VScode backups
4. Deleted .vscode from App folder
5. Deleted Windows temp files
6. Removed all extensions
7. Excluded VScode from Windows Defender
8. Verified Large File Optimizations
9. Changed paging file size from 4096 (Windows auto manage) to 9216 (min) - 196609 (max)
11. Disabled hardware acceleration
Here is one of the many .dmp files:
[c0eba4c2-10ff-48ec-94c9-66b49dd2fdfd.dmp](https://github.com/user-attachments/files/16733252/c0eba4c2-10ff-48ec-94c9-66b49dd2fdfd.dmp)
Crash report preview:
> EXCEPTION_RECORD: (.exr -1)
> ExceptionAddress: 00007fffbb82fabc (KERNELBASE!RaiseException+0x000000000000006c)
> ExceptionCode: e0000008
> ExceptionFlags: 00000081
> NumberParameters: 1
> Parameter[0]: 0000000000b1b1c7
>
> PROCESS_NAME: Code.exe
>
> ERROR_CODE: (NTSTATUS) 0xe0000008 - <Unable to get error code text>
>
> EXCEPTION_CODE_STR: e0000008
>
> EXCEPTION_PARAMETER1: 0000000000b1b1c7
>
> STACK_COMMAND: ~0s; .ecxr ; kb
>
> SYMBOL_NAME: code+b55379
>
> MODULE_NAME: Code
>
> IMAGE_NAME: Code.exe
>
> FAILURE_BUCKET_ID: APPLICATION_FAULT_e0000008_Code.exe!Unknown
>
> OSPLATFORM_TYPE: x64
>
> OSNAME: Windows 10
>
> IMAGE_VERSION: 1.92.2.0
>
> FAILURE_ID_HASH: {203dc02c-fd4a-1c4e-3e48-6581c93eb6d2}
| bug,freeze-slow-crash-leak | low | Critical |
2,483,843,404 | flutter | Update run_verify_binaries_codesigned_tests to collect and print all the missing/unexpected binaries on failure, instead of failing-fast on the first | This was an issue in https://github.com/flutter/flutter/pull/153787.
@itsjustkevin asks
> Is it possible that the test failed only on the first binary?
I think so, based on more binaries missing from the list: https://github.com/flutter/flutter/pull/154027
The test should be updated to collect all the unexpected binaries, and print them all:
https://github.com/flutter/flutter/blob/5e194383af022816e1f551fcd16abfd8d1e54694/dev/bots/suite_runners/run_verify_binaries_codesigned_tests.dart#L150-L152 | team-release | low | Critical |
2,483,857,718 | rust | Unstable Feature Usage Metrics | ## Unstable Feature Usage Metrics
Track unstable feature usage trends by Rust users.
### Motivation
* Support feature stabilization prioritization.
* Helping teams know which unstable features to invest their energy into.
* Evaluating how representative crates.io is as a sample of the rust ecosystem (will we see different patterns in crates.io feature usage vs private feature usage?)
### Context
* [research paper on unstable feature usage in the rust ecosystem](https://arxiv.org/pdf/2310.17186)
### Steps / History (PROVISIONAL)
- [ ] define the format the metrics will be stored in
- [ ] add flag to dump unstable feature status metrics (what features exist, if they're stable or not, both for library and lang/compiler, tidy may already support this for lib features)
- [x] add flag to enable unstable feature usage metrics in the compiler
- [ ] front end to display reports based on gathered usage and status metrics
- [ ] back end to save metrics data from published crates.io crates (private data should be anonymous and stored separately once we have established the ability to gather that in a statistically relevant volume)
- [ ] integrate with docs.rs to have it gather metrics and upload them to the backend while building newly uploaded crate versions | T-compiler,C-tracking-issue,A-metrics | low | Minor |
2,483,858,362 | kubernetes | Int overflow in hpa causing incorrect replica count | ### What happened?
The setup:
I am using keda with the prometheus scaler. The query I am using, returns the lag in the message queue i am using, and the threshold is set to `0.1`.
What happened:
The lag was increasing for a long time, and the replica count reached the max setting as expected. Everything was running fine for some time. When the lag value reached `214,748,364`, HPA decided to reduce the replicas from the max limit to `1`.
What I think is the problem:
When the lag passes `214,748,364`, the calculation [here](https://github.com/kubernetes/kubernetes/blob/7b80cdb66a390f225d23cd612950144e3a39d1ae/pkg/controller/podautoscaler/replica_calculator.go#L278) divides it by the threshold `0.1`, and the result exceeds the max int32 value, causing HPA to scale to the minimum value, 1.
It also seems like a lot of other places in this file cast a 64-bit float to a 32-bit int. Should there maybe be a bounds check everywhere this is done?
### What did you expect to happen?
I expected the replica count to stay at the max value, or alternatively to get an error that the max supported value for an external metric has been reached.
### How can we reproduce it (as minimally and precisely as possible)?
Use an external metric, and set it above `214,748,364` with a threshold of `0.1`.
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
1.29
</details>
### Cloud provider
<details>
aws eks
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/autoscaling,needs-triage | low | Critical |
2,483,872,627 | pytorch | [torch.compile] Graphs differ between 2.4 and 2.5 | ### 🐛 Describe the bug
Within Torch-TensorRT, we use [aot_export_joint_simple](https://github.com/pytorch/TensorRT/blob/main/py/torch_tensorrt/dynamo/backend/backends.py#L90-L97) to get an aten-level graph for TensorRT compilation. In PyTorch 2.4 this returned a graph with weights registered as `get_attr` nodes. However, this behavior seems to have changed in 2.5: all the constants are now registered as placeholders. We observed that this can potentially impact model performance.
To reproduce:
```py
import torch
import torch._dynamo as td
from transformers import AutoModelForCausalLM, AutoTokenizer
from torch._functorch.aot_autograd import aot_export_joint_simple
from torch._dynamo.utils import detect_fake_mode
import unittest
from torch_tensorrt.dynamo.lowering import (
get_decompositions,
remove_sym_nodes,
repair_input_aliasing,
)
@td.register_backend(name="random") # type: ignore[misc]
def aot_torch_tensorrt_aten_backend(
gm: torch.fx.GraphModule, sample_inputs, **kwargs
) -> torch.nn.Module:
fake_mode = detect_fake_mode(sample_inputs)
# Place backend tracing within FakeTensor context allowing nonfake Tensors
with unittest.mock.patch.object(
fake_mode, "allow_non_fake_inputs", True
), fake_mode:
# Invoke AOTAutograd to translate operators to aten
gm = aot_export_joint_simple(
gm,
sample_inputs,
trace_joint=False,
decompositions=get_decompositions(
False
),
)
print(gm.graph)
return gm
# define the model
with torch.no_grad():
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained(
"gpt2",
pad_token_id=tokenizer.eos_token_id,
use_cache=False,
attn_implementation="eager",
).eval().cuda()
prompt = "I enjoy walking with my cute dog"
model_inputs = tokenizer(prompt, return_tensors="pt")
input_ids = model_inputs["input_ids"].cuda()
torch._dynamo.mark_dynamic(input_ids, 1, min=7, max=1023)
model.forward = torch.compile(model.forward, backend="random", dynamic=None)
model(input_ids)
```
Please find the attached [gm_2.4.txt](https://github.com/user-attachments/files/16733544/gm_2.4.txt) and [gm_2.5.txt](https://github.com/user-attachments/files/16733541/gm_2.5.txt) to notice the differences.
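As a rough way to quantify the difference between the attached dumps, counting opcodes in the printed graph text works (a string heuristic on the dump, not a torch API; the two-line dumps below are made up to mirror the shape of the attachments):

```python
import re

def count_node_ops(graph_dump: str) -> dict:
    """Count opcodes in a printed fx graph dump (string heuristic on the
    textual dump, not a torch API)."""
    counts: dict = {}
    pattern = r"=\s*(placeholder|get_attr|call_function|call_module|output)\b"
    for m in re.finditer(pattern, graph_dump):
        counts[m.group(1)] = counts.get(m.group(1), 0) + 1
    return counts

# Made-up dumps mirroring the attachments: in 2.4 the weight is a get_attr
# constant, while in 2.5 it is lifted to an extra placeholder input.
dump_2_4 = "%input_ids : = placeholder[target=input_ids]\n%w : = get_attr[target=wte]"
dump_2_5 = "%input_ids : = placeholder[target=input_ids]\n%w : = placeholder[target=wte]"
print(count_node_ops(dump_2_4))  # {'placeholder': 1, 'get_attr': 1}
print(count_node_ops(dump_2_5))  # {'placeholder': 2}
```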
Is there a setting to disable this behavior?
cc: @angelayi @avikchaudhuri
### Error logs
_No response_
### Minified repro
_No response_
### Versions
Pytorch 2.4 and 2.5.0.dev20240822+cu124
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | triaged,oncall: pt2,oncall: export | low | Critical |
2,483,874,979 | flutter | Web SDK uses non-`DEPS`'d dependencies, and can't use the workspace as a result | In https://github.com/flutter/engine/pull/54232, most of the engine repository moved to the pub workspace.
As of the time of this writing, the web SDK and web SDK tools are _not_ included, as they do not use `DEPS`-vendored dependencies:
- https://github.com/flutter/engine/blob/main/web_sdk/web_engine_tester/pubspec.yaml
- https://github.com/flutter/engine/blob/main/web_sdk/web_test_utils/pubspec.yaml
- https://github.com/flutter/engine/blob/main/lib/web_ui/pubspec.yaml
I would be happy to review PRs or answer questions, but someone on the web team will need to own making the change:
- [ ] Making sure all packages are available on ~CIPD~ mirror, and adding them if needed
- [ ] Changing the dependencies to use the mirrored versions, and making sure tests still pass/are not flaky
- [ ] Moving the pubspecs, now using DEPS-vendored dependencies, to the workspace | engine,platform-web,P2,c: tech-debt,team-web,triaged-web | low | Major |
2,483,879,415 | go | cmd/compile: PGO fails to do multiple levels of inlining into a single function | Reproducer:
```
-- dep/dep.go --
package dep
var sink int
//go:noinline
func spin() {
for i := 0; i < 1000000; i++ {
sink = i
}
}
func b() {
// Two calls, too hot to inline, darn...
spin()
spin()
for i := 0; i < 1000000; i++ {
sink = i
}
}
func a() {
// Two calls, too hot to inline, darn...
b()
b()
for i := 0; i < 1000000; i++ {
sink = i
}
}
//go:noinline
func Foo() {
a()
}
-- dep/dep_test.go --
package dep
import "testing"
func BenchmarkMain(b *testing.B) {
for i := 0; i < b.N; i++ {
Foo()
}
}
-- main.go --
package main
import "example.com/pgo/dep"
func main() {
dep.Foo()
}
-- go.mod --
module example.com/pgo
go 1.23
```
https://go.dev/play/p/QxBt13wV3Uh
Run the benchmark to collect a profile, and then use that as a PGO profile:
```
$ go test -bench . -cpuprofile=cpu.pprof ./dep
$ go build "-gcflags=example.com/pgo/dep=-m=2 -d=pgodebug=1" -pgo=cpu.pprof
# example.com/pgo/dep
hot-callsite-thres-from-CDF=0.6493506493506493
dep/dep.go:6:6: cannot inline spin: marked go:noinline
hot-node enabled increased budget=2000 for func=example.com/pgo/dep.b
dep/dep.go:12:6: can inline b with cost 133 as: func() { spin(); spin(); for loop }
hot-node enabled increased budget=2000 for func=example.com/pgo/dep.a
hot-budget check allows inlining for call example.com/pgo/dep.b (cost 133) at dep/dep.go:23:3 in function example.com/pgo/dep.a
hot-budget check allows inlining for call example.com/pgo/dep.b (cost 133) at dep/dep.go:24:3 in function example.com/pgo/dep.a
dep/dep.go:21:6: can inline a with cost 285 as: func() { b(); b(); for loop }
dep/dep.go:31:6: cannot inline Foo: marked go:noinline
hot-budget check allows inlining for call example.com/pgo/dep.b (cost 133) at dep/dep.go:23:3 in function example.com/pgo/dep.a
dep/dep.go:23:3: inlining call to b
hot-budget check allows inlining for call example.com/pgo/dep.b (cost 133) at dep/dep.go:24:3 in function example.com/pgo/dep.a
dep/dep.go:24:3: inlining call to b
hot-budget check allows inlining for call example.com/pgo/dep.a (cost 285) at dep/dep.go:32:3 in function example.com/pgo/dep.Foo
dep/dep.go:32:3: inlining call to a
```
The `hot-budget` lines note that PGO allows inlining `b` into `a` and `a` into `Foo`. `dep/dep.go:32:3: inlining call to a` indicates that `a` was inlined into `Foo`, but there should then be a subsequent inline `b` into `Foo`. The output makes this a bit confusing, but objdump makes it clear that `b` was not inlined:
```
0000000000467a80 <example.com/pgo/dep.Foo>:
...
467a8b: | | e8 b0 ff ff ff call 467a40 <example.com/pgo/dep.b>
467a90: | | e8 ab ff ff ff call 467a40 <example.com/pgo/dep.b>
...
```
A bit of custom logging in the compiler makes it more clear what is happening:
```
dep/dep.go:31:6: function Foo: DevirtualizeAndInlineFunc
dep/dep.go:31:6: function Foo: TryInlineCall to a
hot-budget check allows inlining for call example.com/pgo/dep.a (cost 285) at dep/dep.go:32:3 in function example.com/pgo/dep.Foo
dep/dep.go:32:3: inlining call to a
dep/dep.go:31:6: function Foo: TryInlineCall to b
dep/dep.go:31:6: canInlineCallExpr from Foo to b: cost too high: maxCost 80, callSiteScore 133, hot false
dep/dep.go:31:6: mkinlcall b: !canInlineCallExpr
```
It looks like what is happening here is:
1. `a` is inlined into `Foo`.
2. We then attempt to inline `b` into `Foo` (from the inlined body of `a`).
3. `canInlineCallExpr` checks `inlineCostOK` of `Foo -> b`. This consults the PGO edge map, which does not contain a hot edge from `Foo -> b` because PGO includes inline calls in edges. i.e., it has hot edges `Foo -> a` and `a -> b`.
4. Inline does not occur because we didn't increase the budget and base cost is too high.
I believe what should happen is if inlining a call inside of an `ir.InlinedCallExpr`, we should use the original name of the `InlinedCallExpr` to consult the PGO edge map.
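A toy model of the lookup (a sketch of the logic only, not the actual compiler code) shows both the miss and the proposed fix:

```python
# Toy model of the PGO hot-edge lookup (not the actual compiler code).
# Profile edges are keyed by the pre-inline caller, so after `a` is inlined
# into `Foo`, the call to `b` is looked up as (Foo, b) and misses.
HOT_EDGES = {("Foo", "a"), ("a", "b")}
BASE_BUDGET, HOT_BUDGET = 80, 2000

def inline_budget(caller, callee, inlined_from=None):
    if (caller, callee) in HOT_EDGES:
        return HOT_BUDGET
    # Proposed fix: for a call site inside an InlinedCallExpr, retry the
    # lookup with the original enclosing function's name.
    if inlined_from is not None and (inlined_from, callee) in HOT_EDGES:
        return HOT_BUDGET
    return BASE_BUDGET

print(inline_budget("Foo", "b"))                    # 80: cost 133 rejected (the bug)
print(inline_budget("Foo", "b", inlined_from="a"))  # 2000: inlining proceeds
```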
cc @cherrymui | NeedsFix,compiler/runtime | low | Critical |
2,483,879,757 | kubernetes | Migrate hack/verify-featuregates.sh and hack/update-featuregates.sh to golangci-lint plugin architecture | ### What would you like to be added?
Recently in #125830 (as part of https://github.com/kubernetes/enhancements/issues/4330) a `static analysis script to verify new feature gates are added as versioned feature specs` was added to the code base that uses a bespoke linter setup. This approach is temporary/alpha with the desired approach for beta of the https://github.com/kubernetes/enhancements/issues/4330 using a golangci-lint plugin (see https://github.com/kubernetes/kubernetes/pull/125830#issuecomment-2260449394)
### Why is this needed?
From comments on the initial PR https://github.com/kubernetes/kubernetes/pull/125830#issuecomment-2260449394 and some associated user issues/friction posted after this change (https://github.com/kubernetes/kubernetes/issues/126741#issuecomment-2293634664), it makes sense to migrate the current bespoke linter script and Go code to a golangci-lint plugin. Doing so is identified as a must-do before https://github.com/kubernetes/enhancements/issues/4330 can move to `beta` (mentioned in the above PR comment as future work), and this bug tracks that work. Reasons the golangci-lint plugin approach is preferred:
- better performance (golangci-lint caching, etc.)
- in line with kubernetes/kubernetes linting best practices
- removes need for `update-features` concept and helps with source of truth issues seen currently | sig/api-machinery,kind/feature,triage/accepted | low | Critical |
2,483,929,367 | flutter | Suggest adding a new widget or handling Wrap with "isExpandable" flag | ### Use case
I'm building a cross-platform app for courses, including a web version. The design of the app requires the course cards to be arranged in a flexible wrap layout, where the cards can expand to fill the available space and wrap to the next line as needed.
To address this problem, I have developed the "flexible_wrap" package (https://pub.dev/packages/flexible_wrap) as a solution.
### Proposal

I've been working on a package called "flexible_wrap" (https://pub.dev/packages/flexible_wrap) that aims to provide a flexible and customizable way to arrange widgets in a wrap layout. The key feature is allowing the widgets to expand according to the available space, similar to how items behave in a shopping cart or a list of cards.
The current implementation of the "flexible_wrap" package uses the existing Flutter 'RenderWrap' Render Object, but I believe there is room for improvement in the core Flutter framework to better handle this use case.
I would like to propose two potential solutions:
1. **Create a new widget:** Add a new widget to the Flutter framework that provides the "expandable wrap" behavior out of the box. This would allow developers to easily create dynamic layouts where widgets can wrap onto the next line and expand to fill the available space.
2. **Add an "isExpandable" flag to the Wrap widget:** Enhance the existing `Wrap` widget by adding an "isExpandable" flag that, when set, would allow the items to expand according to the available space. This would provide more flexibility and control for developers who need this behavior.
The key benefits of these solutions would be:
- Provide a built-in, well-tested, and performant way to create flexible wrap layouts in Flutter applications.
- Reduce the need for third-party packages and custom implementations, making it easier for developers to create these types of layouts.
- Ensure consistency and best practices across the Flutter ecosystem.
I've already explored implementing this functionality in the "flexible_wrap" package, but I believe having it as a core part of the Flutter framework would be more beneficial for the community. I'm happy to discuss this further and provide more details on the implementation and use cases.
Custom Layout Implementation
As part of the "flexible_wrap" package, I've developed a custom RenderBox implementation that extends the RenderWrap class. This implementation provides the core functionality for the flexible wrap layout. Here's a snippet of the code:
```dart
@override
void performLayout() {
final BoxConstraints constraints = this.constraints;
assert(_debugHasNecessaryDirections);
_hasVisualOverflow = false;
RenderBox? child = firstChild;
if (child == null) {
size = constraints.smallest;
return;
}
double dx = 0.0;
double dy = 0.0;
double extraWidth = 0.0;
int lines = 1;
final parentWidth = constraints.maxWidth;
while (child != null) {
child.layout(constraints, parentUsesSize: true);
final double itemWidth = child.size.width;
if (parentWidth.isFinite) {
final items = (parentWidth / itemWidth).floor();
double remainder = parentWidth - (itemWidth * items);
extraWidth = remainder / items;
}
final newWidth = extraWidth + itemWidth;
child.layout(BoxConstraints.tight(Size(newWidth, child.size.height)),
parentUsesSize: true);
final WrapParentData childParentData =
child.parentData! as WrapParentData;
childParentData.offset = Offset(dx, dy);
if (dx + newWidth < constraints.maxWidth) {
dx += newWidth;
} else {
dx = 0;
dy = child.size.height * lines;
lines++;
}
size = constraints.constrain(Size(constraints.maxWidth, dy));
child = childAfter(child);
}
}
```
This custom RenderBox implementation is used to provide the "flexible wrap" behavior in the FlexibleWrap widget, which is the main component of the flexible_wrap package.
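The width distribution above boils down to a small amount of arithmetic; a sketch of just that math (illustrative, in Python):

```python
import math

def expanded_item_width(parent_width: float, item_width: float) -> float:
    """Distribute a row's leftover space evenly across the items that fit,
    mirroring the extraWidth computation in performLayout above."""
    items = math.floor(parent_width / item_width)
    remainder = parent_width - item_width * items
    return item_width + remainder / items

# A 500px-wide row fits three 160px items; each expands to ~166.67px so the
# row is filled exactly.
print(round(expanded_item_width(500, 160), 2))  # 166.67
```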
Please let me know if you have any questions or thoughts on this proposal. I'm excited to work with the Flutter team to improve the framework and make it easier for developers to create dynamic and responsive layouts.
| c: new feature,framework,c: proposal,P3,team-framework,triaged-framework | low | Critical |
2,483,940,949 | godot | [linux] use_llvm cross compile failure | ### Tested versions
4.3-stable
### System information
Yocto `scarthgap` running on Fedora 40
### Issue description
Externally set `CC` and `CXX` are incorrectly overwritten when `use_llvm` is used.
Example external values for a cross compile:
```
export CC="riscv64-poky-linux-clang -target riscv64-poky-linux -mlittle-endian --dyld-prefix=/usr -Qunused-arguments -fstack-protector-strong -O2 -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/mnt/raid10/yocto/master/visionfive2/tmp/work/riscv64-poky-linux/godot/4.3/recipe-sysroot"
export CXX="riscv64-poky-linux-clang++ -target riscv64-poky-linux -mlittle-endian --dyld-prefix=/usr -Qunused-arguments -fstack-protector-strong -O2 -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/mnt/raid10/yocto/master/visionfive2/tmp/work/riscv64-poky-linux/godot/4.3/recipe-sysroot"
```
Build setup/compile
[run.do_compile.txt](https://github.com/user-attachments/files/16733803/run.do_compile.txt)
This patch resolves the issue:
```
diff --git a/platform/linuxbsd/detect.py b/platform/linuxbsd/detect.py
index d1de760f34..04f4662ed2 100644
--- a/platform/linuxbsd/detect.py
+++ b/platform/linuxbsd/detect.py
@@ -110,9 +110,6 @@ def configure(env: "SConsEnvironment"):
env["use_llvm"] = True
if env["use_llvm"]:
- if "clang++" not in os.path.basename(env["CXX"]):
- env["CC"] = "clang"
- env["CXX"] = "clang++"
env.extra_suffix = ".llvm" + env.extra_suffix
if env["linker"] != "default":
--
2.46.0
```
It seems that `os.path.basename(env["CXX"])` misbehaves when `CXX` is a full command line rather than a bare path: `basename` splits on the last `/`, which here lies inside the `--sysroot` path, so the result never contains `clang++` and the check overwrites the externally set compilers.
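This can be demonstrated directly (the sysroot path below is shortened for illustration):

```python
import os

# An externally set CXX is a full command line, not a bare compiler path:
cxx = ("riscv64-poky-linux-clang++ -target riscv64-poky-linux "
       "--sysroot=/mnt/raid10/yocto/recipe-sysroot")

# basename() splits on the last '/', which lies inside the --sysroot path,
# so the leading compiler name is lost and the clang++ check fails:
print(os.path.basename(cxx))               # recipe-sysroot
print("clang++" in os.path.basename(cxx))  # False
```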
### Steps to reproduce
Set `CC` and `CXX` to something other than `clang` and `clang++`. e.g.
```
export CC="riscv64-poky-linux-clang -target riscv64-poky-linux -mlittle-endian --dyld-prefix=/usr -Qunused-arguments -fstack-protector-strong -O2 -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/mnt/raid10/yocto/master/visionfive2/tmp/work/riscv64-poky-linux/godot/4.3/recipe-sysroot"
export CXX="riscv64-poky-linux-clang++ -target riscv64-poky-linux -mlittle-endian --dyld-prefix=/usr -Qunused-arguments -fstack-protector-strong -O2 -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/mnt/raid10/yocto/master/visionfive2/tmp/work/riscv64-poky-linux/godot/4.3/recipe-sysroot"
```
### Minimal reproduction project (MRP)
This is solely a build issue | platform:linuxbsd,topic:buildsystem,needs testing | low | Critical |
2,483,942,754 | rust | Wasm ABI special cases scalar pairs (against tool conventions) and is not documented | This is a bit of a hybrid bug report and documentation request. I suspect the "bug" will require a breaking change to fix so may be a non-starter.
In [Diplomat](https://github.com/rust-diplomat/diplomat) we've been writing bindings to Rust from JS/Wasm. One thing we support is passing structs over FFI, by-value.
According to the [tool conventions](https://github.com/WebAssembly/tool-conventions/blob/main/BasicCABI.md), multi-scalar-field structs are passed "indirectly", and single-scalar structs and scalars are passed directly by-value.
This is not what Rust does. This has previously come up in https://github.com/rust-lang/rust/issues/81386. What Rust does is that it always passes structs by-value, which on the JS side means that the struct is "splatted" across the arguments _including padding_.
For example, the following type and function
```rust
#[repr(C)]
pub struct Big {
a: u8,
// 1 byte padding
b: u16,
// 4 bytes padding
c: u64,
}
#[no_mangle]
pub extern "C" fn big(x: Big) {}
```
gets passed over LLVM IR as
```llvm
%Big = type { i8, [1 x i8], i16, [2 x i16], i64 }
define dso_local void @big(%Big %0) unnamed_addr #0 {...}
```
In JS, `big` needs to be called as `wasm.big(a, 0, b, 0, 0, c)`, with `0`s for the padding fields. Note that the padding fields can be different sizes, which is usually irrelevant but important here since "two i16s" and "one i32" end up meaning a different number of parameters. As far as I can tell the padding field has the same size as the alignment of the previous "real" field, but I can't find any documentation on this or even know whether this is a choice on LLVM or Rust's side.
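Under that observation, the splatting can be modeled as follows (a sketch under the stated assumption that each padding hole becomes pad_bytes / previous-field-alignment zero parameters; this does not model the scalar-pair exception):

```python
def wasm_args(fields):
    """Model the by-value splat: fields is a list of (name, size_bytes),
    with each scalar's alignment assumed equal to its size."""
    args, offset = [], 0
    for i, (name, size) in enumerate(fields):
        pad = (-offset) % size  # bytes needed to align this field
        if pad:
            # Observed behavior: the padding hole becomes pad / prev_align
            # zero-valued parameters, sized by the previous field's alignment.
            prev_align = fields[i - 1][1]
            args += ["0"] * (pad // prev_align)
        args.append(name)
        offset += pad + size
    return args

# Big { a: u8, b: u16, c: u64 } splats to six JS arguments:
print(wasm_args([("a", 1), ("b", 2), ("c", 8)]))  # ['a', '0', 'b', '0', '0', 'c']
```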
It gets worse, however. Rust seems to special-case scalar pairs:
```rust
#[repr(C)]
pub struct Small {
a: u8,
// 3 bytes padding
b: u32
}
#[no_mangle]
pub extern "C" fn small(x: Small) {}
```
```llvm
define dso_local void @small(i8 %x.0, i32 %x.1) unnamed_addr #0 {..}
```
Here, despite `Small` having padding, `small()` gets called as `wasm.small(a, b)`, because the fields got splatted out in the LLVM IR itself, without padding.
This is even stranger when comparing with the tool conventions because they have no mention of scalar pairs.
It would be really nice if Rust followed the documented tool conventions. I suspect that's not going to happen, and, besides, direct parameter passing is likely more efficient here[^1].
Failing that, it seems like it would be nice if "pairs" did not have special behavior compared to structs with more than 2 scalars in them. Ideally I'd like the scalar pair behavior to apply to all structs: always splat out structs into fields, never require padding be passed over FFI. I'm not sure if such a breaking change is possible, though.
Failing that, I think this behavior should be carefully documented. I've been discovering this by trial-and-error, and Rust's behavior contradicts the [extant documentation](https://github.com/WebAssembly/tool-conventions/blob/main/BasicCABI.md), which makes it even more crucial to have documentation on how Rust diverges.
As far as I can tell, this is not a problem for wasm-bindgen since wasm-bindgen doesn't pass structs over FFI by-value, though the failing test mentioned in https://github.com/rust-lang/rust/issues/81386 seems to indicate they care _some_ amount.
[^1]: Indirect passing means that the JS side basically has to allocate a heap object each time it wishes to call such a function. | T-compiler,A-docs,O-wasm,C-bug,A-ABI | low | Critical |
2,483,970,416 | go | hash/maphash:purego: TestSmhasherAvalanche failures | ```
#!watchflakes
default <- pkg == "hash/maphash:purego" && test == "TestSmhasherAvalanche"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8738801229214959937)):
=== RUN TestSmhasherAvalanche
=== PAUSE TestSmhasherAvalanche
=== CONT TestSmhasherAvalanche
smhasher_test.go:360: bad bias for bytes2 bit 1 -> bit 52: 59618/100000
--- FAIL: TestSmhasherAvalanche (18.52s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,483,973,359 | deno | 'deno doc --lint' nondeterministically reports spurious 'private type' errors | Version: Deno 1.46.2
To reproduce:
```
git clone https://github.com/skybrian/repeat-test
cd repeat-test
git reset --hard c4cb924c9c97230fabea945793ac03fa149545d2
deno doc --lint domain.ts
```
Sometimes it reports no errors, but other times it does. Here is example output when it reports errors:
```
error[private-type-ref]: public type 'Domain' references private type 'Arbitrary'
--> /Users/skybrian/Projects/tmp/repeat-test/src/domain_class.ts:48:1
|
48 | export class Domain<T> extends Arbitrary<T> {
| ^
= hint: make the referenced type public or remove the reference
|
33 | export class Arbitrary<T> implements PickSet<T> {
| - this is the referenced type
|
info: to ensure documentation is complete all types that are exposed in the public API must be public
error[private-type-ref]: public type 'Domain.prototype.regenerate' references private type 'Generated'
--> /Users/skybrian/Projects/tmp/repeat-test/src/domain_class.ts:122:3
|
122 | regenerate(val: unknown): Generated<T> | Failure {
| ^
= hint: make the referenced type public or remove the reference
|
122 | export type Generated<T> = {
| - this is the referenced type
|
info: to ensure documentation is complete all types that are exposed in the public API must be public
error[private-type-ref]: public type 'Jar.prototype.take' references private type 'PickFunction'
--> /Users/skybrian/Projects/tmp/repeat-test/src/jar_class.ts:54:3
|
54 | take(pick: PickFunction): T {
| ^
= hint: make the referenced type public or remove the reference
|
45 | export interface PickFunction {
| - this is the referenced type
|
info: to ensure documentation is complete all types that are exposed in the public API must be public
error: Found 3 documentation lint errors.
``` | bug,docs | low | Critical |
2,483,978,752 | godot | Default XR action map broken in webXR on Quest 3 | ### Tested versions
Godot v4.3.stable
### System information
Manjaro Linux X11 - GLES3 (Compatibility) - NVIDIA GeForce RTX 3060 (nvidia; 550.107.02) - Intel(R) Core(TM) i5-8600K CPU @ 3.60GHz (6 Threads)
### Issue description
The "primary" action (for the main thumbstick) is not working; it reports (0, 0).
Godot XR Tools movement providers are also affected, as they default to this action.
Other inputs work, such as the trigger.
### Steps to reproduce
Create a blank project and add Godot XR Tools, add the default StartXR script to get a button for entering XR mode, add a basic player and controller setup for XR, and try adding one of the movement providers (they don't work).
### Minimal reproduction project (MRP)
N/A | bug,topic:xr | low | Critical |
2,483,988,058 | deno | `require` doesn't resolve in the global cache dir if referrer is not in DENODIR | The scenario is roughly as follows:
You're using the global resolver (no node_modules dir) and you have a config file
```
// playwright.config.js (located in your project dir)
import playwrightTest from "playwright/test"; // in import map (there is no package.json)
```
Playwright (which is in the global cache dir) adds a hook that gets called right before the builtin `require`, then `requires` your config file (`require(userConfigFilePath)`).
Playwright's require hook transpiles your config file to CJS, which results in
```
// transpiled playwright.config.js ("located" in user's project dir)
const playwrightTest = require("playwright/test");
```
then calls deno's actual require function.
When deno hits `require("playwright/test");` it searches relative to the config file (but no matches), then searches in "<projectDir>/node_modules", "<projectDir>/../node_modules" (etc.) but there are no matches, because the dependency is actually in the global cache dir.
Ideally we could handle this by resolving in the global cache dir instead of in the `node_modules` dirs (which don't exist or apply).
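The ancestor walk can be modeled like this (an illustrative sketch, not Deno's actual resolver code):

```python
from pathlib import PurePosixPath

def node_modules_candidates(referrer_dir: str) -> list[str]:
    """Ancestor node_modules walk for a CJS require(), relative to the
    referrer; note the global cache dir never appears in this list."""
    p = PurePosixPath(referrer_dir)
    return [str(d / "node_modules") for d in (p, *p.parents)]

print(node_modules_candidates("/home/user/project"))
# ['/home/user/project/node_modules', '/home/user/node_modules',
#  '/home/node_modules', '/node_modules']
```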
---
Seen in https://github.com/denoland/deno/issues/16899#issuecomment-2306486683 (repro at https://github.com/jollytoad/ahx/tree/df1b391bb5adb51e879da17612570f7e7dd025f7) | bug,node compat | low | Minor |
2,484,002,813 | rust | HWasan with external clang runtime (undefined symbol: __hwasan_tls) | Hi. I want to build with HWAsan and join c / rust / python code. I prepared example based on a public project https://github.com/pyca/cryptography
## Steps to repro
- Clone cryptography
- Create `config.toml`
- Create `Dockerfile`
- Create `hello.py`
- Build it
- Run `hello.py`
### Setup cryptography
```shell
git clone --recursive --branch=43.0.0 --depth=1 --single-branch git@github.com:pyca/cryptography.git
cd cryptography
```
### config.toml
Create `config.toml` with content:
```toml
[build]
target="aarch64-unknown-linux-gnu"
rustflags = [
"-g",
"-Z", "sanitizer=hwaddress",
"-Z", "external-clangrt",
"-L", "/usr/lib/llvm-20/lib/clang/20/lib/linux/",
"-l", "clang_rt.hwasan-aarch64",
"-C", "link-arg=-fuse-ld=/usr/bin/ld.lld-20",
"-C", "linker=/usr/bin/clang-20",
"-C", "lto=no"
]
```
### Dockerfile
Create `Dockerfile` with content:
```Dockerfile
FROM ubuntu:24.04 AS env
ENV DEBIAN_FRONTEND="noninteractive"
RUN apt-get update -y && \
apt-get install -y \
autoconf \
cmake \
curl \
gnupg \
libffi-dev \
libssl-dev \
lsb-release \
ninja-build \
patchelf \
pkg-config \
python3-dbg \
python3-dev \
python3-pip \
python3-venv \
software-properties-common \
wget
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --default-toolchain nightly -y
ENV PATH=/root/.cargo/bin:$PATH
RUN wget https://apt.llvm.org/llvm.sh && \
chmod +x llvm.sh && \
./llvm.sh 20 && \
ln -s /usr/bin/lld-20 /usr/local/bin/lld && \
ln -s /usr/bin/clang-20 /usr/local/bin/clang && \
ln -s /usr/bin/clang++-20 /usr/local/bin/clang++ && \
ln -s /usr/bin/clang-20 /usr/local/bin/cc && \
ln -s /usr/bin/clang++-20 /usr/local/bin/c++ && \
rm /usr/bin/ld && \
ln -s /usr/lib/llvm-20/bin/ld.lld /usr/bin/ld
RUN python3 -m venv /venv
RUN echo "/usr/lib/llvm-20/lib/clang/20/lib/linux" > /etc/ld.so.conf.d/clang.conf && ldconfig
ENV CC=/usr/bin/clang-20
ENV CXX=/usr/bin/clang++-20
ENV CFLAGS="-g -fsanitize=hwaddress -shared-libsan -mllvm -hwasan-globals=0 -std=c23"
ENV CCFLAGS="-g -fsanitize=hwaddress -shared-libsan -mllvm -hwasan-globals=0 -std=c23"
ENV CXXFLAGS="-g -fsanitize=hwaddress -shared-libsan -mllvm -hwasan-globals=0 -std=c++23"
# ENV CPPFLAGS="-g -fsanitize=hwaddress -shared-libsan -mllvm -hwasan-globals=0" | ?
ENV LDFLAGS="-fsanitize=hwaddress -shared-libsan"
ENV LDSHARED="/usr/bin/clang-20 -shared"
ENV RUSTFLAGS="-g -Zsanitizer=hwaddress -C linker=/usr/bin/clang-20 -C link-arg=-fuse-ld=/usr/bin/ld.lld-20 -C lto=no -Zexternal-clangrt -C target-feature=+tagged-globals"
ENV CARGO_BUILD_TARGET="aarch64-unknown-linux-gnu"
COPY config.toml /.cargo/
FROM env AS run
WORKDIR /src
COPY . .
```
Create `hello.py` (from examples) with content:
```python
from cryptography.fernet import Fernet
key = Fernet.generate_key()
f = Fernet(key)
message = b"A really secret message. Not for prying eyes."
private_data = f.encrypt(message)
public_data = f.decrypt(private_data)
print(f"Message: {message}")
print(f"Private data: {private_data}")
print(f"Public data: {public_data}")
```
### Build
```shell
$ docker build . -q
$ docker run --rm -it <image>
$ source /venv/bin/activate
$ cargo build
$ pip3 install setuptools
Collecting setuptools
Downloading setuptools-73.0.1-py3-none-any.whl.metadata (6.6 kB)
Downloading setuptools-73.0.1-py3-none-any.whl (2.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.3/2.3 MB 1.6 MB/s eta 0:00:00
Installing collected packages: setuptools
Successfully installed setuptools-73.0.1
$ pip3 install --no-binary ":all:" cffi --no-clean -vv
---------------- CUT ----------------
ld: error: undefined symbol: __hwasan_init
>>> referenced by _configtest.c
>>> _configtest.o:(hwasan.module_ctor)
clang-20: error: linker command failed with exit code 1 (use -v to see invocation)
Note: will not use '__sync_synchronize()' in the C code
***** The above error message can be safely ignored.
---------------- CUT ----------------
ld: error: undefined symbol: __hwasan_init
>>> referenced by _configtest.c
>>> _configtest.o:(hwasan.module_ctor)
clang-20: error: linker command failed with exit code 1 (use -v to see invocation)
Note: will not use '__sync_synchronize()' in the C code
***** The above error message can be safely ignored.
---------------- CUT ----------------
ld: error: undefined symbol: __hwasan_init
>>> referenced by _configtest.c
>>> _configtest.o:(hwasan.module_ctor)
clang-20: error: linker command failed with exit code 1 (use -v to see invocation)
Note: will not use '__sync_synchronize()' in the C code
***** The above error message can be safely ignored.
---------------- CUT ----------------
building '_cffi_backend' extension
creating build/temp.linux-aarch64-cpython-312
creating build/temp.linux-aarch64-cpython-312/src
creating build/temp.linux-aarch64-cpython-312/src/c
/usr/bin/clang-20 -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O2 -Wall -g -fsanitize=hwaddress -shared-libsan -mllvm -hwasan-globals=0 -std=c23 -fPIC -DFFI_BUILDING=1 -DUSE__THREAD -I/venv/include -I/usr/include/python3.12 -c src/c/_cffi_backend.c -o build/temp.linux-aarch64-cpython-312/src/c/_cffi_backend.o
src/c/_cffi_backend.c:4579:22: warning: 'Py_FileSystemDefaultEncoding' is deprecated [-Wdeprecated-declarations]
4579 | Py_FileSystemDefaultEncoding, &filename_or_null, &flags))
| ^
/usr/include/python3.12/fileobject.h:22:1: note: 'Py_FileSystemDefaultEncoding' has been explicitly marked deprecated here
22 | Py_DEPRECATED(3.12) PyAPI_DATA(const char *) Py_FileSystemDefaultEncoding;
| ^
/usr/include/python3.12/pyport.h:317:54: note: expanded from macro 'Py_DEPRECATED'
317 | #define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
| ^
In file included from src/c/_cffi_backend.c:8027:
In file included from src/c/cffi1_module.c:20:
src/c/call_python.c:211:5: warning: "no definition for read_barrier(), missing synchronization for multi-thread initialization in embedded mode" [-W#warnings]
211 | # warning "no definition for read_barrier(), missing synchronization for\
| ^
2 warnings generated.
/usr/bin/clang-20 -shared -fsanitize=hwaddress -shared-libsan -g -fsanitize=hwaddress -shared-libsan -mllvm -hwasan-globals=0 -std=c23 build/temp.linux-aarch64-cpython-312/src/c/_cffi_backend.o -L/usr/lib/aarch64-linux-gnu -lffi -o build/lib.linux-aarch64-cpython-312/_cffi_backend.cpython-312-aarch64-linux-gnu.so
clang-20: warning: argument unused during compilation: '-mllvm -hwasan-globals=0' [-Wunused-command-line-argument]
---------------- CUT ----------------
Created wheel for pycparser: filename=pycparser-2.22-py3-none-any.whl size=117552 sha256=d57055b6dddc795bb4eca6fc3754bb5ed521035680dd552d86560baed33ef091
Stored in directory: /root/.cache/pip/wheels/36/53/17/c0ae2e096d359a9a8faf47fd7ded8f4c878af41a3c66cb5199
Successfully built cffi pycparser
Installing collected packages: pycparser, cffi
---------------- CUT ----------------
Successfully installed cffi-1.17.0 pycparser-2.22
Removed build tracker: '/tmp/pip-build-tracker-62wvhedt'
```
Run with pip isolation (by default):
```shell
$ pip3 install -e . --no-binary ":all:" --no-clean -vv
Using pip 24.0 from /venv/lib/python3.12/site-packages/pip (python 3.12)
---------------- CUT ----------------
= note: ld.lld-20: error: undefined symbol: __hwasan_tls
>>> referenced by mod.rs:536 (/rustc/eff09483c67e6fc96c8098ba46dce476162754c5/library/core/src/ptr/mod.rs:536)
>>> /tmp/pip-install-85vkgny3/maturin_6b5877bea7064e2bbabad687f355ccbb/target/aarch64-unknown-linux-gnu/release/deps/maturin-38e63050942023b0.maturin.5af50ad9ae4ddb95-cgu.01.rcgu.o:(core::ptr::drop_in_place$LT$$LP$core..any..TypeId$C$alloc..boxed..Box$LT$dyn$u20$core..any..Any$u2b$core..marker..Sync$u2b$core..marker..Send$GT$$RP$$GT$::h0a3864e356d1b87e)
>>> referenced by function.rs:250 (/rustc/eff09483c67e6fc96c8098ba46dce476162754c5/library/core/src/ops/function.rs:250)
>>> /tmp/pip-install-85vkgny3/maturin_6b5877bea7064e2bbabad687f355ccbb/target/aarch64-unknown-linux-gnu/release/deps/maturin-38e63050942023b0.maturin.5af50ad9ae4ddb95-cgu.00.rcgu.o:(core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::haffe049bd93073b5)
>>> referenced by mod.rs:536 (/rustc/eff09483c67e6fc96c8098ba46dce476162754c5/library/core/src/ptr/mod.rs:536)
>>> /tmp/pip-install-85vkgny3/maturin_6b5877bea7064e2bbabad687f355ccbb/target/aarch64-unknown-linux-gnu/release/deps/maturin-38e63050942023b0.maturin.5af50ad9ae4ddb95-cgu.01.rcgu.o:(core::ptr::drop_in_place$LT$$LP$core..any..TypeId$C$alloc..boxed..Box$LT$dyn$u20$core..any..Any$u2b$core..marker..Sync$u2b$core..marker..Send$GT$$RP$$GT$::h0a3864e356d1b87e)
>>> referenced 39817 more times
ld.lld-20: error: undefined symbol: __hwasan_loadN
>>> referenced by function.rs:250 (/rustc/eff09483c67e6fc96c8098ba46dce476162754c5/library/core/src/ops/function.rs:250)
>>> /tmp/pip-install-85vkgny3/maturin_6b5877bea7064e2bbabad687f355ccbb/target/aarch64-unknown-linux-gnu/release/deps/maturin-38e63050942023b0.maturin.5af50ad9ae4ddb95-cgu.00.rcgu.o:(core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::haffe049bd93073b5)
>>> referenced by intrinsics.rs:3325 (/rustc/eff09483c67e6fc96c8098ba46dce476162754c5/library/core/src/intrinsics.rs:3325)
>>> /tmp/pip-install-85vkgny3/maturin_6b5877bea7064e2bbabad687f355ccbb/target/aarch64-unknown-linux-gnu/release/deps/maturin-38e63050942023b0.maturin.5af50ad9ae4ddb95-cgu.01.rcgu.o:(_$LT$hashbrown..raw..RawTable$LT$T$C$A$GT$$u20$as$u20$core..ops..drop..Drop$GT$::drop::h059dda3c88ea07a7)
>>> referenced by intrinsics.rs:3325 (/rustc/eff09483c67e6fc96c8098ba46dce476162754c5/library/core/src/intrinsics.rs:3325)
>>> /tmp/pip-install-85vkgny3/maturin_6b5877bea7064e2bbabad687f355ccbb/target/aarch64-unknown-linux-gnu/release/deps/maturin-38e63050942023b0.maturin.5af50ad9ae4ddb95-cgu.01.rcgu.o:(_$LT$hashbrown..raw..RawTable$LT$T$C$A$GT$$u20$as$u20$core..ops..drop..Drop$GT$::drop::h059dda3c88ea07a7)
>>> referenced 3842 more times
---------------- CUT ----------------
error: `cargo build --manifest-path Cargo.toml --message-format=json-render-diagnostics --target aarch64-unknown-linux-gnu --release -v --no-default-features --locked` failed with code 101
```
We can see that the environment variables and `/.cargo/config.toml` are ignored, and a lot of HWASan errors appear. I don't know how to fix this correctly (this is problem #1; possibly a question for the Python community).
Run without pip isolation (though this is unrealistic for a real-world application):
```shell
$ pip3 install -e . --no-binary ":all:" --no-clean -vv --no-build-isolation
---------------- CUT ----------------
Successfully installed cryptography-43.0.0
Removed build tracker: '/tmp/pip-build-tracker-bvpypdpj'
```
### Run `hello.py`
```shell
$ python3 hello.py
Traceback (most recent call last):
File "/src/hello.py", line 1, in <module>
from cryptography.fernet import Fernet
File "/src/src/cryptography/fernet.py", line 14, in <module>
from cryptography.exceptions import InvalidSignature
File "/src/src/cryptography/exceptions.py", line 9, in <module>
from cryptography.hazmat.bindings._rust import exceptions as rust_exceptions
ImportError: /src/src/cryptography/hazmat/bindings/_rust.abi3.so: undefined symbol: __hwasan_tls
```
We can see that the environment variables and `/.cargo/config.toml` are not ignored, but `__hwasan_tls` is still undefined (this is problem #2).
## Additional information
```shell
$ rustc --version --verbose
rustc 1.82.0-nightly (eff09483c 2024-08-22)
binary: rustc
commit-hash: eff09483c67e6fc96c8098ba46dce476162754c5
commit-date: 2024-08-22
host: aarch64-unknown-linux-gnu
release: 1.82.0-nightly
LLVM version: 19.1.0
$ clang --version
Ubuntu clang version 20.0.0 (++20240821083450+84fa7b438e1f-1~exp1~20240821203619.364)
Target: aarch64-unknown-linux-gnu
Thread model: posix
InstalledDir: /usr/lib/llvm-20/bin
```
A few things that helped me in other cases:
- disable link-time optimization (otherwise linking fails with `R_AARCH64_ADR_PREL_PG_HI21` out of range)
- use only `lld` as the linker (otherwise linking fails with `R_AARCH64_ADR_PREL_PG_HI21` out of range)
- use the shared runtime (otherwise ASan conflicts or linking fails)
- use the external clang runtime (otherwise ASan conflicts)
- do not use gcc (it sometimes times out)
- pass `target-feature=+tagged-globals` (otherwise `R_AARCH64_ADR_PREL_PG_HI21` out of range)
- pass `-mllvm -hwasan-globals=0` (otherwise false-positive crashes)
- ~~use a global cargo config with environment variables (in the env-only case, args are not passed to cargo/rustc)~~ (pip isolation)
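Collected together, the flags above might look something like the following `.cargo/config.toml`. This is only a sketch under my assumptions: `-Zsanitizer` requires nightly rustc, and the exact flag spellings may need adjusting for a given toolchain.

```toml
[target.aarch64-unknown-linux-gnu]
linker = "clang"
rustflags = [
    "-Zsanitizer=hwaddress",             # HWASan (nightly-only flag)
    "-Clink-arg=-fuse-ld=lld",           # force lld as the linker
    "-Clto=off",                         # disable link-time optimization
    "-Ctarget-feature=+tagged-globals",
    "-Cllvm-args=-hwasan-globals=0",     # disable HWASan global tagging
]
```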
| A-sanitizers,C-bug,O-AArch64,PG-exploit-mitigations | low | Critical |
2,484,002,953 | material-ui | [system] Missing quotes in `InitColorSchemeScript.tsx` | ### Steps to reproduce
Link to live example: https://stackblitz.com/edit/github-xk9wmf?file=src%2FApp.tsx
```jsx
import { FC } from 'react';
import InitColorSchemeScript from '@mui/material/InitColorSchemeScript';

import './style.css';

export const App: FC<{ name: string }> = ({ name }) => {
  // This injects a `<script>` tag into the DOM. Right-click and inspect the
  // site preview on the right, and look for the `<script>` tag under `<div id="app">`.
  // You should see that on the second-last line of the script, there is a bare `%s` with
  // no quotes around it. This throws a `SyntaxError` in my browser.
  return <InitColorSchemeScript attribute="[data-color-scheme=%s]" />;
};
```
Steps:
1. Right-click and inspect the site preview on the right, and look for the `<script>` tag under `<div id="app">`. You should see that on the second-last line of the script, there is a bare `%s` with no quotes around it. This throws a `SyntaxError` in my browser.
### Current behavior
_No response_
### Expected behavior
_No response_
### Context
https://github.com/mui/material-ui/blob/9f4b846be96ee18733b6bd77eba2f54b1ceb9b04/packages/mui-system/src/InitColorSchemeScript/InitColorSchemeScript.tsx#L81
needs to have quotes around `${value}`.
### Your environment
```
System:
  OS: macOS 14.6.1
Binaries:
  Node: 18.19.1 - ~/.asdf/installs/nodejs/18.19.1/bin/node
  npm: 10.2.4 - ~/.asdf/plugins/nodejs/shims/npm
  pnpm: 9.7.1 - ~/.asdf/installs/nodejs/18.19.1/bin/pnpm
Browsers:
  Chrome: 127.0.6533.120
  Edge: Not Found
  Safari: 17.6
npmPackages:
  @emotion/react: 11.13.3 => 11.13.3
  @emotion/styled: 11.13.0 => 11.13.0
  @mui/icons-material: 6.0.0-rc.0 => 6.0.0-rc.0
  @mui/lab: 6.0.0-beta.7 => 6.0.0-beta.7
  @mui/material: 6.0.0-rc.0 => 6.0.0-rc.0
  @mui/system: 6.0.0-rc.0 => 6.0.0-rc.0
  @mui/x-data-grid: 7.14.0 => 7.14.0
  @mui/x-date-pickers: 7.14.0 => 7.14.0
  @mui/x-tree-view: 7.14.0 => 7.14.0
  @types/react: 18.3.4 => 18.3.4
  react: 18.3.1 => 18.3.1
  react-dom: 18.3.1 => 18.3.1
  typescript: ^5.2.2 => 5.3.2
```
**Search keywords**: css variables, InitColorSchemeScript, %s | docs,package: system,v6.x | low | Critical |
2,484,058,836 | yt-dlp | [9gag.com] Unable to download JSON metadata: HTTP Error 403: Forbidden | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Russia
### Provide a description that is worded well enough to be understood
Trying to download videos from 9gag.com results in a 403 error.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-f', 'bestvideo[height <=? 1080][fps >? 30][ext =? mp4]+bestaudio[ext =? m4a]/bestvideo[height <=? 1080]+bestaudio/best[height <= 1080]/best', '--sub-lang', 'en-US,en-GB,en,ru-RU,ru', '--write-sub', '--sub-format', 'srt/best', '--convert-subtitles', 'srt', '--write-auto-sub', '--embed-subs', '-o', '[{upld} %(upload_date)s] %(title)s [{fmt_id} %(format_id)s, {res} %(resolution)s, {fps} %(fps)d, {extrc_key} %(extractor_key)s, {id} %(id)s].%(ext)s', '-vU', 'https://9gag.com/gag/amoY6Vy']
[debug] Encodings: locale cp1251, fs utf-8, pref cp1251, out cp1251 (No VT), error cp1251 (No VT), screen cp1251 (No VT)
[debug] yt-dlp version master@2024.08.21.071743 from yt-dlp/yt-dlp-master-builds [6f9e65374] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg N-116617-g9e3b5b8a26-20240813 (setts), ffprobe N-116617-g9e3b5b8a26-20240813
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-13.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1830 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-master-builds/releases/latest
Latest version: master@2024.08.21.071743 from yt-dlp/yt-dlp-master-builds
yt-dlp is up to date (master@2024.08.21.071743 from yt-dlp/yt-dlp-master-builds)
[9gag] Extracting URL: https://9gag.com/gag/amoY6Vy
[9gag] amoY6Vy: Downloading JSON metadata
ERROR: [9gag] amoY6Vy: Unable to download JSON metadata: HTTP Error 403: Forbidden (caused by <HTTPError 403: Forbidden>)
File "yt_dlp\extractor\common.py", line 740, in extract
File "yt_dlp\extractor\ninegag.py", line 61, in _real_extract
File "yt_dlp\extractor\common.py", line 1139, in download_content
File "yt_dlp\extractor\common.py", line 1099, in download_handle
File "yt_dlp\extractor\common.py", line 960, in _download_webpage_handle
File "yt_dlp\extractor\common.py", line 909, in _request_webpage
File "yt_dlp\extractor\common.py", line 896, in _request_webpage
File "yt_dlp\YoutubeDL.py", line 4165, in urlopen
File "yt_dlp\networking\common.py", line 117, in send
File "yt_dlp\networking\_helper.py", line 208, in wrapper
File "yt_dlp\networking\common.py", line 340, in send
File "yt_dlp\networking\_requests.py", line 365, in _send
yt_dlp.networking.exceptions.HTTPError: HTTP Error 403: Forbidden
```
| site-bug | low | Critical |
2,484,067,809 | node | Support statement coverage | I think the Node.js coverage reporter should include support for reporting statement coverage.
My idea is to parse the source code into an Abstract Syntax Tree (AST) using `acorn-walk`, and then use the `Statement` callback to extract statements. These could then be converted to `CoverageStatement`s (similarly to how the `getLines` function converts lines to `CoverageLines`).
These statements could then be mapped to ranges in the same way that lines are handled in `mapRangeToLines`.
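The proposal targets JavaScript via `acorn-walk`, but the shape of the idea can be sketched with Python's standard `ast` module, purely as an analogy (none of these names exist in Node's reporter): walk the tree, collect one line range per statement node, and let the reporter intersect those ranges with the covered ranges.

```python
import ast

def statement_ranges(source: str) -> list[tuple[int, int]]:
    """Collect a (start_line, end_line) range for every statement node."""
    tree = ast.parse(source)
    return sorted(
        (node.lineno, node.end_lineno)
        for node in ast.walk(tree)
        if isinstance(node, ast.stmt)  # statements only, not expressions
    )

print(statement_ranges("x = 1\nif x:\n    y = 2\n"))  # → [(1, 1), (2, 3), (3, 3)]
```

An `acorn-walk` version would do the same with a `Statement` visitor collecting `node.start`/`node.end` offsets, which a `mapRangeToLines`-style helper could then translate.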
I saw an (extremely complicated) way of doing this in <https://github.com/cenfun/monocart-coverage-reports/>, so it appears to be possible. | feature request,coverage,test_runner | low | Major |
2,484,071,353 | pytorch | Torch compile stochastically fails with FileNotFoundError | ### 🐛 Describe the bug
Torch compile stochastically fails during multinode training with FileNotFoundError in torch._dynamo. Full stack trace below. This is a difficult bug to provide a minimal reproduction for - it randomly occurs in torch.compile (a quite slow operation) and shows up often only in large-scale distributed multinode training, while being very rare in single-node training. The use of concurrent.futures in conjunction with the FileNotFoundError makes me strongly suspect a race condition.
[rank60]: FileNotFoundError: [Errno 2] No such file or directory: '/tmp/torchinductor_ubuntu/triton/4/5eba80077c5d5c6def3f812b8cccbc7d/triton_.cubin.tmp.pid_93828_100098'
```
torchrun --nproc_per_node 8 --nnodes 8 --rdzv-backend c10d --rdzv-endpoint foundry-1:29500 exp/codebook_collapse_trace.py --stage 1 --latent-type 'kl' --load-from quart_maze_vacation_120000 --step 158000
W0824 00:04:25.161000 140224365408704 torch/distributed/run.py:757]
W0824 00:04:25.161000 140224365408704 torch/distributed/run.py:757] *****************************************
W0824 00:04:25.161000 140224365408704 torch/distributed/run.py:757] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0824 00:04:25.161000 140224365408704 torch/distributed/run.py:757] *****************************************
All clear!
Rank 56 of 64.
Rank 63 of 64.
Rank 57 of 64.
Rank 60 of 64.
Rank 59 of 64.
Rank 62 of 64.
Rank 58 of 64.
Rank 61 of 64.
[rank60]: Traceback (most recent call last):
[rank60]: File "/home/ubuntu/si/exp/codebook_collapse_trace.py", line 435, in <module>
[rank60]: trainer_config(default(tokenizer, None)).train()
[rank60]: File "/home/ubuntu/si/si/contact/hertz.py", line 588, in train
[rank60]: self.do_step(data)
[rank60]: File "/home/ubuntu/si/si/contact/hertz.py", line 575, in do_step
[rank60]: self.eval_step()
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
[rank60]: return func(*args, **kwargs)
[rank60]: File "/home/ubuntu/si/si/contact/hertz.py", line 395, in eval_step
[rank60]: eval_recons, *_ = self.vae(eval_audio)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank60]: return self._call_impl(*args, **kwargs)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank60]: return forward_call(*args, **kwargs)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1593, in forward
[rank60]: else self._run_ddp_forward(*inputs, **kwargs)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1411, in _run_ddp_forward
[rank60]: return self.module(*inputs, **kwargs) # type: ignore[index]
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank60]: return self._call_impl(*args, **kwargs)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank60]: return forward_call(*args, **kwargs)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 451, in _fn
[rank60]: return fn(*args, **kwargs)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
[rank60]: return self._call_impl(*args, **kwargs)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
[rank60]: return forward_call(*args, **kwargs)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 917, in catch_errors
[rank60]: return hijacked_callback(frame, cache_entry, hooks, frame_state)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 786, in _convert_frame
[rank60]: result = inner_convert(
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 400, in _convert_frame_assert
[rank60]: return _compile(
[rank60]: File "/usr/lib/python3.10/contextlib.py", line 79, in inner
[rank60]: return func(*args, **kwds)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 676, in _compile
[rank60]: guarded_code = compile_inner(code, one_graph, hooks, transform)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 262, in time_wrapper
[rank60]: r = func(*args, **kwargs)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 535, in compile_inner
[rank60]: out_code = transform_code_object(code, transform)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1036, in transform_code_object
[rank60]: transformations(instructions, code_options)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 165, in _fn
[rank60]: return fn(*args, **kwargs)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 500, in transform
[rank60]: tracer.run()
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2149, in run
[rank60]: super().run()
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 810, in run
[rank60]: and self.step()
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 773, in step
[rank60]: getattr(self, inst.opname)(inst)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2268, in RETURN_VALUE
[rank60]: self.output.compile_subgraph(
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1001, in compile_subgraph
[rank60]: self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
[rank60]: File "/usr/lib/python3.10/contextlib.py", line 79, in inner
[rank60]: return func(*args, **kwds)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1178, in compile_and_call_fx_graph
[rank60]: compiled_fn = self.call_user_compiler(gm)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 262, in time_wrapper
[rank60]: r = func(*args, **kwargs)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1251, in call_user_compiler
[rank60]: raise BackendCompilerFailed(self.compiler_fn, e).with_traceback(
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 1232, in call_user_compiler
[rank60]: compiled_fn = compiler_fn(gm, self.example_inputs())
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/backends/distributed.py", line 606, in compile_fn
[rank60]: submod_compiler.run(*example_inputs)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/fx/interpreter.py", line 145, in run
[rank60]: self.env[node] = self.run_node(node)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/backends/distributed.py", line 348, in run_node
[rank60]: compiled_submod_real = self.compile_submod(real_mod, new_args, kwargs)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/backends/distributed.py", line 263, in compile_submod
[rank60]: self.compiler(input_mod, args),
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/repro/after_dynamo.py", line 117, in debug_wrapper
[rank60]: compiled_gm = compiler_fn(gm, example_inputs)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/__init__.py", line 1731, in __call__
[rank60]: return compile_fx(model_, inputs_, config_patches=self.config)
[rank60]: File "/usr/lib/python3.10/contextlib.py", line 79, in inner
[rank60]: return func(*args, **kwds)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1330, in compile_fx
[rank60]: return aot_autograd(
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/backends/common.py", line 58, in compiler_fn
[rank60]: cg = aot_module_simplified(gm, example_inputs, **kwargs)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 903, in aot_module_simplified
[rank60]: compiled_fn = create_aot_dispatcher_function(
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 262, in time_wrapper
[rank60]: r = func(*args, **kwargs)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_functorch/aot_autograd.py", line 628, in create_aot_dispatcher_function
[rank60]: compiled_fn = compiler_fn(flat_fn, fake_flat_args, aot_config, fw_metadata=fw_metadata)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 443, in aot_wrapper_dedupe
[rank60]: return compiler_fn(flat_fn, leaf_flat_args, aot_config, fw_metadata=fw_metadata)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 648, in aot_wrapper_synthetic_base
[rank60]: return compiler_fn(flat_fn, flat_args, aot_config, fw_metadata=fw_metadata)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 119, in aot_dispatch_base
[rank60]: compiled_fw = compiler(fw_module, updated_flat_args)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 262, in time_wrapper
[rank60]: r = func(*args, **kwargs)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1257, in fw_compiler_base
[rank60]: return inner_compile(
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 83, in debug_wrapper
[rank60]: inner_compiled_fn = compiler_fn(gm, example_inputs)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_inductor/debug.py", line 304, in inner
[rank60]: return fn(*args, **kwargs)
[rank60]: File "/usr/lib/python3.10/contextlib.py", line 79, in inner
[rank60]: return func(*args, **kwds)
[rank60]: File "/usr/lib/python3.10/contextlib.py", line 79, in inner
[rank60]: return func(*args, **kwds)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 262, in time_wrapper
[rank60]: r = func(*args, **kwargs)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 438, in compile_fx_inner
[rank60]: compiled_graph = fx_codegen_and_compile(
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 714, in fx_codegen_and_compile
[rank60]: compiled_fn = graph.compile_to_fn()
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1307, in compile_to_fn
[rank60]: return self.compile_to_module().call
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 262, in time_wrapper
[rank60]: r = func(*args, **kwargs)
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1254, in compile_to_module
[rank60]: mod = PyCodeCache.load_by_key_path(
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 2160, in load_by_key_path
[rank60]: exec(code, mod.__dict__, mod.__dict__)
[rank60]: File "/tmp/torchinductor_ubuntu/32/c32zlm4gweniydec2oxyilmjnaayry5azbluy5dcokzx735gtcre.py", line 1806, in <module>
[rank60]: async_compile.wait(globals())
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 2715, in wait
[rank60]: scope[key] = result.result()
[rank60]: File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/_inductor/codecache.py", line 2522, in result
[rank60]: self.future.result()
[rank60]: File "/usr/lib/python3.10/concurrent/futures/_base.py", line 451, in result
[rank60]: return self.__get_result()
[rank60]: File "/usr/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
[rank60]: raise self._exception
[rank60]: torch._dynamo.exc.BackendCompilerFailed: backend='compile_fn' raised:
[rank60]: FileNotFoundError: [Errno 2] No such file or directory: '/tmp/torchinductor_ubuntu/triton/4/5eba80077c5d5c6def3f812b8cccbc7d/triton_.cubin.tmp.pid_93828_100098'
[rank60]: Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
[rank60]: You can suppress this exception and fall back to eager by setting:
[rank60]: import torch._dynamo
[rank60]: torch._dynamo.config.suppress_errors = True
W0824 00:16:38.381000 140224365408704 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 92391 closing signal SIGTERM
W0824 00:16:38.398000 140224365408704 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 92392 closing signal SIGTERM
W0824 00:16:38.405000 140224365408704 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 92393 closing signal SIGTERM
W0824 00:16:38.423000 140224365408704 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 92394 closing signal SIGTERM
W0824 00:16:38.460000 140224365408704 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 92396 closing signal SIGTERM
W0824 00:16:38.480000 140224365408704 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 92397 closing signal SIGTERM
W0824 00:16:38.500000 140224365408704 torch/distributed/elastic/multiprocessing/api.py:851] Sending process 92398 closing signal SIGTERM
E0824 00:16:44.556000 140224365408704 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 4 (pid: 92395) of binary: /home/ubuntu/si/.venv/bin/python3
Traceback (most recent call last):
File "/home/ubuntu/si/.venv/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
run(args)
File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
elastic_launch(
File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/ubuntu/si/.venv/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
exp/codebook_collapse_trace.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-08-24_00:16:38
host : foundry-8
rank : 60 (local_rank: 4)
exitcode : 1 (pid: 92395)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
### Versions
Collecting environment information...
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Mar 22 2024, 16:50:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-1023-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel Xeon Processor (SapphireRapids)
CPU family: 6
Model: 143
Thread(s) per core: 1
Core(s) per socket: 192
Socket(s): 1
Stepping: 4
BogoMIPS: 4200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk avx512_fp16 arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 6 MiB (192 instances)
L1i cache: 6 MiB (192 instances)
L2 cache: 768 MiB (192 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.3.1
[pip3] torchaudio==2.3.1
[pip3] torchvision==0.18.1
[pip3] triton==2.3.1
[conda] Could not collect
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,oncall: pt2,module: inductor | low | Critical |
2,484,079,962 | pytorch | FlopCounterMode doesn't support HOP | ### 🐛 Describe the bug
I'm trying to add a micro-benchmark for flex attention, which is implemented as a HOP (higher-order operator). I'm using `torch.utils.flop_counter.FlopCounterMode`, but it doesn't support capturing FLOPs for HOPs.
```
import torch
from torch.nn.attention.flex_attention import create_block_mask, flex_attention
from torch.utils.flop_counter import FlopCounterMode


def run_flex_decoding_causal(device: str = "cuda"):
    dtype_flops_utilization_map = {
        torch.bfloat16: "0.8",
    }
    B = 4
    H = 16
    D = 64
    q_len = [32]
    kv_len = [1024, 8192]
    results = []

    def causal_mask(batch, head, token_q, token_kv):
        return token_q >= token_kv

    for dtype, expected_flops_utilization in dtype_flops_utilization_map.items():
        flops_utilization = 0
        for seqlen_q in q_len:
            for seqlen_kv in kv_len:
                query = torch.randn((B, H, seqlen_q, D), dtype=dtype, device=device, requires_grad=True)
                key = torch.randn((B, H, seqlen_kv, D), dtype=dtype, device=device, requires_grad=True)
                value = torch.randn((B, H, seqlen_kv, D), dtype=dtype, device=device, requires_grad=True)
                block_mask = create_block_mask(causal_mask, 1, 1, seqlen_q, seqlen_kv)

                with FlopCounterMode(display=False) as mode:
                    flex_attention(query, key, value, block_mask=block_mask)
                flops = mode.get_total_flops()

                compiled_fn = torch.compile(flex_attention, dynamic=False)
                for _ in range(WARMUP_ITER):
                    compiled_fn(query, key, value, block_mask=block_mask)

                us_per_iter = benchmarker.benchmark_gpu(lambda: compiled_fn(query, key, value, block_mask=block_mask)) * 1000
                flops_utilization += us_per_iter * flops / 1e9 / A100_40G_BF16_TFLOPS

        flops_utilization = flops_utilization / len(input_shapes)
        dtype_str = str(dtype).replace("torch.", "")
        results.append(
            Experiment(
                "flex_decoding_causal",
                "flops_utilization",
                expected_flops_utilization,
                f"{flops_utilization:.02f}",
                dtype_str,
                device,
            )
        )

    return results
```
Error
```
Traceback (most recent call last):
File "/data/users/ybliang/pytorch/benchmarks/gpt_fast/benchmark.py", line 328, in <module>
main(output_file=args.output)
File "/data/users/ybliang/pytorch/benchmarks/gpt_fast/benchmark.py", line 309, in main
lst = func()
File "/data/users/ybliang/pytorch/benchmarks/gpt_fast/benchmark.py", line 117, in run_flex_decoding_causal
flex_attention(query, key, value, block_mask=block_mask)
File "/home/ybliang/local/pytorch/torch/nn/attention/flex_attention.py", line 987, in flex_attention
out, lse = torch.compile(
File "/home/ybliang/local/pytorch/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
File "/home/ybliang/local/pytorch/torch/nn/attention/flex_attention.py", line 982, in _flex_attention_hop_wrapper
return flex_attention_hop(*args, **kwargs)
File "/home/ybliang/local/pytorch/torch/_higher_order_ops/flex_attention.py", line 62, in __call__
return super().__call__(
File "/home/ybliang/local/pytorch/torch/_ops.py", line 431, in __call__
return wrapper()
File "/home/ybliang/local/pytorch/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
File "/home/ybliang/local/pytorch/torch/_ops.py", line 427, in wrapper
return self.dispatch(
File "/home/ybliang/local/pytorch/torch/_ops.py", line 411, in dispatch
return kernel(*args, **kwargs)
File "/home/ybliang/local/pytorch/torch/_higher_order_ops/flex_attention.py", line 654, in flex_attention_autograd
out, logsumexp = FlexAttentionAutogradOp.apply(
File "/home/ybliang/local/pytorch/torch/autograd/function.py", line 575, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/home/ybliang/local/pytorch/torch/_higher_order_ops/flex_attention.py", line 542, in forward
out, logsumexp = flex_attention(
File "/home/ybliang/local/pytorch/torch/_higher_order_ops/flex_attention.py", line 62, in __call__
return super().__call__(
File "/home/ybliang/local/pytorch/torch/_ops.py", line 431, in __call__
return wrapper()
File "/home/ybliang/local/pytorch/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
File "/home/ybliang/local/pytorch/torch/_ops.py", line 422, in wrapper
return torch.overrides.handle_torch_function(
File "/home/ybliang/local/pytorch/torch/overrides.py", line 1717, in handle_torch_function
result = mode.__torch_function__(public_api, types, args, kwargs)
File "/home/ybliang/local/pytorch/torch/_higher_order_ops/flex_attention.py", line 38, in __torch_function__
return func(*args, **(kwargs or {}))
File "/home/ybliang/local/pytorch/torch/_higher_order_ops/flex_attention.py", line 62, in __call__
return super().__call__(
File "/home/ybliang/local/pytorch/torch/_ops.py", line 431, in __call__
return wrapper()
File "/home/ybliang/local/pytorch/torch/_dynamo/eval_frame.py", line 632, in _fn
return fn(*args, **kwargs)
File "/home/ybliang/local/pytorch/torch/_ops.py", line 427, in wrapper
return self.dispatch(
File "/home/ybliang/local/pytorch/torch/_ops.py", line 333, in dispatch
raise NotImplementedError(
NotImplementedError: There was no rule registered for HOP flex_attention and mode <torch.utils.flop_counter.FlopCounterMode object at 0x7f00f2d76560>. We recommend filing an issue.
```
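The error comes from the mode's dispatch finding no registered flop rule for the `flex_attention` HOP. The general shape of the fix — a flop-counting rule registered per op — can be sketched with a plain-Python registry (hypothetical names, not `FlopCounterMode`'s actual internals):

```python
# Hypothetical sketch of rule-based flop dispatch; not PyTorch's real API.
flop_rules = {}

def register_flop_rule(op, rule):
    flop_rules[op] = rule

def count_flops(op, *args):
    # Mirrors the failure mode above: no rule registered -> NotImplementedError.
    if op not in flop_rules:
        raise NotImplementedError(f"There was no rule registered for {op}")
    return flop_rules[op](*args)

# A matmul of (m x k) @ (k x n) does roughly 2*m*k*n flops.
register_flop_rule("mm", lambda m, k, n: 2 * m * k * n)

print(count_flops("mm", 2, 3, 4))  # 48
```

Supporting the HOP would amount to registering an analogous rule for attention's two matmuls; until then, unregistered HOPs raise exactly the `NotImplementedError` seen above.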
### Versions
N/A
cc @ezyang @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh | triaged,oncall: pt2,module: higher order operators,module: pt2-dispatcher,module: flop counter | low | Critical |
2,484,084,260 | godot | Input.joy_connection_changed not emitted with DS5/PS5 controller | ### Tested versions
4.3.stable
### System information
Windows 11
### Issue description
The `Input.joy_connection_changed` signal is only emitted if the PS5 controller is connected before running the project. While the project runs, no connect/disconnect signals are emitted. I don't see this happening with an Xbox 360 controller. I'm connecting the controller via USB (I already tried different USB ports and cables).
The PS5 controller _does_ correctly send input once connected. It's just the signal that doesn't behave as expected.
### Steps to reproduce
Add code similar to
```
func _ready():
	Input.joy_connection_changed.connect(_on_changed)

func _on_changed(device, connected):
	var device_name = Input.get_joy_name(device)
	print("Gamepad %d %s (%s)" % [device, "connected" if connected else "disconnected", device_name])
```
into any node. Run the project. Plug and unplug DS5 controller and see no connect/disconnect messages.
### Minimal reproduction project (MRP)
Use the joypads demo project from https://github.com/godotengine/godot-demo-projects/tree/master/misc/joypads.
The Output window displays connect/disconnect messages | needs testing,topic:input | low | Minor |
2,484,086,692 | go | x/mod/modfile: AddNewRequire doesn't put direct dependencies in the first block | ### Go version
go version go1.23.0 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/steveh/.cache/go-build'
GOENV='/home/steveh/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/steveh/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/steveh/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/local/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/local/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.0'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/steveh/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/home/steveh/code/github.com/rocketsciencegg/congestion-control/tools/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build155249492=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
Use [AddNewRequire](https://pkg.go.dev/golang.org/x/mod@v0.20.0/modfile#File.AddNewRequire) to add new requires.
```go
package main

import (
	"testing"

	"github.com/stretchr/testify/require"
	"golang.org/x/mod/modfile"
)

var testFile = `module test

go 1.23.0

require (
	github.com/test/test1 v0.1.0
	github.com/test/test2 v0.1.0
)

require (
	github.com/test/test3 v0.1.0 // indirect
)
`

var expected = `module test

go 1.23.0

require (
	github.com/foo/direct1 v0.1.1
	github.com/foo/direct2 v0.1.2
	github.com/test/test1 v0.1.0
	github.com/test/test2 v0.1.0
)

require (
	github.com/foo/indirect1 v0.2.1 // indirect
	github.com/foo/indirect2 v0.2.2 // indirect
	github.com/test/test3 v0.1.0 // indirect
)
`

func TestAddRequire(t *testing.T) {
	file, err := modfile.Parse("go.mod", []byte(testFile), nil)
	require.NoError(t, err)

	file.AddNewRequire("github.com/foo/indirect2", "v0.2.2", true)
	file.AddNewRequire("github.com/foo/direct2", "v0.1.2", false)
	file.AddNewRequire("github.com/foo/direct1", "v0.1.1", false)
	file.AddNewRequire("github.com/foo/indirect1", "v0.2.1", true)
	file.Cleanup()
	file.SortBlocks()

	data, err := file.Format()
	require.NoError(t, err)
	require.Equal(t, expected, string(data))
}
```
### What did you see happen?

When using `AddNewRequire`, requires are added to the last block. This is the documented behaviour, but other Go tools such as `go mod tidy` maintain two blocks (direct dependencies first, then indirect), so this should use the same approach.

### What did you expect to see?

Direct requires should be added to the first require block, and indirect requires should be added to the second block.
It seems possible to use `SetRequireSeparateIndirect` to replicate the desired behaviour, but that was far from obvious.

If nothing else, a reference in the `AddNewRequire` and `AddRequire` docs to `SetRequireSeparateIndirect` would help users find the right functionality, but ideally `AddNewRequire` and `AddRequire` should behave as expected.

If a user isn't aware of the two-block rule, then using this will result in a file that `go mod tidy` handles badly, in some cases resulting in three blocks instead of two. | NeedsDecision,modules | medium | Critical |
2,484,097,851 | deno | Segmentation fault (core dumped) error at startup on v1.46+ | Version: Deno v1.46.1
I'm unable to import any of the files from my project starting with v1.46. Everything worked fine in v1.45.5.
To demonstrate, I've attached a quick screen recording below where I first attempt to start up the REPL and import a function named `alias`. It fails with a segfault error in v1.46.1. I then downgrade to v1.45.5 and attempt the same exact thing, and it imports it with no issues at all.
https://github.com/user-attachments/assets/fdf1c3d5-4458-4346-93cb-bdd74f398525
| bug,swc | low | Critical |
2,484,106,122 | node | node 22.x fails to build with pointer compression | ### Version
_No response_
### Platform
```text
Linux 824a726bf62a 6.10.5 #1-NixOS SMP PREEMPT_DYNAMIC Wed Aug 14 13:34:38 UTC 2024 x86_64 GNU/Linux
```
### Subsystem
_No response_
### What steps will reproduce the bug?
Checkout the `v22.x` branch, configure with `--experimental-enable-pointer-compression` and build it.
### How often does it reproduce? Is there a required condition?
The build fails consistently.
### What is the expected behavior? Why is that the expected behavior?
The build should succeed.
### What do you see instead?
```
FAILED: gen/node_snapshot.cc
cd ../../; /workspace/node/out/Release/node_mksnapshot /workspace/node/out/Release/gen/node_snapshot.cc
[4187/4192] ccache c++ -MMD -MF obj/test/cctest/cctest.node_test_fixture.o.d -D_GLIBCXX_USE_CXX11_ABI=1 -DNODE_OPENSSL_CONF_NAME=nodejs_conf -DNODE_OPENSSL_HAS_QUIC -DICU_NO_USER_DATA_OVERRIDE -DV8_COMPRESS_POINTERS -DV8_COMPRESS_POINTERS_IN_ISOLATE_CAGE -DV8_31BIT_SMIS_ON_64BIT_ARCH -DV8_ENABLE_SANDBOX -D__STDC_FORMAT_MACROS -DOPENSSL_NO_PINSHARED -DOPENSSL_THREADS '-DNODE_ARCH="x64"' '-DNODE_PLATFORM="linux"' -DNODE_WANT_INTERNALS=1 -DHAVE_OPENSSL=1 -DHAVE_INSPECTOR=1 -D__POSIX__ -DNODE_USE_V8_PLATFORM=1 -DNODE_HAVE_I18N_SUPPORT=1 -DNODE_BUNDLED_ZLIB -DOPENSSL_API_COMPAT=0x10100000L -DGTEST_HAS_POSIX_RE=0 -DGTEST_LANG_CXX11=1 -DUNIT_TEST -DUCONFIG_NO_SERVICE=1 -DU_ENABLE_DYLOAD=0 -DU_STATIC_IMPLEMENTATION=1 -DU_HAVE_STD_STRING=1 -DUCONFIG_NO_BREAK_ITERATION=0 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -D_POSIX_C_SOURCE=200112 -DNGHTTP2_STATICLIB -DNDEBUG -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_BUILDING_OPENSSL -DAES_ASM -DBSAES_ASM -DCMLL_ASM -DECP_NISTZ256_ASM -DGHASH_ASM -DKECCAK1600_ASM -DMD5_ASM -DOPENSSL_BN_ASM_GF2m -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DPADLOCK_ASM -DPOLY1305_ASM -DRC4_ASM -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DVPAES_ASM -DWHIRLPOOL_ASM -DX25519_ASM -DOPENSSL_PIC -DNGTCP2_STATICLIB -DNGHTTP3_STATICLIB -I../../src -I../../tools/msvs/genfiles -I../../deps/v8/include -I../../deps/cares/include -I../../deps/uv/include -I../../deps/sqlite -I../../test/cctest -I../../deps/googletest/include -I../../deps/histogram/src -I../../deps/histogram/include -I../../deps/simdjson -I../../deps/simdutf -I../../deps/ada -I../../deps/nbytes/include -I../../deps/ncrypto -I../../deps/icu-small/source/i18n -I../../deps/icu-small/source/common -I../../deps/zlib -I../../deps/llhttp/include -I../../deps/uvwasi/include -I../../deps/nghttp2/lib/includes -I../../deps/brotli/c/include -I../../deps/openssl/openssl/include -I../../deps/openssl/openssl/crypto/include 
-I../../deps/openssl/config/archs/linux-x86_64/asm/include -I../../deps/openssl/config/archs/linux-x86_64/asm -I../../deps/ngtcp2 -I../../deps/ngtcp2/ngtcp2/lib/includes -I../../deps/ngtcp2/ngtcp2/crypto/includes -I../../deps/ngtcp2/ngtcp2/crypto -I../../deps/ngtcp2/nghttp3/lib/includes -Wall -Wextra -Wno-unused-parameter -pthread -Wall -Wextra -Wno-unused-parameter -Wno-error=deprecated-declarations -m64 -O3 -fno-omit-frame-pointer -fno-rtti -fno-exceptions -std=gnu++20 -c ../../test/cctest/node_test_fixture.cc -o obj/test/cctest/cctest.node_test_fixture.o
FAILED: obj/test/cctest/cctest.node_test_fixture.o
ccache c++ -MMD -MF obj/test/cctest/cctest.node_test_fixture.o.d -D_GLIBCXX_USE_CXX11_ABI=1 -DNODE_OPENSSL_CONF_NAME=nodejs_conf -DNODE_OPENSSL_HAS_QUIC -DICU_NO_USER_DATA_OVERRIDE -DV8_COMPRESS_POINTERS -DV8_COMPRESS_POINTERS_IN_ISOLATE_CAGE -DV8_31BIT_SMIS_ON_64BIT_ARCH -DV8_ENABLE_SANDBOX -D__STDC_FORMAT_MACROS -DOPENSSL_NO_PINSHARED -DOPENSSL_THREADS '-DNODE_ARCH="x64"' '-DNODE_PLATFORM="linux"' -DNODE_WANT_INTERNALS=1 -DHAVE_OPENSSL=1 -DHAVE_INSPECTOR=1 -D__POSIX__ -DNODE_USE_V8_PLATFORM=1 -DNODE_HAVE_I18N_SUPPORT=1 -DNODE_BUNDLED_ZLIB -DOPENSSL_API_COMPAT=0x10100000L -DGTEST_HAS_POSIX_RE=0 -DGTEST_LANG_CXX11=1 -DUNIT_TEST -DUCONFIG_NO_SERVICE=1 -DU_ENABLE_DYLOAD=0 -DU_STATIC_IMPLEMENTATION=1 -DU_HAVE_STD_STRING=1 -DUCONFIG_NO_BREAK_ITERATION=0 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -D_POSIX_C_SOURCE=200112 -DNGHTTP2_STATICLIB -DNDEBUG -DOPENSSL_USE_NODELETE -DL_ENDIAN -DOPENSSL_BUILDING_OPENSSL -DAES_ASM -DBSAES_ASM -DCMLL_ASM -DECP_NISTZ256_ASM -DGHASH_ASM -DKECCAK1600_ASM -DMD5_ASM -DOPENSSL_BN_ASM_GF2m -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_CPUID_OBJ -DOPENSSL_IA32_SSE2 -DPADLOCK_ASM -DPOLY1305_ASM -DRC4_ASM -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DVPAES_ASM -DWHIRLPOOL_ASM -DX25519_ASM -DOPENSSL_PIC -DNGTCP2_STATICLIB -DNGHTTP3_STATICLIB -I../../src -I../../tools/msvs/genfiles -I../../deps/v8/include -I../../deps/cares/include -I../../deps/uv/include -I../../deps/sqlite -I../../test/cctest -I../../deps/googletest/include -I../../deps/histogram/src -I../../deps/histogram/include -I../../deps/simdjson -I../../deps/simdutf -I../../deps/ada -I../../deps/nbytes/include -I../../deps/ncrypto -I../../deps/icu-small/source/i18n -I../../deps/icu-small/source/common -I../../deps/zlib -I../../deps/llhttp/include -I../../deps/uvwasi/include -I../../deps/nghttp2/lib/includes -I../../deps/brotli/c/include -I../../deps/openssl/openssl/include -I../../deps/openssl/openssl/crypto/include -I../../deps/openssl/config/archs/linux-x86_64/asm/include 
-I../../deps/openssl/config/archs/linux-x86_64/asm -I../../deps/ngtcp2 -I../../deps/ngtcp2/ngtcp2/lib/includes -I../../deps/ngtcp2/ngtcp2/crypto/includes -I../../deps/ngtcp2/ngtcp2/crypto -I../../deps/ngtcp2/nghttp3/lib/includes -Wall -Wextra -Wno-unused-parameter -pthread -Wall -Wextra -Wno-unused-parameter -Wno-error=deprecated-declarations -m64 -O3 -fno-omit-frame-pointer -fno-rtti -fno-exceptions -std=gnu++20 -c ../../test/cctest/node_test_fixture.cc -o obj/test/cctest/cctest.node_test_fixture.o
In file included from ../../deps/googletest/include/gtest/gtest-printers.h:122,
from ../../deps/googletest/include/gtest/gtest-matchers.h:49,
from ../../deps/googletest/include/gtest/internal/gtest-death-test-internal.h:47,
from ../../deps/googletest/include/gtest/gtest-death-test.h:43,
from ../../deps/googletest/include/gtest/gtest.h:64,
from ../../test/cctest/node_test_fixture.h:6,
from ../../test/cctest/node_test_fixture.cc:1:
../../test/cctest/node_test_fixture.cc: In member function 'virtual void NodeTestEnvironment::SetUp()':
../../test/cctest/node_test_fixture.cc:24:23: error: 'InitializeSandbox' is not a member of 'v8::V8'
24 | ASSERT_TRUE(v8::V8::InitializeSandbox());
| ^~~~~~~~~~~~~~~~~
../../deps/googletest/include/gtest/internal/gtest-internal.h:1453:38: note: in definition of macro 'GTEST_TEST_BOOLEAN_'
1453 | ::testing::AssertionResult(expression)) \
| ^~~~~~~~~~
../../deps/googletest/include/gtest/gtest.h:1831:32: note: in expansion of macro 'GTEST_ASSERT_TRUE'
1831 | #define ASSERT_TRUE(condition) GTEST_ASSERT_TRUE(condition)
| ^~~~~~~~~~~~~~~~~
../../test/cctest/node_test_fixture.cc:24:3: note: in expansion of macro 'ASSERT_TRUE'
24 | ASSERT_TRUE(v8::V8::InitializeSandbox());
| ^~~~~~~~~~~
ninja: build stopped: subcommand failed.
```
### Additional information
The last known good version to build with pointer compression is `v22.5.1`. I've bisected on the `v22.x` branch and the first bad commit is https://github.com/nodejs/node/commit/ee97c045b4402734e7d18a76ca8baa5531e46dd3. | build,regression | low | Critical |
2,484,168,694 | vscode | update | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
| invalid | medium | Minor |
2,484,205,255 | ollama | add LongWriter Llama3.1 8b and LongWriter GLM4 9b | The LongWriter models are good at writing long content in a single reply. I have successfully imported QuantFactory/LongWriter-llama3.1-8b-GGUF, so it can be uploaded directly. I tried to quantize the F32 version in QuantPanda/LongWriter-glm4-9B-GGUF to a Q4_0 version so that I can load all layers on my GPU, but the quantization failed with "Error: quantization is only supported for F16 and F32 models", so please create and upload a Q4_0 version. | model request | low | Critical |
2,484,223,525 | pytorch | Remove redundant copy in functional collectives | Is it possible to remove the copy operation of the inputs in PyTorch's distributed functional collectives?
Is it not possible to do everything in-place, given that Inductor is going to support in-place ops?
related discussion: https://discuss.pytorch.org/t/functional-collectives/206155/
cc @msaroufim @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | module: performance,oncall: distributed,triaged | low | Major |
2,484,223,706 | tauri | [docs] What is everything that's required to load an image in V2? | correct me if mistaken but,
the preferred and fastest way to load an image from disk for displaying is to use [convertFileSrc()](https://v2.tauri.app/reference/javascript/api/namespacecore/#convertfilesrc)
but the documentation is outdated, referencing v1 configs that don't exist anymore:

https://github.com/tauri-apps/tauri/blob/dev/tooling/api/src/core.ts#L163
it would also be good to have an explicit example of what's needed for it to work.
like:
```json
{
  "app": {
    "security": {
      "csp": {
        "default-src": "'self' asset: http://asset.localhost",
        "media-src": "'self' asset: http://asset.localhost",
        "img-src": "'self' asset: http://asset.localhost"
      },
      "assetProtocol": {
        "enable": true,
        "scope": [
          "$DOWNLOAD/**"
        ]
      }
    }
  }
}
```
| type: documentation | low | Minor |
2,484,225,276 | nvm | `nvm_resolve_local_alias default` has bad performance | <!-- Thank you for being interested in nvm! Please help us by filling out the following form if you‘re having trouble. If you have a feature request, or some other question, please feel free to clear out the form. Thanks! -->
#### Operating system and version:
macOS 14.5
#### `nvm debug` output:
<details>
<!-- do not delete the following blank line -->
```sh
nvm --version: v0.39.7
$TERM_PROGRAM: iTerm.app
$SHELL: /opt/homebrew/bin/bash
$SHLVL: 1
whoami: 'my'
${HOME}: /Users/my
${NVM_DIR}: '${HOME}/.nvm'
${PATH}: ${HOME}/.pyenv/shims:${HOME}/.pyenv/bin:/opt/homebrew/bin:/opt/homebrew/sbin:${HOME}/.local/bin:${HOME}/.gobrew/current/bin:${HOME}/.gobrew/bin:${HOME}/anaconda3/bin:${HOME}/go/bin:${HOME}/.deno/bin:${HOME}/bin/flutter/bin:${HOME}/bin:${HOME}/Library/Python/2.7/bin:${HOME}/.cargo/bin:${HOME}/go/bin:${HOME}/.asdf/shims:/opt/homebrew/opt/asdf/libexec/bin:/opt/homebrew/opt/openjdk@17/bin:${HOME}/.sdkman/candidates/gradle/current/bin:${HOME}/.pyenv/bin:${HOME}/.cargo/bin:${HOME}/go/bin:${HOME}/.gobrew/current/bin:${HOME}/.gobrew/bin:${NVM_DIR}/versions/node/v22.5.1/bin:${HOME}/.local/bin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin:/opt/puppetlabs/bin:/Applications/iTerm.app/Contents/Resources/utilities:${HOME}/.rvm/bin:/usr/local/bin
$PREFIX: ''
${NPM_CONFIG_PREFIX}: ''
$NVM_NODEJS_ORG_MIRROR: 'http://npm.taobao.org/mirrors/node'
$NVM_IOJS_ORG_MIRROR: ''
shell version: 'GNU bash, version 5.2.32(1)-release (aarch64-apple-darwin23.4.0)'
uname -a: 'Darwin 23.5.0 Darwin Kernel Version 23.5.0: Wed May 1 20:13:18 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6030 arm64'
checksum binary: 'sha256sum'
OS version: macOS 14.5 23F79
awk: /usr/bin/awk, awk version 20200816
curl: /usr/bin/curl, curl 8.6.0 (x86_64-apple-darwin23.0) libcurl/8.6.0 (SecureTransport) LibreSSL/3.3.6 zlib/1.2.12 nghttp2/1.61.0
-bash: wget: command not found
wget: (wget -c),
sed: /usr/bin/sed
cut: /usr/bin/cut
basename: /usr/bin/basename
rm: /bin/rm
mkdir: /bin/mkdir (_omb_util_alias_init_mkdir)
xargs: /usr/bin/xargs
git: /usr/bin/git, git version 2.39.3 (Apple Git-146)
grep: /usr/bin/grep (grep --color=auto --exclude-dir={.bzr,CVS,.git,.hg,.svn}), grep (BSD grep, GNU compatible) 2.6.0-FreeBSD
nvm current: v22.5.1
which node: ${NVM_DIR}/versions/node/v22.5.1/bin/node
which iojs:
which npm: ${NVM_DIR}/versions/node/v22.5.1/bin/npm
npm config get prefix: ${NVM_DIR}/versions/node/v22.5.1
npm root -g: ${NVM_DIR}/versions/node/v22.5.1/lib/node_modules
```
</details>
#### `nvm ls` output:
<details>
<!-- do not delete the following blank line -->
```sh
v18.20.4
-> v22.5.1
default -> node (-> v22.5.1)
iojs -> N/A (default)
unstable -> N/A (default)
node -> stable (-> v22.5.1) (default)
stable -> 22.5 (-> v22.5.1) (default)
lts/* -> lts/iron (-> N/A)
lts/argon -> v4.9.1 (-> N/A)
lts/boron -> v6.17.1 (-> N/A)
lts/carbon -> v8.17.0 (-> N/A)
lts/dubnium -> v10.24.1 (-> N/A)
lts/erbium -> v12.22.12 (-> N/A)
lts/fermium -> v14.21.3 (-> N/A)
lts/gallium -> v16.20.2 (-> N/A)
lts/hydrogen -> v18.20.4
lts/iron -> v20.15.1 (-> N/A)
```
</details>
#### How did you install `nvm`?
<!-- (e.g. install script in readme, Homebrew) -->
**install script in readme**
#### What steps did you perform?
1. use this script in thread: https://stackoverflow.com/a/5015179/4691964
2. use `gdate`(from `coreutils`) instead of `date` to make `%N` work
3. analyze the log
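The measurement idea — wrap a command and subtract timestamps — can be sketched in Python (a simplified stand-in for the bash trace in the linked answer; the `nvm` command shown is illustrative):

```python
import subprocess
import sys
import time

def time_command(argv):
    """Run a command and return its wall-clock duration in milliseconds."""
    start = time.monotonic()
    subprocess.run(argv, check=True)
    return (time.monotonic() - start) * 1000.0

# Stand-in workload; in the real investigation the command would be
# something like: bash -ic 'nvm_resolve_local_alias default'
elapsed_ms = time_command([sys.executable, "-c", "import time; time.sleep(0.1)"])
print(f"{elapsed_ms:.1f} ms")
```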
#### What happened?
Found that `nvm_resolve_local_alias default` has bad performance.

It took **2 seconds** on my M3-chip MacBook Pro:

Why does it need 2 seconds just to get the default version? It seems like a simple task.
#### What did you expect to happen?
`nvm_resolve_local_alias default` should be quick, which would save 2 seconds of my bash startup time.
#### Is there anything in any of your profile files that modifies the `PATH`?
<!-- (e.g. `.bashrc`, `.bash_profile`, `.zshrc`, etc) -->
~/.bashrc
| pull request wanted,performance | low | Critical |
2,484,241,874 | godot | Project manager not applying custom theme correctly | ### Tested versions
Reproducible in: 4.3 dev 3 [36e943b6b] and above. - Not reproducible in 4.3 dev 2 [352434668] and below
### System information
Godot v4.3.stable.mono - Windows 10.0.19043 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 750 Ti (NVIDIA; 31.0.15.3734) - AMD Ryzen 5 2600 Six-Core Processor (6 Threads)
### Issue description
The project manager UI (specifically the area where you scroll between projects) doesn't apply custom themes properly; instead it shows the last selected built-in theme. So if I select the default theme and then my custom theme I get:

If I select the black (OLED) theme and then my custom theme I get:

If I select the Godot 2 theme and then my custom theme I get:

</br>
so on and so forth...
As you can see this didn't happen in earlier versions like 4.2.2, where that area is consistent with the rest of the UI:

</br>
This bug seems to have been introduced in 4.3 dev 3, as 4.3 dev 2 and previous versions don't show signs of having this bug
4.3 dev 3:

4.3 dev 2:

### Steps to reproduce
Create a custom theme, select the default theme and then the custom theme, and open a new instance of Godot (therefore opening the project manager). You should now see that area of the UI display the default theme instead of the custom theme.
### Minimal reproduction project (MRP)
[project-manager-theme-bug.zip](https://github.com/user-attachments/files/16735383/project-manager-theme-bug.zip)
| bug,topic:editor | low | Critical |
2,484,247,835 | terminal | Command History (F7) is added to when selected command from the list | ### Windows Terminal version
1.20.11781.0
### Windows build number
10.0.22631.4037
### Other Software
_No response_
### Steps to reproduce
1. Open a clean command prompt - nothing in the list in F7.
2. Type in a command, e.g. "dir"
3. Type in a second different command, e.g. "dir D*"
4. Press F7 to see the command list - it will contain two commands. Select the first one from the list.
5. Press F7 to see the command list again, it will now contain three commands. This is unexpected.
### Expected Behavior
Selecting the command from the command list was previously (in Windows 10 command prompt) the way you issued it to ensure it didn't get added to the history, thus allowing you to maintain a nice tidy command list.
### Actual Behavior
Instead it was added to the history. | Product-Conhost,Issue-Bug,Area-CookedRead | low | Minor |
2,484,278,797 | godot | `RayCast3D` cannot detect collision with CSGs in `_ready()` even with `force_raycast_update()` | ### Tested versions
v4.3.stable.mono.official [77dcf97d8]
### System information
Godot v4.3.stable.mono - macOS 14.5.0 - Vulkan (Forward+) - integrated Apple M1 - Apple M1 (8 Threads)
### Issue description
`RayCast3D` cannot properly detect collision with a CSG shape when the collision check is performed in `_ready()`, even when `force_raycast_update()` is used. The `is_colliding()` method always returns false, even though "Debug > Visible Collision Shapes" indicates there is a collision, and the same check performed in `_physics_process()` behaves correctly.
This issue seems similar to [another one previously mentioned](https://github.com/godotengine/godot/issues/95354) but there it was a `RayCast3D` colliding with a `StaticBody3D` and using a `force_raycast_update()` would make it work properly. Here it seems to have no effect.
### Steps to reproduce
1. Make a 3D scene.
2. Add one of the CSG shapes and set `use_collision` to true.
3. Add a `RayCast3D` and position it so that it starts off colliding with the CSG shape.
4. Add a script to the `RayCast3D` to perform some collision checking in `_ready()`. This is what I used:
```gdscript
extends RayCast3D

func _ready() -> void:
	force_raycast_update()
	print("is_colliding: " + str(is_colliding()))
	print("get_collider: " + str(get_collider()))
```
5. Run the game. The script will tell you that no collision is taking place. The output result for the script above is the following:
```
is_colliding: false
get_collider: <Object#null>
```
### Minimal reproduction project (MRP)
[raycast.zip](https://github.com/user-attachments/files/16735530/raycast.zip)
| documentation,topic:physics | low | Critical |
2,484,288,763 | ollama | When invoked from the command line in an active conversation session, missing model for `/load` shouldn't be fatal error | ### What is the issue?
If you try to load a nonexistent model:
```
Loading model 'nonexistent.'
Error: model "nonexistent." not found, try pulling it first
```
then it quits the existing conversation session.
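A minimal Python sketch of the requested non-fatal behavior (hypothetical; Ollama itself is written in Go, and `handle_load`/`available_models` are illustrative names):

```python
def handle_load(model, available_models):
    """Handle a /load command without killing the conversation session."""
    if model not in available_models:
        # Report the error but keep the session alive instead of exiting.
        print(f'Error: model "{model}" not found, try pulling it first')
        return False
    print(f"Loading model '{model}'")
    return True

handle_load("nonexistent", {"llama3"})  # prints the error; session continues
handle_load("llama3", {"llama3"})       # loads normally
```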
### OS
Linux
### GPU
AMD
### CPU
AMD
### Ollama version
0.3.6 | bug | low | Critical |
2,484,291,209 | yt-dlp | if given format is not available than show the available format to download rather than quit | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a feature unrelated to a specific site
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
If the requested format is not present, yt-dlp quits. I want to give the user the option to choose another format: show the list of available formats, and if only the audio or only the video format is missing, show only the available audio or video formats. That way the user doesn't have to run yt-dlp again and again. If this would break yt-dlp's behavior with the -F or -f options, we could add some other option, so that a user downloading manually can be sure the video downloads in one go, even in another available format, without having to supply the URL again.

I suggest an option other than -F or -f because when yt-dlp runs from a script it is better to stop if the requested format is not found. But when a video is downloaded manually, it would be better to offer the available formats rather than exit.
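The requested fallback logic could look something like this sketch (hypothetical helper, not yt-dlp's actual format-selection code; `resolve_format` and its arguments are illustrative):

```python
def resolve_format(requested, available, interactive):
    """Return the format to download, or None if we must give up.

    In script mode (interactive=False) a missing format stays fatal;
    in interactive mode the caller can offer `available` to the user.
    """
    if requested in available:
        return requested
    if not interactive:
        return None  # preserve current behavior for scripts
    # Offer the remaining formats instead of quitting; here we just
    # pick the first alternative to keep the sketch self-contained.
    return sorted(available)[0] if available else None

print(resolve_format("1080p", {"720p", "480p"}, interactive=True))
```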
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU']
[debug] Portable config "/usr/local/bin/yt-dlp.conf": ['--embed-subs', '--embed-chapters', '--compat-options', 'no-live-chat', '-o', '%(uploader_id)s/%(title).200B [%(id)s].%(ext)s']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] (zip)
[debug] Compatibility options: no-live-chat
[debug] Python 3.12.3 (CPython x86_64 64bit) - Linux-6.8.0-40-generic-x86_64-with-glibc2.39 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.39)
[debug] exe versions: ffmpeg 6.1.1 (setts), ffprobe 6.1.1
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2023.11.17, mutagen-1.46.0, requests-2.31.0, secretstorage-3.3.3, sqlite3-3.45.1, urllib3-2.0.7, websockets-10.4
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1830 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.08.06 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.08.06 from yt-dlp/yt-dlp)
```
| enhancement,triage | low | Critical |
2,484,312,190 | godot | RayCast3D debug visuals jitter and lag behind since 4.3 | ### Tested versions
v4.3.stable.mono.official [https://github.com/godotengine/godot/commit/77dcf97d82cbfe4e4615475fa52ca03da645dbd8]
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1080 (NVIDIA; 31.0.15.5123) - AMD Ryzen 7 3800X 8-Core Processor (16 Threads)
### Issue description
When Raycast3Ds are moved, the debug shape jitters and lags behind. This happens when moved in any way. The amount depends on the FPS. The collisions seem to be unaffected.
### Steps to reproduce
Create a RayCast3D and move it alongside any other type of Node3D through any means. It is apparent if the RayCast3D is a child of the other Node3D because they should move in sync.
### Minimal reproduction project (MRP)
[racyat-test.zip](https://github.com/user-attachments/files/16735643/racyat-test.zip) This project displays a RayCast3D clearly lagging behind a CSGBox. There is a static body to test if collisions are consistent with the debug shape.
| topic:rendering,topic:physics,needs testing,regression,topic:3d | low | Critical |
2,484,397,102 | node | Flaky test-runner-run-watch | ### Test
test-runner-run-watch
### Platform
Windows
### Console output
```console
16:41:02 ok 848 parallel/test-vfs
16:41:02 ---
16:44:04 duration_ms: 1255.99700
16:44:04 ...
16:44:04 failed 10 out of 10
16:44:04 not ok 849 parallel/test-runner-run-watch
16:44:04 ---
16:44:04 duration_ms: 121964.85900
16:44:04 severity: fail
16:44:04 exitcode: 1
16:44:04 stack: |-
16:44:04 timeout
16:44:04 TAP version 13
16:44:04 # Subtest: test runner watch mode
16:44:04 # Subtest: should run tests repeatedly
16:44:04 not ok 1 - should run tests repeatedly
16:44:04 ---
16:44:04 duration_ms: 11682.5747
16:44:04 location: 'c:\\workspace\\node-test-binary-windows-js-suites\\node\\test\\parallel\\test-runner-run-watch.mjs:155:3'
16:44:04 failureType: 'testCodeFailure'
16:44:04 error: |-
16:44:04 The input did not match the regular expression /# tests 1/. Input:
16:44:04
16:44:04 '# Subtest: test has ran\n' +
16:44:04 'ok 1 - test has ran\n' +
16:44:04 ' ---\n' +
16:44:04 ' duration_ms: 37.95\n' +
16:44:04 ' ...\n' +
16:44:04 '# Subtest: test.js\n' +
16:44:04 'not ok 2 - test.js\n' +
16:44:04 ' ---\n' +
16:44:04 ' duration_ms: 1068.7518\n' +
16:44:04 " location: 'c:\\\\workspace\\\\node-test-binary-windows-js-suites\\\\node\\\\test\\\\.tmp.682\\\\test.js:1:1'\n" +
16:44:04 " failureType: 'testCodeFailure'\n" +
16:44:04 ' exitCode: ~\n' +
16:44:04 " signal: 'SIGTERM'\n" +
16:44:04 " error: 'test failed'\n" +
16:44:04 " code: 'ERR_TEST_FAILURE'\n" +
16:44:04 ' ...\n' +
16:44:04 '# Subtest: test.js\n' +
16:44:04 'not ok 3 - test.js\n' +
16:44:04 ' ---\n' +
16:44:04 ' duration_ms: 1050.696\n' +
16:44:04 " location: 'c:\\\\workspace\\\\node-test-binary-windows-js-suites\\\\node\\\\test\\\\.tmp.682\\\\test.js:1:1'\n" +
16:44:04 " failureType: 'testCodeFailure'\n" +
16:44:04 ' exitCode: ~\n' +
16:44:04 " signal: 'SIGTERM'\n" +
16:44:04 " error: 'test failed'\n" +
16:44:04 " code: 'ERR_TEST_FAILURE'\n" +
16:44:04 ' ...\n' +
16:44:04 '# Subtest: test.js\n' +
16:44:04 'not ok 4 - test.js\n' +
16:44:04 ' ---\n' +
16:44:04 ' duration_ms: 1012.4334\n' +
16:44:04 " location: 'c:\\\\workspace\\\\node-test-binary-windows-js-suites\\\\node\\\\test\\\\.tmp.682\\\\test.js:1:1'\n" +
16:44:04 " failureType: 'testCodeFailure'\n" +
16:44:04 ' exitCode: ~\n' +
16:44:04 " signal: 'SIGTERM'\n" +
16:44:04 " error: 'test failed'\n" +
16:44:04 " code: 'ERR_TEST_FAILURE'\n" +
16:44:04 ' ...\n' +
16:44:04 '# Subtest: test.js\n' +
16:44:04 'not ok 5 - test.js\n' +
16:44:04 ' ---\n' +
16:44:04 ' duration_ms: 974.3208\n' +
16:44:04 " location: 'c:\\\\workspace\\\\node-test-binary-windows-js-suites\\\\node\\\\test\\\\.tmp.682\\\\test.js:1:1'\n" +
16:44:04 " failureType: 'testCodeFailure'\n" +
16:44:04 ' exitCode: ~\n' +
16:44:04 " signal: 'SIGTERM'\n" +
16:44:04 " error: 'test failed'\n" +
16:44:04 " code: 'ERR_TEST_FAILURE'\n" +
16:44:04 ' ...\n' +
16:44:04 '# Subtest: test.js\n' +
16:44:04 'not ok 6 - test.js\n' +
16:44:04 ' ---\n' +
16:44:04 ' duration_ms: 999.9954\n' +
16:44:04 " location: 'c:\\\\workspace\\\\node-test-binary-windows-js-suites\\\\node\\\\test\\\\.tmp.682\\\\test.js:1:1'\n" +
16:44:04 " failureType: 'testCodeFailure'\n" +
16:44:04 ' exitCode: ~\n' +
16:44:04 " signal: 'SIGTERM'\n" +
16:44:04 " error: 'test failed'\n" +
16:44:04 " code: 'ERR_TEST_FAILURE'\n" +
16:44:04 ' ...\n' +
16:44:04 '# Subtest: test.js\n' +
16:44:04 'not ok 7 - test.js\n' +
16:44:04 ' ---\n' +
16:44:04 ' duration_ms: 1011.2067\n' +
16:44:04 " location: 'c:\\\\workspace\\\\node-test-binary-windows-js-suites\\\\node\\\\test\\\\.tmp.682\\\\test.js:1:1'\n" +
16:44:04 " failureType: 'testCodeFailure'\n" +
16:44:04 ' exitCode: ~\n' +
16:44:04 " signal: 'SIGTERM'\n" +
16:44:04 " error: 'test failed'\n" +
16:44:04 " code: 'ERR_TEST_FAILURE'\n" +
16:44:04 ' ...\n' +
16:44:04 '# Subtest: test has ran\n' +
16:44:04 'ok 8 - test has ran\n' +
16:44:04 ' ---\n' +
16:44:04 ' duration_ms: 29.1914\n' +
16:44:04 ' ...\n' +
16:44:04 '1..8\n' +
16:44:04 '# tests 8\n' +
16:44:04 '# suites 0\n' +
16:44:04 '# pass 2\n' +
16:44:04 '# fail 6\n' +
16:44:04 '# cancelled 0\n' +
16:44:04 '# skipped 0\n' +
16:44:04 '# todo 0\n' +
16:44:04 '# duration_ms 1693.5803\n'
16:44:04
16:44:04 code: 'ERR_ASSERTION'
16:44:04 name: 'AssertionError'
16:44:04 expected:
16:44:04 actual: |-
16:44:04 # Subtest: test has ran
16:44:04 ok 1 - test has ran
16:44:04 ---
16:44:04 duration_ms: 37.95
16:44:04 ...
16:44:04 # Subtest: test.js
16:44:04 not ok 2 - test.js
16:44:04 ---
16:44:04 duration_ms: 1068.7518
16:44:04 location: 'c:\\workspace\\node-test-binary-windows-js-suites\\node\\test\\.tmp.682\\test.js:1:1'
16:44:04 failureType: 'testCodeFailure'
16:44:04 exitCode: ~
16:44:04 signal: 'SIGTERM'
16:44:04 error: 'test failed'
16:44:04 code: 'ERR_TEST_FAILURE'
16:44:04 ...
16:44:04 # Subtest: test.js
16:44:04 not ok 3 - test.js
16:44:04 ---
16:44:04 duration_ms: 1050.696
16:44:04 location: 'c:\\workspace\\node-test-binary-windows-js-suites\\node\\test\\.tmp.682\\test.js:1:1'
16:44:04 failureType: 'testCodeFailure'
16:44:04 exitCode: ~
16:44:04 signal: 'SIGTERM'
16:44:04 error: 'test failed'
16:44:04 code: 'ERR_TEST_FAILURE'
16:44:04 ...
16:44:04 # Subtest: test.js
16:44:04 not ok 4 - test.js
16:44:04 ---
16:44:04 duration_ms: 1012.4334
16:44:04 location: 'c:\\workspace\\node-test-binary-windows-js-suites\\node\\test\\.tmp.682\\test.js:1:1'
16:44:04 failureType: 'testCodeFailure'
16:44:04 exitCode: ~
16:44:04 signal: 'SIGTERM'
16:44:04 error: 'test failed'
16:44:04 code: 'ERR_TEST_FAILURE'
16:44:04 ...
16:44:04 # Subtest: test.js
16:44:04 not ok 5 - test.js
16:44:04 ---
16:44:04 duration_ms: 974.3208
16:44:04 location: 'c:\\workspace\\node-test-binary-windows-js-suites\\node\\test\\.tmp.682\\test.js:1:1'
16:44:04 failureType: 'testCodeFailure'
16:44:04 exitCode: ~
16:44:04 signal: 'SIGTERM'
16:44:04 error: 'test failed'
16:44:04 code: 'ERR_TEST_FAILURE'
16:44:04 ...
16:44:04 # Subtest: test.js
16:44:04 not ok 6 - test.js
16:44:04 ---
16:44:04 duration_ms: 999.9954
16:44:04 location: 'c:\\workspace\\node-test-binary-windows-js-suites\\node\\test\\.tmp.682\\test.js:1:1'
16:44:04 failureType: 'testCodeFailure'
16:44:04 exitCode: ~
16:44:04 signal: 'SIGTERM'
16:44:04 error: 'test failed'
16:44:04 code: 'ERR_TEST_FAILURE'
16:44:04 ...
16:44:04 # Subtest: test.js
16:44:04 not ok 7 - test.js
16:44:04 ---
16:44:04 duration_ms: 1011.2067
16:44:04 location: 'c:\\workspace\\node-test-binary-windows-js-suites\\node\\test\\.tmp.682\\test.js:1:1'
16:44:04 failureType: 'testCodeFailure'
16:44:04 exitCode: ~
16:44:04 signal: 'SIGTERM'
16:44:04 error: 'test failed'
16:44:04 code: 'ERR_TEST_FAILURE'
16:44:04 ...
16:44:04 # Subtest: test has ran
16:44:04 ok 8 - test has ran
16:44:04 ---
16:44:04 duration_ms: 29.1914
16:44:04 ...
16:44:04 1..8
16:44:04 # tests 8
16:44:04 # suites 0
16:44:04 # pass 2
16:44:04 # fail 6
16:44:04 # cancelled 0
16:44:04 # skipped 0
16:44:04 # todo 0
16:44:04 # duration_ms 1693.5803
16:44:04
16:44:04 operator: 'match'
16:44:04 stack: |-
16:44:04 testUpdate (file:///c:/workspace/node-test-binary-windows-js-suites/node/test/parallel/test-runner-run-watch.mjs:81:14)
16:44:04 process.processTicksAndRejections (node:internal/process/task_queues:105:5)
16:44:04 async testWatch (file:///c:/workspace/node-test-binary-windows-js-suites/node/test/parallel/test-runner-run-watch.mjs:147:26)
16:44:04 async TestContext.<anonymous> (file:///c:/workspace/node-test-binary-windows-js-suites/node/test/parallel/test-runner-run-watch.mjs:156:5)
16:44:04 async Test.run (node:internal/test_runner/test:879:9)
16:44:04 async Promise.all (index 0)
16:44:04 async Suite.run (node:internal/test_runner/test:1239:7)
16:44:04 async Test.processPendingSubtests (node:internal/test_runner/test:590:7)
16:44:04 ...
16:44:04 # Subtest: should run tests with dependency repeatedly
16:44:04 not ok 2 - should run tests with dependency repeatedly
16:44:04 ---
16:44:04 duration_ms: 9670.1542
16:44:04 location: 'c:\\workspace\\node-test-binary-windows-js-suites\\node\\test\\parallel\\test-runner-run-watch.mjs:159:3'
16:44:04 failureType: 'testCodeFailure'
16:44:04 error: |-
16:44:04 The input did not match the regular expression /# tests 1/. Input:
16:44:04
16:44:04 '# Subtest: test has ran\n' +
16:44:04 'ok 1 - test has ran\n' +
16:44:04 ' ---\n' +
16:44:04 ' duration_ms: 10.4875\n' +
16:44:04 ' ...\n' +
16:44:04 '# Subtest: test.js\n' +
16:44:04 'not ok 2 - test.js\n' +
16:44:04 ' ---\n' +
16:44:04 ' duration_ms: 1085.3394\n' +
16:44:04 " location: 'c:\\\\workspace\\\\node-test-binary-windows-js-suites\\\\node\\\\test\\\\.tmp.682\\\\test.js:1:1'\n" +
16:44:04 " failureType: 'testCodeFailure'\n" +
16:44:04 ' exitCode: ~\n' +
16:44:04 " signal: 'SIGTERM'\n" +
16:44:04 " error: 'test failed'\n" +
16:44:04 " code: 'ERR_TEST_FAILURE'\n" +
16:44:04 ' ...\n' +
16:44:04 '# Subtest: test.js\n' +
16:44:04 'not ok 3 - test.js\n' +
16:44:04 ' ---\n' +
16:44:04 ' duration_ms: 1111.3339\n' +
16:44:04 " location: 'c:\\\\workspace\\\\node-test-binary-windows-js-suites\\\\node\\\\test\\\\.tmp.682\\\\test.js:1:1'\n" +
16:44:04 " failureType: 'testCodeFailure'\n" +
16:44:04 ' exitCode: ~\n' +
16:44:04 " signal: 'SIGTERM'\n" +
16:44:04 " error: 'test failed'\n" +
16:44:04 " code: 'ERR_TEST_FAILURE'\n" +
16:44:04 ' ...\n' +
16:44:04 '# Subtest: test.js\n' +
16:44:04 'not ok 4 - test.js\n' +
16:44:04 ' ---\n' +
16:44:04 ' duration_ms: 954.7013\n' +
16:44:04 " location: 'c:\\\\workspace\\\\node-test-binary-windows-js-suites\\\\node\\\\test\\\\.tmp.682\\\\test.js:1:1'\n" +
16:44:04 " failureType: 'testCodeFailure'\n" +
16:44:04 ' exitCode: ~\n' +
16:44:04 " signal: 'SIGTERM'\n" +
16:44:04 " error: 'test failed'\n" +
16:44:04 " code: 'ERR_TEST_FAILURE'\n" +
16:44:04 ' ...\n' +
16:44:04 '# Subtest: test.js\n' +
16:44:04 'not ok 5 - test.js\n' +
16:44:04 ' ---\n' +
16:44:04 ' duration_ms: 1070.1657\n' +
16:44:04 " location: 'c:\\\\workspace\\\\node-test-binary-windows-js-suites\\\\node\\\\test\\\\.tmp.682\\\\test.js:1:1'\n" +
16:44:04 " failureType: 'testCodeFailure'\n" +
16:44:04 ' exitCode: ~\n' +
16:44:04 " signal: 'SIGTERM'\n" +
16:44:04 " error: 'test failed'\n" +
16:44:04 " code: 'ERR_TEST_FAILURE'\n" +
16:44:04 ' ...\n' +
16:44:04 '# Subtest: test.js\n' +
16:44:04 'not ok 6 - test.js\n' +
16:44:04 ' ---\n' +
16:44:04 ' duration_ms: 926.3438\n' +
16:44:04 " location: 'c:\\\\workspace\\\\node-test-binary-windows-js-suites\\\\node\\\\test\\\\.tmp.682\\\\test.js:1:1'\n" +
16:44:04 " failureType: 'testCodeFailure'\n" +
16:44:04 ' exitCode: ~\n' +
16:44:04 " signal: 'SIGTERM'\n" +
16:44:04 " error: 'test failed'\n" +
16:44:04 " code: 'ERR_TEST_FAILURE'\n" +
16:44:04 ' ...\n' +
16:44:04 '# Subtest: test has ran\n' +
16:44:04 'ok 7 - test has ran\n' +
16:44:04 ' ---\n' +
16:44:04 ' duration_ms: 26.637\n' +
16:44:04 ' ...\n' +
16:44:04 '1..7\n' +
16:44:04 '# tests 7\n' +
16:44:04 '# suites 0\n' +
16:44:04 '# pass 2\n' +
16:44:04 '# fail 5\n' +
16:44:04 '# cancelled 0\n' +
16:44:04 '# skipped 0\n' +
16:44:04 '# todo 0\n' +
16:44:04 '# duration_ms 1325.8181\n'
16:44:04
16:44:04 code: 'ERR_ASSERTION'
16:44:04 name: 'AssertionError'
16:44:04 expected:
16:44:04 actual: |-
16:44:04 # Subtest: test has ran
16:44:04 ok 1 - test has ran
16:44:04 ---
16:44:04 duration_ms: 10.4875
16:44:04 ...
16:44:04 # Subtest: test.js
16:44:04 not ok 2 - test.js
16:44:04 ---
16:44:04 duration_ms: 1085.3394
16:44:04 location: 'c:\\workspace\\node-test-binary-windows-js-suites\\node\\test\\.tmp.682\\test.js:1:1'
16:44:04 failureType: 'testCodeFailure'
16:44:04 exitCode: ~
16:44:04 signal: 'SIGTERM'
16:44:04 error: 'test failed'
16:44:04 code: 'ERR_TEST_FAILURE'
16:44:04 ...
16:44:04 # Subtest: test.js
16:44:04 not ok 3 - test.js
16:44:04 ---
16:44:04 duration_ms: 1111.3339
16:44:04 location: 'c:\\workspace\\node-test-binary-windows-js-suites\\node\\test\\.tmp.682\\test.js:1:1'
16:44:04 failureType: 'testCodeFailure'
16:44:04 exitCode: ~
16:44:04 signal: 'SIGTERM'
16:44:04 error: 'test failed'
16:44:04 code: 'ERR_TEST_FAILURE'
16:44:04 ...
16:44:04 # Subtest: test.js
16:44:04 not ok 4 - test.js
16:44:04 ---
16:44:04 duration_ms: 954.7013
16:44:04 location: 'c:\\workspace\\node-test-binary-windows-js-suites\\node\\test\\.tmp.682\\test.js:1:1'
16:44:04 failureType: 'testCodeFailure'
16:44:04 exitCode: ~
16:44:04 signal: 'SIGTERM'
16:44:04 error: 'test failed'
16:44:04 code: 'ERR_TEST_FAILURE'
16:44:04 ...
16:44:04 # Subtest: test.js
16:44:04 not ok 5 - test.js
16:44:04 ---
16:44:04 duration_ms: 1070.1657
16:44:04 location: 'c:\\workspace\\node-test-binary-windows-js-suites\\node\\test\\.tmp.682\\test.js:1:1'
16:44:04 failureType: 'testCodeFailure'
16:44:04 exitCode: ~
16:44:04 signal: 'SIGTERM'
16:44:04 error: 'test failed'
16:44:04 code: 'ERR_TEST_FAILURE'
16:44:04 ...
16:44:04 # Subtest: test.js
16:44:04 not ok 6 - test.js
16:44:04 ---
16:44:04 duration_ms: 926.3438
16:44:04 location: 'c:\\workspace\\node-test-binary-windows-js-suites\\node\\test\\.tmp.682\\test.js:1:1'
16:44:04 failureType: 'testCodeFailure'
16:44:04 exitCode: ~
16:44:04 signal: 'SIGTERM'
16:44:04 error: 'test failed'
16:44:04 code: 'ERR_TEST_FAILURE'
16:44:04 ...
16:44:04 # Subtest: test has ran
16:44:04 ok 7 - test has ran
16:44:04 ---
16:44:04 duration_ms: 26.637
16:44:04 ...
16:44:04 1..7
16:44:04 # tests 7
16:44:04 # suites 0
16:44:04 # pass 2
16:44:04 # fail 5
16:44:04 # cancelled 0
16:44:04 # skipped 0
16:44:04 # todo 0
16:44:04 # duration_ms 1325.8181
16:44:04
16:44:04 operator: 'match'
16:44:04 stack: |-
16:44:04 testUpdate (file:///c:/workspace/node-test-binary-windows-js-suites/node/test/parallel/test-runner-run-watch.mjs:81:14)
16:44:04 process.processTicksAndRejections (node:internal/process/task_queues:105:5)
16:44:04 async testWatch (file:///c:/workspace/node-test-binary-windows-js-suites/node/test/parallel/test-runner-run-watch.mjs:147:26)
16:44:04 async TestContext.<anonymous> (file:///c:/workspace/node-test-binary-windows-js-suites/node/test/parallel/test-runner-run-watch.mjs:160:5)
16:44:04 async Test.run (node:internal/test_runner/test:879:9)
16:44:04 async Suite.processPendingSubtests (node:internal/test_runner/test:590:7)
16:44:04 ...
16:44:04 # Subtest: should run tests with ESM dependency
16:44:04 ok 3 - should run tests with ESM dependency
16:44:04 ---
16:44:04 duration_ms: 5620.9341
16:44:04 ...
16:44:04 ...
```
### Build links
- https://ci.nodejs.org/job/node-test-binary-windows-js-suites/29397/RUN_SUBSET=2,nodes=win10-COMPILED_BY-vs2022/
### Additional information
This might be related to #44898 and #49605. | windows,flaky-test | low | Critical |
2,484,487,419 | neovim | "E95: Buffer already exists" when raw manpage contents piped into :Man | ### Problem
I invoke Man as `$MANPAGER` from a terminal within Neovim.
I do this by using the [neovim-remote](https://github.com/mhinz/neovim-remote) plugin and setting `MANPAGER="nvr +'Man!' -"`
`init_pager` in `man.lua` crashes if there's already an open buffer for the man pages.
The line responsible for the error is this one:
`vim.cmd.file({ 'man://' .. fn.fnameescape(ref):lower(), mods = { silent = true } })`
It tries to rename the buffer to a name that's already taken.
Suggested solution:
Check if there's already a buffer with that name and go to that existing buffer if so.
### Steps to reproduce
* Add [neovim-remote](https://github.com/mhinz/neovim-remote) plugin.
* Add this line to your **init.vim**: `let $MANPAGER="nvr +'Man!' -"`
* Start a terminal: `:term`
* Run `man ls`
* Go back to the terminal
* Run `man ls` again
I don't know if it's possible to use neovim's builtin remote functionality for $MANPAGER. If it is possible, then the error could be reproduced without any plugins.
### Expected behavior
Don't crash and go to the existing buffer instead
### Neovim version (nvim -v)
0.9.5
### Vim (not Nvim) behaves the same?
No Man in vim
### Operating system/version
N/A
### Terminal name/version
N/A
### $TERM environment variable
N/A
### Installation
N/A | bug,plugin,runtime,complexity:low | low | Critical |
2,484,488,330 | TypeScript | `instantiateMappedTupleType` should not use array index as mapping key, at least it's not in an appropriate way | ### 🔎 Search Terms
instantiateMappedTupleType
### 🕗 Version & Regression Information
- This is the behavior in every version I tried.
### ⏯ Playground Link
https://www.typescriptlang.org/play/?ts=5.7.0-dev.20240824#code/C4TwDgpgBA0hIGcA8AVAfFAvFA3lA2jFAJYB2UA1vAPYBmUKAugFyxQC+A3AFCiRQAxatVRQIAD2ARSAEwRQAhqRD5GGbHERJ8CYACcyAcwA0UXQdImoAOlspT5o2s5QA9K4IAiAAyfTngEY-G1tNZHRTUgBXAFsAIwg9Rm4UvmgAQSxBYW1HS2c3D3wfYMDS30jYhKTecGgAISyw3P0jB1bLU1trHQ7DRnaLfrQXdy8KqDL-ACZSgGZPZKA
### 💻 Code
```ts
type Keys<T> = { [K in keyof T]: K };
type Foo<T extends any[]> = Keys<[string, string, ...T, string]>; // ["0", "1", ...Keys<T>, number]
type A = Foo<[string]>; // ["0", "1", "0", number]
type B = Keys<[string, string, ...[string], string]>; // ["0", "1", "2", "3"]
```
### 🙁 Actual behavior
After instantiating the `Keys<[string, string, ...T, string]>` to `["0", "1", ...Keys<T>, number]` in advance, the position of `T` is lost, resulting in incorrect indexes.
### 🙂 Expected behavior
Option 1: look up real indexes during instantiating.
```ts
type Foo<T extends any[]> = Keys<[string, string, ...T, string]>; // ["0", "1", ...Keys<T>, ...Keys<[string]>]
type A = Foo<[string]>; // ["0", "1", "2", "3"]
```
Option 2: don't use indexes
```ts
type Foo<T extends any[]> = Keys<[string, string, ...T, string]>; // [number, number, ...Keys<T>, number]
type A = Foo<[string]>; // [number, number, number, number]
```
### Additional information about the issue
_No response_ | Needs Investigation | low | Minor |
2,484,500,000 | godot | Editor window error spam: "scene/3d/camera.cpp:487 - Condition "p.d == 0" is true. Returned: Point2()" | ### Tested versions
Tested on both a Windows device and a separate Linux device.
Reproducible in 3.6 RC1
Not reproducible in 3.6 Beta5
### System information
Windows 10, Intel HD graphics 500; Linux, Intel HD graphics
### Issue description
If you add a camera to the 3D scene and align it to the current view with PERSPECTIVE > ALIGN TRANSFORM WITH VIEW, the editor will spam an error message whenever you mouse over the scene window.
Error message: "scene/3d/camera.cpp:487 - Condition "p.d == 0" is true. Returned: Point2()"
This appears to be a regression from 3.6 Beta 5, where I cannot reproduce it.
### Steps to reproduce
Load the repro project in 3.6 RC1 and:
1. Open the scene named "Spatial".
2. Click on the Editor Output window to make it visible.
3. Highlight the Camera node in the scene tree.
4. Mouse over the Editor scene window and confirm no error message spamming.
5. In top left of window click on "Perspective" then "Align transform with view"
6. Move mouse around and over window and observe error spamming.
### Minimal reproduction project (MRP)
[repro.zip](https://github.com/user-attachments/files/16736281/repro.zip)
3.6 Beta 5 - no issue

3.6 RC1 - issue

| bug,regression,topic:3d | low | Critical |
2,484,518,036 | godot | UFBX importer doesn't work in-game [Godot 4.3] | ### Tested versions
4.3
### System information
manjaro linux godot 4.3-stable (this doesnt matter since this "bug" is on all platforms)
### Issue description
the new ufbx importer for godot 4.3 cant import fbx files at runtime but gltf importer can
O. some notes:
1. the engine can import gltf at runtime but not fbx
2. the engine can import both gltf and fbx in the editor
3. at runtime loading fbx file gives an unavailable error
4. even tho now with godot 4.3 we have fbx document just like gltf document but it looks like its not working/implemented yet?
O. after some tests:
i found is that it actually (kind of) imports everything correctly but all meshes are importer mesh type with no data
so.... maybe importer mesh doesn't work in-game??
im currently looking at the source code to check whats going on with that
i have a small experience with c++ and i'd love to help (since this matters alot to me)
### Steps to reproduce
just try the MRP i gave
run the project and it wont show anything (even tho im trying to load fbx with fbxDocument just like gltfDocument)
### Minimal reproduction project (MRP)
[MRP.zip](https://github.com/user-attachments/files/16736371/MRP.zip) | needs testing,topic:import,topic:3d | low | Critical |
2,484,544,896 | TypeScript | Boolean narrowed to true only when generic type is explicitly provided while it shouldn't | ### 🔎 Search Terms
specify type narrow union
### 🕗 Version & Regression Information
All versions I checked; nothing in the FAQ seems related.
### ⏯ Playground Link
https://www.typescriptlang.org/play/?#code/LAKA9GCqDOCmAEAXAFggbgQwE4EsMDsBjBA-Ae0Q0RzPyTPgDMytilkdokBPABwUQMARghz5MuAogA08WGlh0UZAK4BzZPByJ4Ad1UAbACbwKqLLs4IR8QmQl58iUIj4IAkuOyPEAHjGmKjoAKgB88AC88AAUAJSR4cEA3KCgdvjQOswMUb7BcgAeiIpGXGRCAFawhIih0eUVAFzwwbHNng5SvgDeANoAClp0ANaw3GSMLQC67V6STnkDU6EAvuER4fgqBgYAhCkg6ZnwQtiR8HmFxfil8H2DAaPjk8Ez8B3eXQTcq3W6za0AQl4FsdvtUodaMcxIxYFgsLATFFTlhot0MM1smihM1EFgVLAVrEiQcIPByRTKVSAHoAflAZKOOhRzR6DLAVIx7zmPjZ4A5VIpOO5nQWQjIZAMsAIoVJAopazl8DW8AAdOr2bYoVkJazuprKcLxZLpfglWtNXiCVouGJilheAjiiYMFwAETGqUEN0QiAAUQKvAMOEI2gM3Hg0H4ocYEZQCDUijhIfgvGwGAAtmltYUgyHtOcUXzKVyPvM-PqQILycKy7zPabZaBKRaQGs0Vysd1hVbCcTYkl4P6AErDgDyw4NFLpmqZTF1F0rZMNuPxsHNoXZ5F07CoSDXINgiK4ghOCA9Eq9+DdsgLnCG9sdsGd8Fd8DdvZ9IHZAbzocQ4aRtGOCxuwCZJrghCpumGbzlgJxnAQJjZNmGQ6LAgbBv+ABMhbYMWFKljyXSVtWJyzKKfgNjKBwtpubZ1OimISgRNbNNRZqgO23argSRIkkOYD6FgwzQL6YC-lhYYRlG1QgXGqDwIm+DJlBaZYJmcEIfBSG2K6Oi9q+XAcahxwYX+2gAMx4VgrGvhRnwLKR1a1sRYqXo2tEKvR3GdhK2K8SQxkeQQ-EDoJwmieyABCIX4NEvbxBgBjQAwkViSAQA
### 💻 Code
```ts
// Use the variance annotation to force this type to be invariant, even though it would otherwise be covariant
type Invariant<in out T> = () => T;
const foo = <T extends object>(obj: T): Invariant<{[P in keyof T]: Invariant<T[P]>}> => null!;
const bar = <T extends {[P in keyof T]: Invariant<any>}>(w: T): T => null!;
const inferred = bar({a: foo({b: true})}); // true is boolean as expected
// Explicitly specify the generic param
const explicit = bar<{
a: Invariant<{
b: Invariant<boolean>;
}>
}>({a: foo({b: true})}); // narrowed to true instead of boolean and errors
```
### 🙁 Actual behavior
Narrows to `true`
### 🙂 Expected behavior
Should stay `boolean`
### Additional information about the issue
Doing `as boolean` works... | Needs More Info | low | Critical |
2,484,566,491 | ui | [feat]: Drag n Drop | ### Feature description
I would like to request a new component: Drag and Drop.
### Affected component/components
N/A
### Additional Context
Add drag-and-drop lists, which might be helpful in todo lists.
Similar to React DnD.
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,484,621,561 | neovim | 'guicursor' cursor color in Termux | ### Problem
Neovim doesn't seem to be able to change the cursor color of Termux.
I thought it was a Termux issue, but realized that running this did change the cursor color:
```shell
echo -ne "\x1b]12;#FF0000\x1b\\"
```
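For context, the command above emits the standard OSC 12 "set cursor color" control sequence: `ESC ] 12 ; <color> ST`, where ST (the string terminator) is `ESC \`. A minimal sketch (Python used purely for illustration) that builds the same bytes the `echo -ne` call produces:

```python
# Build the OSC 12 "set cursor color" escape sequence by hand:
#   ESC ] 12 ; <color> ST      (ST = ESC \)
def osc_set_cursor_color(color: str) -> str:
    return "\x1b]12;" + color + "\x1b\\"

# Same payload as the `echo -ne` command above.
print(repr(osc_set_cursor_color("#FF0000")))
```

Writing this string to the terminal (e.g. via `io.write()`) is all the manual workaround does; the bug is that something re-emits the original cursor color afterwards.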
So I tried using `io.write()` to set it manually, and it seems to work, but for some reason the cursor returns to the original color when exiting the cmdline.
I tried using a `CmdlineLeave` autocmd to set it, but then the cursor briefly turns red before reverting to white.
### Steps to reproduce
nvim --clean
:hi Cursor guibg=red
### Expected behavior
Cursor turns red.
### Neovim version (nvim -v)
NVIM v0.10.1
Build type: Release
LuaJIT 2.1.1720049189
### Vim (not Nvim) behaves the same?
Yes, 9.1.0500 aarch64
### Operating system/version
Android 14
### Terminal name/version
Termux 0.118.0
### $TERM environment variable
xterm-256color
### Installation
pkg install neovim | bug,tui,needs:repro,options | low | Major |
2,484,630,347 | flutter | [Linux] `PhysicalKeyboardKey` events with `AltGr + ?` keys arrive in the wrong order | I'm trying to get the USB HID codes for key events, but the order is wrong.
### Steps to reproduce
1. Run the example code on Linux.
2. Switch to FR input method.
3. Press `AltGr` + `E`. Press `AltGr`, press `E`, release `E`, release `AltGr`.
### Expected results
For `PhysicalKeyboardKey`:
1. `AltGr`/`Alt Right` key down event.
2. `E` key down event.
3. `E` key up event.
4. `AltGr`/`Alt Right` key up event.
### Actual results
1. `AltGr`/`Alt Right` key down event.
2. `AltGr`/`Alt Right` key up event.
3. `E` key down event. Character `€`.
4. `E` key up event.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Keyboard Event Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: const KeyboardEventPage(),
);
}
}
class KeyboardEventPage extends StatefulWidget {
const KeyboardEventPage({super.key});
@override
_KeyboardEventPageState createState() => _KeyboardEventPageState();
}
class _KeyboardEventPageState extends State<KeyboardEventPage> {
final List<String> _lastKeyEvents = [];
final FocusNode _focusNode = FocusNode();
@override
Widget build(BuildContext context) {
final child = Scaffold(
appBar: AppBar(
title: const Text('Keyboard Event Demo'),
),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
const Text(
'Press any key:',
style: TextStyle(fontSize: 20),
),
const SizedBox(height: 20),
for (final keyEvent in _lastKeyEvents)
Text(
keyEvent,
style: const TextStyle(fontSize: 16),
),
],
),
),
);
return FocusScope(
autofocus: true,
child: Focus(
autofocus: true,
canRequestFocus: true,
focusNode: _focusNode,
onKeyEvent: (node, event) {
setState(() {
_lastKeyEvents.add(event.toString());
if (_lastKeyEvents.length > 10) {
_lastKeyEvents.removeAt(0);
}
});
return KeyEventResult.handled;
},
child: child,
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.1, on Ubuntu 22.04.4 LTS 6.5.0-44-generic,
locale en_US.UTF-8)
• Flutter version 3.24.1 on channel stable at
/home/username/workspace/flutter/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 5874a72aa4 (4 days ago), 2024-08-20 16:46:00 -0500
• Engine revision c9b9d5780d
• Dart version 3.5.1
• DevTools version 2.37.2
[✗] Android toolchain - develop for Android devices
✗ Unable to locate Android SDK.
Install Android Studio from:
https://developer.android.com/studio/index.html
On first launch it will assist you in installing the Android SDK
components.
(or visit https://flutter.dev/to/linux-android-setup for detailed
instructions).
If the Android SDK has been installed to a custom location, please use
`flutter config --android-sdk` to update to that location.
[✗] Chrome - develop for the web (Cannot find Chrome executable at
google-chrome)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[✓] Linux toolchain - develop for Linux desktop
• Ubuntu clang version 14.0.0-1ubuntu1.1
• cmake version 3.22.1
• ninja version 1.10.1
• pkg-config version 0.29.2
[!] Android Studio (not installed)
• Android Studio not found; download from
https://developer.android.com/studio/index.html
(or visit https://flutter.dev/to/linux-android-setup for detailed
instructions).
[!] Proxy Configuration
• HTTP_PROXY is set
! NO_PROXY is not set
[✓] Connected device (1 available)
• Linux (desktop) • linux • linux-x64 • Ubuntu 22.04.4 LTS 6.5.0-44-generic
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 4 categories.
```
</details>
| framework,a: internationalization,platform-linux,has reproducible steps,P2,team-linux,triaged-linux,found in release: 3.24,found in release: 3.25 | low | Major |
2,484,638,523 | transformers | How to fine-tune TrOCR on a specific language (guide request) | ### Model description
Hello, I've looked through the issues and other resources, but none of them talk about how to fine-tune TrOCR on a specific language, e.g. how to pick the encoder, the decoder, the model, etc.
Can you, @NielsRogge, write some simple instructions/a guide on this topic?
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available
### Provide useful links for the implementation
_No response_ | New model | low | Minor |
2,484,644,870 | stable-diffusion-webui | [Feature Request]: Ability to Split Batch Generation Across Multi-GPU | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
I'm aware that a single diffusion model cannot be split across multiple GPUs. However, assuming an instance of the model is loaded onto each respective GPU, generation of image batches could be greatly sped up by splitting the batch across the available cards.
This would allow:
1. The same number of images generated in a smaller unit of time (batch size / no. of GPUs)
2. A larger batch generated within an identical span of time (batch size x no. of GPUs)
A similar workflow is implemented in SwarmUI, with ComfyUI as a backend.
This can also theoretically be done by loading two discrete instances of Stable Diffusion Web UI, but syncing the prompts, sampler settings, and extension settings between them becomes a workflow problem.
I propose multi-GPU support for Stable Diffusion Web UI: load the selected model onto each card, and sync the generation parameters through one instance of the Web UI.
### Proposed workflow
1. Add launch arguments as necessary to specify multiple CUDA devices.
2. Under "Generation Parameters" in the Web UI, add an option to increase instances, loading the selected model onto each card.
3. If Batch Count > 1, generate a batch on each available card, sequencing only the batches that exceed the number of cards.
4. If Batch Count = 1, split the batch size between the available cards.
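To make steps 3 and 4 concrete, here is a hedged pure-Python sketch of the scheduling rule (illustrative only, not actual Web UI code; the function and variable names are invented for this example):

```python
# Illustrative sketch of the proposed scheduling rule; not actual
# Stable Diffusion Web UI code (names are invented for this example).
def split_work(batch_count: int, batch_size: int, n_gpus: int) -> list[list[int]]:
    """Return, for each GPU, the list of per-job image counts it runs."""
    if batch_count > 1:
        # Step 3: one full batch per card, round-robin; only batches
        # beyond the number of cards are sequenced.
        jobs = [batch_size] * batch_count
        return [jobs[i::n_gpus] for i in range(n_gpus)]
    # Step 4: a single batch is split across the cards.
    base, rem = divmod(batch_size, n_gpus)
    return [[base + (1 if i < rem else 0)] for i in range(n_gpus)]

print(split_work(5, 4, 2))   # 5 batches of 4 on 2 GPUs -> [[4, 4, 4], [4, 4]]
print(split_work(1, 10, 4))  # one batch of 10 on 4 GPUs -> [[3], [3], [2], [2]]
```

Either way, no batch is split below one image per card, and the total image count is preserved.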
### Additional information
_No response_ | enhancement | low | Minor |
2,484,651,621 | pytorch | torch.sparse.softmax missing support for sparse CSR | ### 🚀 The feature, motivation and pitch
`torch.sparse.softmax` apparently only supports the sparse COO format, while sparse CSR is presented as the more efficient and default format for other operations.
For example, in my use case I'm trying to implement an efficient sparse attention using `torch.sparse.sampled_addmm` and `sparse.softmax`; the problem is that the former only supports sparse CSR.
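One possible workaround, assuming `Tensor.to_sparse_coo()` is available (it is in recent PyTorch releases, though this should be verified for your version), is to convert the CSR result of `sampled_addmm` to COO before calling `sparse.softmax`. Along the row axis the conversion is cheap: it only expands the compressed row pointers (`crow_indices`) into explicit per-value row indices. A torch-free sketch of that expansion:

```python
# Expand CSR compressed row pointers into explicit COO row indices.
# Conceptually what Tensor.to_sparse_coo() does for the row dimension.
def crow_to_coo_rows(crow_indices: list[int]) -> list[int]:
    rows: list[int] = []
    for row in range(len(crow_indices) - 1):
        # crow[row + 1] - crow[row] = number of stored values in this row
        rows.extend([row] * (crow_indices[row + 1] - crow_indices[row]))
    return rows

# 3x3 matrix: 2 stored values in row 0, none in row 1, 1 in row 2
print(crow_to_coo_rows([0, 2, 2, 3]))  # [0, 0, 2]
```

The column indices and values are shared between the two layouts, so only this row expansion is added work.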
### Alternatives
_No response_
### Additional context
_No response_
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip | module: sparse,triaged | low | Minor |
2,484,651,656 | godot | Vector2 implicit cast to Vector2i without warning | ### Tested versions
4.3 stable
### System information
Windows 10 x64
### Issue description
Even with static typing and additional warnings enabled, there is no warning about this cast:

### Steps to reproduce
No warnings with settings above
Ex1:
```gdscript
var t: Vector2i = Vector2(0.5,1.5)
```
Ex2, my case:
`new_position` is a `Vector2`, implicitly cast to `erase_cell`'s argument type, `Vector2i`:
```gdscript
@onready var chest: TileMapLayer = $"../Level/Chest"
...
func check_for_chest(new_position: Vector2) -> bool:
	var tile_coords: Vector2i = get_tile_coords(chest, new_position)
	var tile_source: int = get_tile_source(chest, tile_coords)
	if tile_source < 0: return false
	# Mistake here
	chest.erase_cell(new_position)
	return true
```
### Minimal reproduction project (MRP)
N/A | discussion,topic:core,topic:gdscript | low | Minor |
2,484,653,264 | excalidraw | Please whitelist this site | I would like to embed H5P content in my teaching whiteboards.
Please allow iframes from: [https://apps.zum.de/apps/24799](https://apps.zum.de/apps/) | Embeddable | low | Minor |
2,484,657,601 | godot | Exporting to output within project folder creates pck with embedded .deps.json contents if export folder is not empty. C# version | ### Tested versions
Tested in latest 4.3 C# release 64-bit.
### System information
Godot v4.3.stable.mono - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1650 SUPER (NVIDIA; 32.0.15.5599) - AMD Ryzen 3 3100 4-Core Processor (8 Threads)
### Issue description
If I have a C# project, say `d:\godotprojects\helloworld`, and I export to `d:\godotprojects\helloworld\export\`, the first time I get a proper .pck file, say 10 KB, if the export folder is blank. But if I export again (with no changes to settings or code) to the same export folder, it creates a 40 KB .pck file because `*.deps.json` gets embedded into the .pck file.
This does not happen if I export to folders outside of `d:\godotprojects\helloworld\`.
### Steps to reproduce
Create a simple 2D project with a button and a Sprite2D. Attach a script to the scene and connect the button's pressed event to toggle the sprite's visibility via the `Visible` property.
The script must be C#.
### Minimal reproduction project (MRP)
```cs
using Godot;
using System;
public partial class main : Node2D
{
// Called when the node enters the scene tree for the first time.
public override void _Ready()
{
}
// Called every frame. 'delta' is the elapsed time since the previous frame.
public override void _Process(double delta)
{
}
private void _on_button_pressed()
{
// Toggle the sprite's visibility when the button is pressed.
Sprite2D sprite = GetNode<Sprite2D>("logo");
sprite.Visible = !sprite.Visible;
}
}
``` | needs testing,topic:dotnet,topic:export | low | Minor |
2,484,660,128 | godot | [cross compile] gcc/llvm + aarch64/riscv64 embree intrinsics | ### Tested versions
4.3-stable
### System information
Yocto `scarthgap` running on Fedora 40
### Issue description
Incorrect embree intrinsics used with RaspberryPi 5 on Yocto build system
```
-mcpu=cortex-a76+crypto
```
```
Compiling modules/navigation/2d/nav_mesh_generator_2d.cpp ...
In file included from thirdparty/embree/common/sys/../simd/arm/emulation.h:12,
from thirdparty/embree/common/sys/intrinsics.h:13,
from thirdparty/embree/common/sys/alloc.cpp:5:
thirdparty/embree/common/sys/../simd/arm/sse2neon.h: In function '__m128i _mm_aesdeclast_si128(__m128i, __m128i)':
thirdparty/embree/common/sys/../simd/arm/sse2neon.h:9918:69: error: invalid operands to binary ^ (have '__Int64x2_t' and '__Uint8x16_t')
9918 | vaesdq_u8(vreinterpretq_u8_m128i(a), vdupq_n_u8(0))) ^
| ^
Compiling modules/navigation/editor/navigation_mesh_editor_plugin.cpp ...
In file included from thirdparty/embree/common/sys/../math/../sys/../simd/arm/emulation.h:12,
from thirdparty/embree/common/sys/../math/../sys/intrinsics.h:13,
from thirdparty/embree/common/sys/../math/emath.h:7,
from thirdparty/embree/common/sys/../math/vec2.h:6,
from thirdparty/embree/common/sys/estring.h:7,
from thirdparty/embree/common/sys/estring.cpp:4:
thirdparty/embree/common/sys/../math/../sys/../simd/arm/sse2neon.h: In function '__m128i _mm_aesdeclast_si128(__m128i, __m128i)':
thirdparty/embree/common/sys/../math/../sys/../simd/arm/sse2neon.h:9918:69: error: invalid operands to binary ^ (have '__Int64x2_t' and '__Uint8x16_t')
9918 | vaesdq_u8(vreinterpretq_u8_m128i(a), vdupq_n_u8(0))) ^
| ^
Compiling modules/navigation/nav_map.cpp ...
In file included from thirdparty/embree/common/sys/../simd/arm/emulation.h:12,
from thirdparty/embree/common/sys/intrinsics.h:13,
from thirdparty/embree/common/sys/sysinfo.cpp:11:
thirdparty/embree/common/sys/../simd/arm/sse2neon.h: In function '__m128i _mm_aesdeclast_si128(__m128i, __m128i)':
thirdparty/embree/common/sys/../simd/arm/sse2neon.h:9918:69: error: invalid operands to binary ^ (have '__Int64x2_t' and '__Uint8x16_t')
9918 | vaesdq_u8(vreinterpretq_u8_m128i(a), vdupq_n_u8(0))) ^
| ^
```
Configuration
```
scons p=linuxbsd target=editor arch=arm64 use_static_cpp=yes optimize=speed progress=yes no_editor_splash=yes num_jobs=32 alsa=yes dbus=yes debug_symbols=no fontconfig=yes libdecor=yes use_llvm=no LINK='aarch64-poky-linux-g++ -mcpu=cortex-a76+crypto -mbranch-protection=standard -fstack-protector-strong -O2 -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/recipe-sysroot -Wl,-O1 -Wl,--hash-style=gnu -Wl,--as-needed -fcanon-prefix-map -fmacro-prefix-map=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/git=/usr/src/debug/godot/4.3 -fdebug-prefix-map=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/git=/usr/src/debug/godot/4.3 -fmacro-prefix-map=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/git=/usr/src/debug/godot/4.3 -fdebug-prefix-map=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/git=/usr/src/debug/godot/4.3 -fdebug-prefix-map=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/recipe-sysroot= -fmacro-prefix-map=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/recipe-sysroot= -fdebug-prefix-map=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/recipe-sysroot-native= -Wl,-z,relro,-z,now' opengl3=yes openxr=no pulseaudio=yes use_sowrap=no builtin_brotli=no builtin_freetype=no builtin_graphite=no builtin_harfbuzz=no builtin_icu4c=no builtin_libogg=no builtin_libpng=no builtin_libtheora=no builtin_libvorbis=no builtin_libwebp=no builtin_zlib=no builtin_zstd=no touch=yes udev=yes vulkan=yes use_volk=yes wayland=yes x11=no CC="aarch64-poky-linux-gcc -mcpu=cortex-a76+crypto -mbranch-protection=standard -fstack-protector-strong -O2 -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security 
--sysroot=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/recipe-sysroot" cflags=" -O2 -pipe -g -feliminate-unused-debug-types -fcanon-prefix-map -fmacro-prefix-map=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/git=/usr/src/debug/godot/4.3 -fdebug-prefix-map=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/git=/usr/src/debug/godot/4.3 -fmacro-prefix-map=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/git=/usr/src/debug/godot/4.3 -fdebug-prefix-map=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/git=/usr/src/debug/godot/4.3 -fdebug-prefix-map=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/recipe-sysroot= -fmacro-prefix-map=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/recipe-sysroot= -fdebug-prefix-map=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/recipe-sysroot-native= " CXX="aarch64-poky-linux-g++ -mcpu=cortex-a76+crypto -mbranch-protection=standard -fstack-protector-strong -O2 -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/recipe-sysroot" cxxflags=" -O2 -pipe -g -feliminate-unused-debug-types -fcanon-prefix-map -fmacro-prefix-map=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/git=/usr/src/debug/godot/4.3 -fdebug-prefix-map=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/git=/usr/src/debug/godot/4.3 -fmacro-prefix-map=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/git=/usr/src/debug/godot/4.3 -fdebug-prefix-map=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/git=/usr/src/debug/godot/4.3 
-fdebug-prefix-map=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/recipe-sysroot= -fmacro-prefix-map=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/recipe-sysroot= -fdebug-prefix-map=/mnt/raid10/yocto/master/raspberry-pi5/tmp/work/cortexa76-poky-linux/godot/4.3/recipe-sysroot-native= -fvisibility-inlines-hidden" AS="aarch64-poky-linux-as " AR="aarch64-poky-linux-gcc-ar" RANLIB="aarch64-poky-linux-gcc-ranlib" import_env_vars=PATH,PKG_CONFIG_DIR,PKG_CONFIG_DISABLE_UNINSTALLED,PKG_CONFIG_LIBDIR,PKG_CONFIG_PATH,PKG_CONFIG_SYSROOT_DIR,PKG_CONFIG_SYSTEM_INCLUDE_PATH,PKG_CONFIG_SYSTEM_LIBRARY_PATH
```
Full Compile Log
[log.do_compile_rpi5.txt](https://github.com/user-attachments/files/16737027/log.do_compile_rpi5.txt)
### Steps to reproduce
Yocto scarthgap OS image using meta-raspberrypi, meta-godot: https://github.com/jwinarske/meta-godot
`MACHINE ?= "raspberrypi5"`
Run `bitbake godot` to build the recipe: https://github.com/jwinarske/meta-godot/blob/scarthgap/recipes-graphics/godot/godot_4.3.bb
### Minimal reproduction project (MRP)
build only issue | bug,topic:buildsystem,topic:thirdparty | low | Critical |
2,484,675,984 | godot | Clicking any element in certain popups will cause the body contents of the popup to visibly shift but not the interactable controls (Windows ARM) | ### Tested versions
- Reproducible in at least 4.3 Stable Mono, and in every other version I've tried, including all dev builds of 4.3 and builds back to 4.1, both native Arm64 and x86_64.
### System information
Godot v4.3.stable.mono - Windows 10.0.27686 - GLES3 (Compatibility) - D3D12 (Qualcomm(R) Adreno(TM) 8cx Gen 3) - Snapdragon (TM) 8cx Gen 3 @ 3.0 GHz (8 Threads)
### Issue description
When clicking on the inner body of some windows in Godot on this laptop, the contents will visually shift but the clickable controls will not.
This is probably going to be a weird one, because it's a Windows on ARM device.

### Steps to reproduce
1. Be on a Thinkpad X13s.
2. Open Godot.
3. Create a new project.
4. Click "Browse".
5. Click anywhere in the explorer window view.
### Minimal reproduction project (MRP)
N/A | bug,platform:windows,topic:porting | low | Major |
2,484,688,331 | deno | Disallow JSR dependency importing itself in `deno publish --dry-run` | A self-import will not work when published to JSR. It would also make resolution slower, cause the package to list itself as a dependency in the npm compat layer, and potentially resolve a duplicate copy of the package, since the specifiers would go through package resolution.
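A dry-run check for this could be as simple as comparing the package's own name against its import specifiers; an illustrative Python sketch (not Deno's actual Rust implementation, and the matching rule here is simplified):

```python
import json

# Illustrative sketch (Python, not Deno's Rust code): flag a deno.json
# whose imports map points back at the package's own JSR name.
def imports_itself(deno_json_text: str) -> bool:
    cfg = json.loads(deno_json_text)
    name = cfg.get("name", "")
    return bool(name) and any(
        spec.startswith(f"jsr:{name}")
        for spec in cfg.get("imports", {}).values()
    )
```

A real implementation would also need to catch self-references introduced through bare specifiers resolved via the import map.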
Ref https://github.com/denoland/deno/issues/25191 | bug,publish | low | Major |
2,484,698,815 | flutter | Keyboard events: dead keys should not generate a character | ### Steps to reproduce
1. Run the sample code.
2. Switch to FR input method.
3. Click "[".
4. Click "e".
### Expected results
The first keystroke does not generate a character.
The second keystroke generates `ê`.
### Actual results
The first keystroke generates `^`.
The second keystroke generates `ê`.
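The expected dead-key behavior amounts to a small composition state machine; an illustrative Python sketch (not Flutter's implementation; the compose table below is abbreviated and hypothetical):

```python
# Illustrative dead-key composition sketch (Python, not Flutter code).
# A real layout defines many more dead-key/base-character pairs.
COMPOSE = {"^": {"e": "ê", "a": "â", "o": "ô"}}

class DeadKeyComposer:
    def __init__(self):
        self.pending = None  # dead key waiting for its base character

    def press(self, key: str) -> str:
        if self.pending is not None:
            dead, self.pending = self.pending, None
            # Unknown combination falls back to emitting both characters.
            return COMPOSE.get(dead, {}).get(key, dead + key)
        if key in COMPOSE:
            self.pending = key
            return ""  # expected: the dead key itself emits nothing
        return key
```

Under this model, pressing `^` then `e` yields only `ê`, matching the expected results above.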
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Keyboard Event Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: const KeyboardEventPage(),
);
}
}
class KeyboardEventPage extends StatefulWidget {
const KeyboardEventPage({super.key});
@override
_KeyboardEventPageState createState() => _KeyboardEventPageState();
}
class _KeyboardEventPageState extends State<KeyboardEventPage> {
final List<String> _lastKeyEvents = [];
final FocusNode _focusNode = FocusNode();
@override
Widget build(BuildContext context) {
final child = Scaffold(
appBar: AppBar(
title: const Text('Keyboard Event Demo'),
),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
const Text(
'Press any key:',
style: TextStyle(fontSize: 20),
),
const SizedBox(height: 20),
for (final keyEvent in _lastKeyEvents)
Text(
keyEvent,
style: const TextStyle(fontSize: 16),
),
],
),
),
);
return FocusScope(
autofocus: true,
child: Focus(
autofocus: true,
canRequestFocus: true,
focusNode: _focusNode,
onKeyEvent: (node, event) {
setState(() {
_lastKeyEvents.add(event.toString());
if (_lastKeyEvents.length > 10) {
_lastKeyEvents.removeAt(0);
}
});
return KeyEventResult.handled;
},
child: child,
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
<details open><summary>Logs</summary>
```console
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel stable, 3.24.1, on Microsoft Windows [Version 10.0.22631.4037], locale en-US)
• Flutter version 3.24.1 on channel stable at D:\DevEnv\flutter\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 5874a72aa4 (6 days ago), 2024-08-20 16:46:00 -0500
• Engine revision c9b9d5780d
• Dart version 3.5.1
• DevTools version 2.37.2
[√] Windows Version (Installed version of Windows is version 10 or higher)
[!] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at C:\Users\Administrator\AppData\Local\Android\sdk
X cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"`
See https://developer.android.com/studio/command-line for more details.
X Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
See https://flutter.dev/to/windows-android-setup for more details.
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.10.3)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.10.35013.160
• Windows 10 SDK version 10.0.26100.0
[√] Android Studio (version 2024.1)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0--11609105)
[√] VS Code (version 1.92.2)
• VS Code at C:\Users\Administrator\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.94.0
[√] Connected device (3 available)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4037]
• Chrome (web) • chrome • web-javascript • Google Chrome 127.0.6533.120
• Edge (web) • edge • web-javascript • Microsoft Edge 128.0.2739.42
[√] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| a: text input,framework,a: internationalization,has reproducible steps,P2,team-framework,triaged-framework,fyi-text-input,found in release: 3.24,found in release: 3.25 | low | Major |
2,484,717,828 | next.js | middleware is not used in production with custom server | ### Link to the code that reproduces this issue
https://github.com/Mephiztopheles/custom-server-middleware
### To Reproduce
1. Start the application (`npm run dev`)
2. Visit localhost:3000 in the browser
3. Click both links and get redirected
4. Stop the server and run the build (`npm run build`)
5. Start the production server (`npm run start`)
6. Visit localhost:3000 again and click both links
7. **Only redirect-2 works** (it does not work for me in production, but that might be a separate problem)
### Current vs. Expected behavior
I expect the middleware to be used in production too.
If I remove `NODE_ENV=production`, the middleware is used again, but then Next.js is not running in production mode.
Also, I expect the redirects in the config to work in my production environment
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #40-Ubuntu SMP PREEMPT_DYNAMIC Fri Jul 5 10:34:03 UTC 2024
Available memory (MB): 48119
Available CPU cores: 16
Binaries:
Node: 22.3.0
npm: 10.8.1
Yarn: N/A
pnpm: 9.4.0
Relevant Packages:
next: 15.0.0-canary.128 // Latest available version is detected (15.0.0-canary.128).
eslint-config-next: N/A
react: 19.0.0-rc-eb3ad065-20240822
react-dom: 19.0.0-rc-eb3ad065-20240822
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Middleware
### Which stage(s) are affected? (Select all that apply)
next start (local), Other (Deployed)
### Additional context
I used the website template from Payload CMS, which may be why the redirects from the config are not being used.
But the middleware is also missing in the default setup, as you can see in my example | bug,Middleware | low | Minor |
2,484,719,102 | pytorch | Build errors while building PyTorch with BLIS | ### 🐛 Describe the bug
The build fails when PyTorch is built with BLIS. The following commands are used to build, as mentioned in https://github.com/pytorch/pytorch/pull/54953:
$export BLIS_HOME=path-to-BLIS
$export PATH=$BLIS_HOME/include/blis:$PATH LD_LIBRARY_PATH=$BLIS_HOME/lib:$LD_LIBRARY_PATH
$export BLAS=BLIS USE_MKLDNN_CBLAS=ON WITH_BLAS=blis
$python setup.py develop
The following errors are seen:
CMake Error at cmake/Modules/FindBLAS.cmake:85 (check_function_exists):
Unknown CMake command "check_function_exists".
Call Stack (most recent call first):
cmake/Modules/FindBLAS.cmake:111 (check_fortran_libraries)
cmake/Modules/FindLAPACK.cmake:22 (FIND_PACKAGE)
cmake/Dependencies.cmake:1436 (find_package)
CMakeLists.txt:867 (include)
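For what it's worth, `check_function_exists` is provided by CMake's standard `CheckFunctionExists` module, so one plausible reading of the error is that the module is not included before the macro is called; a hedged sketch of the kind of include that would be involved (not verified against this source tree):

```cmake
# Sketch only: check_function_exists() is defined by this standard CMake
# module, which must be included before cmake/Modules/FindBLAS.cmake uses it.
include(CheckFunctionExists)
check_function_exists(sgemm_ BLAS_F77_FUNC_EXISTS)
```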
### Versions
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.9.19 (main, May 6 2024, 19:43:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 384
On-line CPU(s) list: 0-383
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 2
Core(s) per socket: 96
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3707.8120
CPU min MHz: 1500.0000
BogoMIPS: 4792.98
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d
Virtualization: AMD-V
L1d cache: 6 MiB (192 instances)
L1i cache: 6 MiB (192 instances)
L2 cache: 192 MiB (192 instances)
L3 cache: 768 MiB (24 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-95,192-287
NUMA node1 CPU(s): 96-191,288-383
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] optree==0.12.1
[pip3] torch==2.5.0a0+git81e4ef6
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-include 2024.2.1 pypi_0 pypi
[conda] mkl-static 2024.2.1 pypi_0 pypi
[conda] numpy 2.0.1 pypi_0 pypi
[conda] optree 0.12.1 pypi_0 pypi
[conda] torch 2.5.0a0+git81e4ef6 dev_0 <develop>
cc @malfet @seemethere | module: build,triaged | low | Critical |
2,484,742,091 | vscode | VS Code installs en-US-10-1.bdic | Type: <b>Bug</b>
Whenever I start VS Code, the dictionary file `en-US-10-1.bdic` is written into `~/.config/Code/Dictionaries`. Since to my knowledge VS Code does not include any spell checking functionality, this seems superfluous. Moreover, it interferes with the functionality of a spell checking extension.
I am certain that this is not due to an extension; I used both "Extension Bisect" and `code --disable-extensions`. I delete the file before, and after VS Code has started, it is there again, even with all extensions disabled.
VS Code version: Code 1.92.2 (fee1edb8d6d72a0ddff41e5f71a671c23ed924b9, 2024-08-14T17:29:30.058Z)
OS version: Linux x64 6.4.0-0.deb12.2-amd64
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz (8 x 3920)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off<br>webnn: disabled_off|
|Load (avg)|1, 1, 1|
|Memory (System)|31.29GB (26.75GB free)|
|Process Argv|/home/ca/work/Projects/CvCrossManova/paper --crash-reporter-id 7385d41d-b1f1-40cd-9e1c-2f129a2666df|
|Screen Reader|no|
|VM|0%|
|DESKTOP_SESSION|plasma|
|XDG_CURRENT_DESKTOP|KDE|
|XDG_SESSION_DESKTOP|KDE|
|XDG_SESSION_TYPE|x11|
</details><details><summary>Extensions (45)</summary>
Extension|Author (truncated)|Version
---|---|---
pandoc-defaults|all|0.1.0
spellright|ban|3.0.136
toml|be5|0.6.0
docs-view|bie|0.1.0
insert-unicode|bru|0.15.1
jsonviewer|cci|1.5.2
excel-to-markdown-table|csh|1.3.0
markdown-word-count|Cur|0.0.7
vscode-eslint|dba|3.0.10
xml|Dot|2.5.1
overtype|DrM|0.5.0
copilot|Git|1.223.0
copilot-chat|Git|0.18.2
gc-excelviewer|Gra|4.2.61
todo-tree|Gru|0.0.226
pandiff-vscode|Her|0.2.11
stan-vscode|iva|0.2.4
vscode-latex|mat|1.3.0
language-matlab|Mat|1.2.5
rainbow-csv|mec|3.12.0
autopep8|ms-|2024.0.0
debugpy|ms-|2024.10.0
flake8|ms-|2023.10.0
isort|ms-|2023.10.1
python|ms-|2024.12.3
vscode-pylance|ms-|2024.8.1
jupyter|ms-|2024.7.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.19
vscode-jupyter-cell-tags|ms-|0.1.9
remote-ssh|ms-|0.113.1
remote-ssh-edit|ms-|0.86.0
live-server|ms-|0.4.14
remote-explorer|ms-|0.4.3
gremlins|nho|0.26.0
autodocstring|njp|0.6.1
material-icon-theme|PKi|5.9.0
quarto|qua|1.114.0
vscode-yaml|red|1.15.0
r|REd|2.8.4
jinjahtml|sam|0.20.0
docxreader|Sha|1.1.3
rewrap|stk|1.16.3
lua|sum|3.10.5
pdf|tom|1.2.2
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
vscaat:30438848
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythongtdpath:30769146
welcomedialogc:30910334
pythonnoceb:30805159
asynctok:30898717
pythonregdiag2:30936856
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
accentitlementst:30995554
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
9c06g630:31013171
pythoncenvpt:31062603
a69g1124:31058053
dvdeprecation:31068756
dwnewjupytercf:31046870
impr_priority:31102340
nativerepl1:31104043
refactort:31108082
pythonrstrctxt:31112756
flightc:31119335
wkspc-onlycs-t:31111718
wkspc-ranged-c:31123312
pme_test_t:31118333
ei213698:31121563
```
</details>
<!-- generated by issue reporter --> | debt,electron | low | Critical |
2,484,757,549 | ollama | Jamba 1.5 Model | Jamba 1.5 Open Model Family: The Most Powerful and Efficient Long Context Models.
**Features**
**Long context handling**: With a 256K effective context window, the longest in the market, Jamba 1.5 models can improve the quality of key enterprise applications, such as lengthy document summarization and analysis, as well as agentic and RAG workflows
**Speed:** Up to 2.5X faster on long contexts and fastest across all context lengths in their size class
**Quality:** Jamba 1.5 Mini is the strongest open model in its size class with a score of 46.1 on the Arena Hard benchmark, surpassing larger models like Mixtral 8x22B and Command-R+. Jamba 1.5 Large, with a score of 65.4, outpaces both Llama 3.1 70B and 405B
**Multilingual:** In addition to English, the models support Spanish, French, Portuguese, Italian, Dutch, German, Arabic and Hebrew
**Developer ready:** Jamba natively supports structured JSON output, function calling, digesting document objects, and generating citations
**Open for builders**: Both models are available for immediate download on Hugging Face (and coming soon to leading frameworks LangChain and LlamaIndex)
**Deploy anywhere**: In addition to AI21 Studio, the models are available on cloud partners Google Cloud Vertex AI, Microsoft Azure, and NVIDIA NIM and coming soon to Amazon Bedrock, Databricks Marketplace, Snowflake Cortex, Together.AI as well as for private on-prem and VPC deployment | model request | low | Major |
2,484,817,349 | pytorch | RuntimeError: cuFFT error: CUFFT_INTERNAL_ERROR | ### 🐛 Describe the bug
I am trying to run Piper (forked from github.com/rhasspy/piper) in a Google Compute Engine VM with an L4 GPU. I have CUDA Toolkit 12.6.
This is my error log:
python3 -m piper_train --dataset-dir /home/stevenvana/piper/out-train/ --accelerator 'gpu' --devices 1 --batch-size 32 --validation-split 0.0 --num-test-examples 0 --max_epochs 10000 --resume_from_checkpoint /home/stevenvana/piper/out-train/epoch=2218-step=838782.ckpt?download=true --checkpoint-epochs 1 --precision 32 --quality high
DEBUG:piper_train:Namespace(dataset_dir='/home/stevenvana/piper/out-train/', checkpoint_epochs=1, quality='high', resume_from_single_speaker_checkpoint=None, logger=True, enable_checkpointing=True, default_root_dir=None, gradient_clip_val=None, gradient_clip_algorithm=None, num_nodes=1, num_processes=None, devices='1', gpus=None, auto_select_gpus=False, tpu_cores=None, ipus=None, enable_progress_bar=True, overfit_batches=0.0, track_grad_norm=-1, check_val_every_n_epoch=1, fast_dev_run=False, accumulate_grad_batches=None, max_epochs=10000, min_epochs=None, max_steps=-1, min_steps=None, max_time=None, limit_train_batches=None, limit_val_batches=None, limit_test_batches=None, limit_predict_batches=None, val_check_interval=None, log_every_n_steps=50, accelerator='gpu', strategy=None, sync_batchnorm=False, precision=32, enable_model_summary=True, weights_save_path=None, num_sanity_val_steps=2, resume_from_checkpoint='/home/stevenvana/piper/out-train/epoch=2218-step=838782.ckpt?download=true', profiler=None, benchmark=None, deterministic=None, reload_dataloaders_every_n_epochs=0, auto_lr_find=False, replace_sampler_ddp=True, detect_anomaly=False, auto_scale_batch_size=False, plugins=None, amp_backend='native', amp_level=None, move_metrics_to_cpu=False, multiple_trainloader_mode='max_size_cycle', batch_size=32, validation_split=0.0, num_test_examples=0, max_phoneme_ids=None, hidden_channels=192, inter_channels=192, filter_channels=768, n_layers=6, n_heads=2, seed=1234)
/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/checkpoint_connector.py:52: LightningDeprecationWarning: Setting `Trainer(resume_from_checkpoint=)` is deprecated in v1.5 and will be removed in v1.7. Please pass `Trainer.fit(ckpt_path=)` directly instead.
rank_zero_deprecation(
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
DEBUG:piper_train:Checkpoints will be saved every 1 epoch(s)
DEBUG:vits.dataset:Loading dataset: /home/stevenvana/piper/out-train/dataset.jsonl
/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py:731: LightningDeprecationWarning: `trainer.resume_from_checkpoint` is deprecated in v1.5 and will be removed in v2.0. Specify the fit checkpoint path with `trainer.fit(ckpt_path=)` instead.
ckpt_path = ckpt_path or self.resume_from_checkpoint
Restoring states from the checkpoint path at /home/stevenvana/piper/out-train/epoch=2218-step=838782.ckpt?download=true
DEBUG:fsspec.local:open file: /home/stevenvana/piper/out-train/epoch=2218-step=838782.ckpt?download=true
/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py:1659: UserWarning: Be aware that when using `ckpt_path`, callbacks used to create the checkpoint need to be provided during `Trainer` instantiation. Please add the following callbacks: ["ModelCheckpoint{'monitor': None, 'mode': 'min', 'every_n_train_steps': 0, 'every_n_epochs': 1, 'train_time_interval': None}"].
rank_zero_warn(
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
DEBUG:fsspec.local:open file: /home/stevenvana/piper/out-train/lightning_logs/version_13/hparams.yaml
Restored all states from the checkpoint file at /home/stevenvana/piper/out-train/epoch=2218-step=838782.ckpt?download=true
/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/utilities/data.py:153: UserWarning: Total length of `DataLoader` across ranks is zero. Please make sure this was your intention.
rank_zero_warn(
/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:236: PossibleUserWarning: The dataloader, train_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 8 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
rank_zero_warn(
/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py:1892: PossibleUserWarning: The number of training batches (2) is smaller than the logging interval Trainer(log_every_n_steps=50). Set a lower value for log_every_n_steps if you want to see logs for the training epoch.
rank_zero_warn(
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/stevenvana/piper/src/python/piper_train/__main__.py", line 147, in <module>
main()
File "/home/stevenvana/piper/src/python/piper_train/__main__.py", line 124, in main
trainer.fit(model)
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 696, in fit
self._call_and_handle_interrupt(
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 650, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 735, in _fit_impl
results = self._run(model, ckpt_path=self.ckpt_path)
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1166, in _run
results = self._run_stage()
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1252, in _run_stage
return self._run_train()
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1283, in _run_train
self.fit_loop.run()
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
self.advance(*args, **kwargs)
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 271, in advance
self._outputs = self.epoch_loop.run(self._data_fetcher)
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
self.advance(*args, **kwargs)
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 203, in advance
batch_output = self.batch_loop.run(kwargs)
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
self.advance(*args, **kwargs)
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 87, in advance
outputs = self.optimizer_loop.run(optimizers, kwargs)
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/loop.py", line 200, in run
self.advance(*args, **kwargs)
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 201, in advance
result = self._run_optimization(kwargs, self._optimizers[self.optim_progress.optimizer_position])
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 248, in _run_optimization
self._optimizer_step(optimizer, opt_idx, kwargs.get("batch_idx", 0), closure)
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 358, in _optimizer_step
self.trainer._call_lightning_module_hook(
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1550, in _call_lightning_module_hook
output = fn(*args, **kwargs)
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/core/module.py", line 1705, in optimizer_step
optimizer.step(closure=optimizer_closure)
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/core/optimizer.py", line 168, in step
step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 216, in optimizer_step
return self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs)
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 153, in optimizer_step
return optimizer.step(closure=closure, **kwargs)
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 68, in wrapper
return wrapped(*args, **kwargs)
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/torch/optim/optimizer.py", line 140, in wrapper
out = func(*args, **kwargs)
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/torch/optim/adamw.py", line 120, in step
loss = closure()
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 138, in _wrap_closure
closure_result = closure()
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 146, in __call__
self._result = self.closure(*args, **kwargs)
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 132, in closure
step_output = self._step_fn()
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 407, in _training_step
training_step_output = self.trainer._call_strategy_hook("training_step", *kwargs.values())
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1704, in _call_strategy_hook
output = fn(*args, **kwargs)
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 358, in training_step
return self.model.training_step(*args, **kwargs)
File "/home/stevenvana/piper/src/python/piper_train/vits/lightning.py", line 191, in training_step
return self.training_step_g(batch)
File "/home/stevenvana/piper/src/python/piper_train/vits/lightning.py", line 230, in training_step_g
y_hat_mel = mel_spectrogram_torch(
File "/home/stevenvana/piper/src/python/piper_train/vits/mel_processing.py", line 120, in mel_spectrogram_torch
torch.stft(
File "/home/stevenvana/piper/src/python/.venv/lib/python3.10/site-packages/torch/functional.py", line 632, in stft
return _VF.stft(input, n_fft, hop_length, win_length, window, # type: ignore[attr-defined]
RuntimeError: cuFFT error: CUFFT_INTERNAL_ERROR
These are my installed dependencies:
Package Version Editable project location
------------------------ ------------ ---------------------------------
absl-py 2.1.0
aiohappyeyeballs 2.4.0
aiohttp 3.10.5
aiosignal 1.3.1
async-timeout 4.0.3
attrs 24.2.0
audioread 3.0.1
build 1.2.1
certifi 2024.7.4
cffi 1.17.0
charset-normalizer 3.3.2
coloredlogs 15.0.1
Cython 0.29.37
decorator 5.1.1
flatbuffers 24.3.25
frozenlist 1.4.1
fsspec 2024.6.1
grpcio 1.66.0
humanfriendly 10.0
idna 3.8
joblib 1.4.2
lazy_loader 0.4
librosa 0.10.2.post1
lightning-utilities 0.11.6
llvmlite 0.43.0
Markdown 3.7
MarkupSafe 2.1.5
mpmath 1.3.0
msgpack 1.0.8
multidict 6.0.5
numba 0.60.0
numpy 1.26.4
nvidia-cublas-cu11 11.10.3.66
nvidia-cuda-nvrtc-cu11 11.7.99
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cudnn-cu11 8.5.0.96
onnxruntime 1.19.0
packaging 24.1
pip 24.0
piper-phonemize 1.1.0
piper_train 1.0.0 /home/stevenvana/piper/src/python
piper-tts 1.2.0
platformdirs 4.2.2
pooch 1.8.2
protobuf 5.27.3
pycparser 2.22
pyDeprecate 0.3.2
pyproject_hooks 1.1.0
pytorch-lightning 1.7.7
PyYAML 6.0.2
requests 2.32.3
scikit-learn 1.5.1
scipy 1.14.1
setuptools 73.0.1
six 1.16.0
soundfile 0.12.1
soxr 0.4.0
sympy 1.13.2
tensorboard 2.17.1
tensorboard-data-server 0.7.2
threadpoolctl 3.5.0
tomli 2.0.1
torch 1.13.1
torchmetrics 0.11.4
tqdm 4.66.5
typing_extensions 4.12.2
urllib3 2.2.2
Werkzeug 3.0.4
wheel 0.44.0
### Versions
ubuntu 22.04
python 3.10 venv
cc @ptrblck @msaroufim @mruberry | module: cuda,triaged,module: fft | low | Critical |
2,484,827,453 | godot | False velocity values on moving elements when camera reaches a stop | ### Tested versions
- Reproductible in 4.3
### System information
Windows 10
### Issue description
When the active camera reaches a stop, objects' velocities in the velocity buffer appear to follow their previous frame's transform difference. The most obvious case is objects that are children of the camera and stay stationary relative to it: they still show false velocities the moment the camera stops, i.e. on the first frame where the camera's current and last transforms match after it has been moving. The issue seems specific to the camera reaching a stop, as otherwise it would also occur on any abrupt change of direction.
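To make the expected behavior concrete, here is a toy one-dimensional sketch (an illustration only, not Godot's actual velocity-buffer code): per-frame velocity is the difference of consecutive transforms, so an object locked to the camera should report exactly zero velocity on the very first frame the camera's transform repeats.

```python
# Toy 1-D model of a motion-vector buffer for an object parented to the camera.
# Velocity for frame i is position[i] - position[i - 1]; the expected value is
# exactly zero on the first frame after the camera stops moving.
camera_x = [0.0, 1.0, 2.0, 3.0, 3.0, 3.0]  # camera moves, then stops
velocities = [camera_x[i] - camera_x[i - 1] for i in range(1, len(camera_x))]
print(velocities)  # [1.0, 1.0, 1.0, 0.0, 0.0]
```

The report describes a nonzero velocity showing up where the fourth entry (the first frame after the stop) should already be zero.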
### Steps to reproduce
1. Download the small minimum reproduction project.
2. Play motion_blur_test.tscn.
3. Press the up and down arrows to cycle through different framerates; ideally press up once to be at the lowest, 2 fps.
4. Move with WASD, then stop moving: you will see the cube that moves with the camera get blurred on the frame the camera stopped.
5. Press Z to open the debug view; you will see the velocity buffer at the bottom right and a non-blurred color buffer at the top left. Repeat the same steps to see the velocity buffer in those instances. You may need to change directions for the velocities to be positive and show up.
6. Note how the extra velocities only happen when the camera stops, not when changing direction abruptly.
### Minimal reproduction project (MRP)
[godot-velocity-bug-minimum-reproduction.zip](https://github.com/user-attachments/files/16737779/godot-velocity-bug-minimum-reproduction.zip)
| bug,topic:rendering,topic:3d | low | Critical |
2,484,847,248 | godot | UFBX runtime loader not working | ### Tested versions
Godot v4.3.stable
### System information
Linux Mint 21 (Vanessa) - X11 - GLES3 (Compatibility) - NVIDIA GeForce RTX 3060 (nvidia; 535.183.01) - AMD Ryzen 5 2600X Six-Core Processor (12 Threads)
### Issue description
Loading an FBX model at runtime in GDScript doesn't work. More precisely: loading works partially, as you can see in the attached debugger images, but nothing is displayed in the 3D view.
### Steps to reproduce
1) Unzip the MRP project.
2) Change "PREFIX_DIR" in Viewer.gd to an absolute path on your system where you unzipped the MRP project.
3) Start the project and see that only the GLTF file is loaded while the FBX file produces an error.
4) Look in the debugger at the FBX object and see that it contains "ImportedMeshInstance3D" nodes where the loaded GLTF file has "MeshInstance3D".
### Minimal reproduction project (MRP)
[github_ufbx_bug.zip](https://github.com/user-attachments/files/16737913/github_ufbx_bug.zip)
Error messages:
> E 0:00:00:0934 Viewer.gd:28 @ _load_gltf_or_fbx(): No loader found for resource: /home/andreas/Games/Assets/3d/kaykit/KayKit_Adventurers_1.0_SOURCE/Characters/fbx/knight_texture.png (expected type: Texture2D)
> <C++ Error> Method/function failed. Returning: Ref<Resource>()
> <C++ Source> core/io/resource_loader.cpp:291 @ _load()
> <Stacktrace> Viewer.gd:28 @ _load_gltf_or_fbx()
> Viewer.gd:19 @ load_fbx()
> Viewer.gd:14 @ _ready()
>
> W 0:00:01:0019 Viewer.gd:30 @ _load_gltf_or_fbx(): Adding 'Knight_ArmLeft' as child to 'Skeleton3D' will make owner '' inconsistent. Consider unsetting the owner beforehand.
> <C++ Source> scene/main/node.cpp:1579 @ add_child()
> <Stacktrace> Viewer.gd:30 @ _load_gltf_or_fbx()
> Viewer.gd:19 @ load_fbx()
> Viewer.gd:14 @ _ready()
>
> W 0:00:01:0019 Viewer.gd:30 @ _load_gltf_or_fbx(): Adding 'Knight_ArmRight' as child to 'Skeleton3D' will make owner '' inconsistent. Consider unsetting the owner beforehand.
> <C++ Source> scene/main/node.cpp:1579 @ add_child()
> <Stacktrace> Viewer.gd:30 @ _load_gltf_or_fbx()
> Viewer.gd:19 @ load_fbx()
> Viewer.gd:14 @ _ready()
.......... (each bone is reported)

| bug,topic:import,topic:3d | low | Critical |
2,484,862,127 | ollama | Scheduler should respect main_gpu on multi-gpu setup | ### What is the issue?
The main_gpu option is not working as expected.
My system has two GPUs. I've sent the request to `/api/chat`
```
{
"model": "llama3.1:8b-instruct-q8_0",
"messages": [
{
"role": "user",
"content": "What is the color of our sky?"
}
],
"stream": false,
"keep_alive": -1,
"options": {
"use_mmap": false,
"main_gpu": 1
}
}
```
Expected behavior: the model is loaded on my second gpu (i.e. gpu 1)
Actual behavior: the model is always loaded on my first gpu (i.e. gpu 0), no matter how the `main_gpu` is (whether it's 0 or 1)
P.S. This model fits entirely on either of my GPUs; one GPU is enough to load all the weights.
I do know that I could set `CUDA_VISIBLE_DEVICES` to specify which GPU to use, as #1813 suggested.
But an environment variable is not as flexible as a request parameter (which can be adjusted per request).
Maybe this parameter is not correctly passed to llama.cpp, or llama.cpp is not selecting the GPU as expected?
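A sketch of the per-request flexibility being asked for. `chat_payload` is a hypothetical helper, not part of Ollama's API; the field names follow the request body shown above.

```python
import json

# Hypothetical helper illustrating why a request-level option matters: each
# call can target a different GPU, which a process-wide CUDA_VISIBLE_DEVICES
# setting cannot express. Field names follow the report's request body.
def chat_payload(prompt: str, gpu_index: int) -> dict:
    return {
        "model": "llama3.1:8b-instruct-q8_0",
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
        "keep_alive": -1,
        "options": {"use_mmap": False, "main_gpu": gpu_index},
    }

# Two back-to-back requests that, per the expected behavior, should land on
# different GPUs without restarting or reconfiguring the server.
payloads = [chat_payload("What is the color of our sky?", gpu) for gpu in (0, 1)]
print(json.dumps(payloads[1]["options"]))
```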
### OS
Docker
### GPU
Nvidia
### CPU
AMD
### Ollama version
0.3.6 | bug | low | Minor |
2,484,881,923 | neovim | tmux paste buffer gets cut off at around 16kB | ### Problem
Since nvim 0.9.0, pasting from tmux cuts the content at around 16kB (the exact point varies by a couple of hundred bytes). It works in Vim, nano, and older Neovims up to and including 0.8.3; broken since 0.9.0. Setting paste mode in Neovim has no impact on this behavior. Tried with alacritty, gnome-terminal, and xterm on Ubuntu 22.04/CentOS 7. Also tried the latest NVIM v0.10.2-dev-20+g1fd86be15, still broken. Tried different tmux versions (makes no difference).
### Steps to reproduce
<copy a few hundred lines into tmux buffer (more than 16kB of content)>
nvim --clean
i
<paste from tmux, either by shortcut or :paste-buffer tmux command>
<content gets truncated in the middle of the line at around 16kB)
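For reproducing, here is a quick sketch that generates content comfortably above the reported threshold (the cutoff point varies by a few hundred bytes per the report, so the payload is well past 16kB):

```python
# Build paste content well above the reported ~16 kB truncation point, with
# numbered lines so the cutoff is easy to spot after pasting.
lines = [f"line {i:05d}: " + "x" * 60 for i in range(300)]
payload = "\n".join(lines) + "\n"
print(f"{len(payload.encode())} bytes")  # prints "21900 bytes"
```

Writing the result to a file, loading it into the tmux buffer with `tmux load-buffer`, and pasting into `nvim --clean` should show the content getting cut off somewhere past 16kB.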
### Expected behavior
Pasting the whole tmux paste buffer content.
### Neovim version (nvim -v)
0.9.0 or higher
### Vim (not Nvim) behaves the same?
no, Vim 9.1
### Operating system/version
Ubuntu 22.04, CentOS 7
### Terminal name/version
xterm/gnome-terminal/alacritty
### $TERM environment variable
xterm-256color
### Installation
from source, also binaries from github for older versions | bug,clipboard,tui,has:workaround | low | Critical |
2,484,889,515 | godot | JNI Error, cant call Android plugin method with ByteArray (equivalent of java's byte[]) as return | ### Tested versions
- Reproduced in 4.3-stable (mono)
### System information
Android ARM64
### Issue description
Any method that returns a ByteArray (Kotlin `ByteArray`, Java's `byte[]`) can't be called from C#:
```kotlin
@UsedByGodot
fun testByteArray(): ByteArray {
...
}
```
No crash will occur but the pcall don't work too:
```
.reprobugandroidjni/com.godot.game.GodotApp]#1(BLAST Consumer)1: File name too long
08-24 18:46:53.227 26176 26216 V Godot : OnGodotSetupCompleted
08-24 18:46:53.322 26176 26216 E godot : USER ERROR: Method/function failed. Returning: Variant()
08-24 18:46:53.322 26176 26216 E godot : at: callp (platform/android/api/jni_singleton.h:185)
08-24 18:46:53.326 26176 26216 I godot : System.Byte[]
```
Changing the method to the following API, without these types, works:
```kotlin
@UsedByGodot
fun testByteArray(): String {
...
}
```
### Steps to reproduce
Create an android plugin with an exposed method that has Dictionary as argument or ByteArray (byte[]) as return and call it from C#.
### Minimal reproduction project (MRP)
[repro.zip](https://github.com/user-attachments/files/16738071/repro.zip)
| bug,platform:android,topic:plugin,topic:dotnet | low | Critical |
2,484,891,118 | godot | viewport texture pre-multiplied alpha values differ between compatibility and forward+ (compatibility is correct) | ### Tested versions
reproducible in 4.3 stable
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1650 (NVIDIA; 32.0.15.5612) - Intel(R) Core(TM) i3-10100 CPU @ 3.60GHz (8 Threads)
### Issue description
When using a viewport texture with transparency on a material with pre-multiplied alpha enabled, colours are not correctly pre-multiplied in Forward+ mode, but are correct in Compatibility.

If this is considered a duplicate of #88603, please keep this report, as it is more accurate/concise.
### Steps to reproduce
The example shows a viewport texture rendering a flat grey colour with gradient transparency, then overlaid back over the same grey.
View the example in 2D mode. In Forward+ there is a visible gradient where there should be a flat grey; switch to Compatibility and it is flat.
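As a reference for what "correct" looks like here, the compositing identity can be checked in a few lines (standard source-over with a premultiplied source; this is the identity the Compatibility renderer appears to satisfy):

```python
# A correctly premultiplied flat grey, composited back over the same grey with
# standard "source over" (out = src_premul + (1 - alpha) * dst), reproduces
# that grey exactly at every alpha value, so no gradient should be visible.
def source_over_premul(src_premul: float, src_alpha: float, dst: float) -> float:
    return src_premul + (1.0 - src_alpha) * dst

grey = 0.5
results = [source_over_premul(grey * a, a, grey) for a in (0.0, 0.25, 0.5, 0.75, 1.0)]
print(results)  # every entry is 0.5
```

The visible gradient in Forward+ suggests the source colour is not being multiplied by alpha before this blend.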
### Minimal reproduction project (MRP)
[premult_issue_v2.zip](https://github.com/user-attachments/files/16738075/premult_issue_v2.zip)
| topic:rendering,needs testing | low | Minor |
2,484,891,746 | godot | [Asset library] My uploaded project's page broke (?) and is inaccessible after a weird asset edit behaviour. | ### Tested versions
In the asset library. The project's page got messed up by an edit, and the library just hides it because it detected it being broken I think?
### System information
asset library
### Issue description
When making an edit to my project's page, the screenshot displays kept resetting seemingly at random, so I made a second and third edit to fix those issues. Ultimately one of the 6 screenshots (the 5th display, no matter which file I linked there) always appeared blank (I had never seen that happen before). The last edit also showed a version "change" from 0.6.93 to 0.6.93 (the same as before), and another one for the commit hash (again marked green despite being the same one). I believe that after this edit was accepted, the page broke and the asset became invisible on the site; when opened directly with a link, it just shows some HTML code. The broken asset page: https://godotengine.org/asset-library/asset/3247


### Steps to reproduce
The screenshot displays have always been inconsistent for me when editing assets, and this was just a new kind of random behaviour I have no idea what caused. There are 6 screenshots in total; the links come from opening each image in GitHub's file viewer, opening it in a new page, then pasting that link into both boxes of the screenshot display. The 6th box broke the first time I changed the number of displays from 3 to 6.
### Minimal reproduction project (MRP)
https://godotengine.org/asset-library/asset/3247 | bug,topic:assetlib | low | Critical |
2,484,907,513 | next.js | Parallel server action calls | ### Link to the code that reproduces this issue
https://github.com/yehonatanyosefi/parallel-routes-example
### To Reproduce
1. Start the application.
2. Click the start button.
3. See the console logs come out as ABAB on the server side with two POST requests, and on the client side the call taking 2 seconds instead of one even though it uses Promise.all.
### Current vs. Expected behavior
Current behavior: Next.js server actions execute sequentially, and there is no flag, option, or config to let them execute in parallel, which blocks some UX. For example, with AI image generation you have to wait for the last call to finish before making the next one, so you wait over 15 seconds before you can even generate more images. Expected behavior: being able to send the calls in parallel and stream in finished generations by awaiting them.
In the example I sent, even though I used Promise.all on 2 promises that each take a second (together they should be expected to take only 1 second), the sequential behavior makes it take 2.
```bash
when hosted on vercel it happens, locally it happens too:
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Pro
Available memory (MB): 32393
Available CPU cores: 20
Binaries:
Node: 20.10.0
npm: N/A
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.0-canary.128 // Latest available version is detected (15.0.0-canary.128).
eslint-config-next: N/A
react: 19.0.0-rc-eb3ad065-20240822
react-dom: 19.0.0-rc-eb3ad065-20240822
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
create-next-app, Parallel & Intercepting Routes, Performance
### Which stage(s) are affected? (Select all that apply)
next dev (local), Vercel (Deployed)
### Additional context
Edit: https://x.com/cramforce/status/1733240566954230063
The link above is a reply by Vercel's CTO agreeing that there should be control over whether actions run in parallel.
2,484,911,964 | godot | PropertyInfo Usage has no effect in GDExtension | ### Tested versions
- Reproducible in 4.3
### System information
Windows 11 - Godot 4.3 - Forward+
### Issue description
While developing a GDExtension plugin for Godot, I've noticed that `usage` has no effect when adding a custom editor setting: you can enter a random int and it doesn't affect anything. The main reason I need this is `PROPERTY_USAGE_RESTART_IF_CHANGED`, which doesn't work.
It's necessary for GDExtension editor settings to be able to warn the user to restart the editor for certain critical settings.
```CPP
PropertyInfo propertyGen(Variant::INT, "gdrtx/compute_model", PropertyHint::PROPERTY_HINT_ENUM, "Cuda,OpenCL",
PROPERTY_USAGE_DEFAULT | PROPERTY_USAGE_RESTART_IF_CHANGED);
editor_settings->add_property_info(propertyGen);
```
The same issue was addressed in https://github.com/godotengine/godot/pull/66079, which fixed it for ProjectSettings but not EditorSettings.
### Steps to reproduce
1. Create a simple EditorPlugin and register it.
2. Get Editor Settings Instance and Register a new PropertyInfo with `PROPERTY_USAGE_RESTART_IF_CHANGED` flag
3. Run editor.
4. Change the setting and you will be not prompted for restart.
### Minimal reproduction project (MRP)
No Project | bug,topic:editor,topic:gdextension | low | Minor |