| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,569,627,355 | ollama | Linux install - better support multi-vendor systems | ### What is the issue?
When the script finds NVIDIA GPU software, it just exits.
https://github.com/ollama/ollama/blob/defbf9425af8228f3420d567e9eeaa29d8ac87e3/scripts/install.sh#L189-L192
But in theory, a system can have multiple GPUs from multiple vendors.
In my case, I removed `exit 0` from the script so it would also install the additional components for my AMD GPU.
### OS
Linux
### GPU
Nvidia, AMD
### CPU
AMD
### Ollama version
0.3.12 | feature request,linux,install | low | Minor |
2,569,637,159 | Python | Transposition cipher | ### Feature description
I would like to add a transposition cipher to the ciphers folder. The transposition cipher is an encryption technique that works by rearranging the positions of the letters of the plaintext. | enhancement | medium | Minor |
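A minimal sketch of such a cipher (assuming the key is simply a column count; purely illustrative, not the proposed contribution):

```python
def encrypt(plaintext: str, key: int) -> str:
    """Columnar transposition: write row by row into `key` columns, read column by column."""
    return "".join(plaintext[col::key] for col in range(key))

def decrypt(ciphertext: str, key: int) -> str:
    """Invert the transposition by splitting the ciphertext back into columns."""
    rows, extra = divmod(len(ciphertext), key)  # the first `extra` columns hold one extra char
    columns, i = [], 0
    for col in range(key):
        length = rows + (1 if col < extra else 0)
        columns.append(ciphertext[i:i + length])
        i += length
    # Read back row by row across the columns.
    return "".join(
        columns[col][row]
        for row in range(rows + 1)
        for col in range(key)
        if row < len(columns[col])
    )

message = "Common sense is not so common."
assert decrypt(encrypt(message, 8), 8) == message
```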
2,569,669,988 | flutter | Prebuilt artifacts for linux-arm64 contain an x86-64 frontend_server_aot.dart.snapshot | ### Steps to reproduce
1. try using [meta-flutter](https://github.com/meta-flutter/meta-flutter) to build Flutter apps for a linux arm64 target on an arm64 host
2. at some point, flutter-sdk_git.bb downloads https://storage.googleapis.com/flutter_infra_release/releases/stable/linux/flutter_linux_3.24.2-stable.tar.xz (but the same applies to https://storage.googleapis.com/flutter_infra_release/flutter/a6bd3f1de158bb61090e0c8053df93a10cb548e1/linux-arm64/artifacts.zip)
### Actual results
inside the archive, bin/cache/artifacts/engine/linux-x64/frontend_server_aot.dart.snapshot is in the wrong architecture:
```
file flutter/bin/cache/artifacts/engine/linux-x64/frontend_server_aot.dart.snapshot
flutter/bin/cache/artifacts/engine/linux-x64/frontend_server_aot.dart.snapshot: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[md5/uuid]=7d7673e2df1d68aa1222110aef3343b6, with debug_info, not stripped
```
This prevents building flutter apps on the host.
The archive for 3.24.3 has the same problem.
### Logs
_No response_
### Flutter Doctor output
<details open>
<summary>Doctor output</summary>
```console
[!] Flutter (Channel stable, 3.24.2, on Ubuntu 22.04.5 LTS 5.15.0-122-generic, locale en_US.UTF-8)
• Flutter version 3.24.2 on channel stable at
/home/simone/yocto/kirkstone/build/tmp/sysroots-components/aarch64/flutter-sdk-native/usr/share/flutter/sdk
! The flutter binary is not on your path. Consider adding
/home/simone/yocto/kirkstone/build/tmp/sysroots-components/aarch64/flutter-sdk-native/usr/share/flutter/sdk/bin to your path.
! The dart binary is not on your path. Consider adding
/home/simone/yocto/kirkstone/build/tmp/sysroots-components/aarch64/flutter-sdk-native/usr/share/flutter/sdk/bin to your path.
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 4cf269e36d (5 weeks ago), 2024-09-03 14:30:00 -0700
• Engine revision a6bd3f1de1
• Dart version 3.5.2
• DevTools version 2.37.2
• If those were intentional, you can disregard the above warnings; however it is recommended to use "git" directly to perform update checks and
upgrades.
[✗] Android toolchain - develop for Android devices
✗ Unable to locate Android SDK.
Install Android Studio from: https://developer.android.com/studio/index.html
On first launch it will assist you in installing the Android SDK components.
(or visit https://flutter.dev/to/linux-android-setup for detailed instructions).
If the Android SDK has been installed to a custom location, please use
`flutter config --android-sdk` to update to that location.
[✗] Chrome - develop for the web (Cannot find Chrome executable at google-chrome)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[✓] Linux toolchain - develop for Linux desktop
• Ubuntu clang version 14.0.0-1ubuntu1.1
• cmake version 3.22.1
• ninja version 1.10.1
• pkg-config version 0.29.2
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/to/linux-android-setup for detailed instructions).
[✓] Connected device (1 available)
• Linux (desktop) • linux • linux-arm64 • Ubuntu 22.04.5 LTS 5.15.0-122-generic
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 4 categories.
</details>
| engine,platform-linux,a: build,P2,platform-host-arm,platform-target-arm,team-engine,triaged-engine | low | Critical |
2,569,677,883 | PowerToys | New+ - can we input a variable when creating folders? | ### Description of the new feature / enhancement
Rather than just duplicating the folder names in the templates area, can New+ be enhanced to allow a variable name to be added at the beginning or end of each folder? For example, if I have a folder called "Project", it would allow me to enter a variable called "Holiday in Turkey" and the resultant folder would be "Holiday in Turkey Project". This variable should also propagate to any subfolders underneath "Project", e.g. if "Project" has subfolders called JPG, RAW and Finished, each of them would have the variable appended or prepended to make it unique.
### Scenario when this would be used?
I would otherwise end up with a lot of folders with the same name and would need to edit them to make them unique.
### Supporting information
See above. | Needs-Triage | low | Major |
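The naming rule requested above can be sketched as a small pure function (purely illustrative Python, not PowerToys code; the `mode` parameter and the dict representation of a template are assumptions):

```python
def apply_variable(template_folders, variable, mode="prepend"):
    """Return template folder names with `variable` prepended or appended.

    `template_folders` maps each template folder name to a list of its
    subfolder names; the variable propagates to every subfolder.
    """
    def rename(name):
        return f"{variable} {name}" if mode == "prepend" else f"{name} {variable}"

    return {
        rename(folder): [rename(sub) for sub in subfolders]
        for folder, subfolders in template_folders.items()
    }

# The example from the request: "Project" with three subfolders.
templates = {"Project": ["JPG", "RAW", "Finished"]}
print(apply_variable(templates, "Holiday in Turkey"))
```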
2,569,690,817 | puppeteer | Support ShadowRoots in MutationPoller | Currently, MutationPoller only creates a MutationObserver for the document, and that observer does not track mutations inside shadow roots.
We should implement tracking of existing and new shadow roots in MutationPoller, creating an observer instance for each shadow root so that mutations inside shadow roots are detected.
Related https://github.com/puppeteer/puppeteer/pull/13153 | feature,confirmed,P2 | low | Minor |
2,569,751,352 | kubernetes | bug(fakeclient): use fakeclient to create resource objects with GenerateName multiple times | ### What happened?
When writing unit tests, we often use fakeclient to simulate the behavior of the client creating resources. However, when creating resources with GenerateName multiple times, an error occurs.
### What did you expect to happen?
Creating resources with GenerateName multiple times should succeed.
### How can we reproduce it (as minimally and precisely as possible)?
code like this:
```go
func TestExample(t *testing.T) {
	// use a real client set; it works to create two pods with GenerateName
	// kubeClient := client.ClientSet.Client
	kubeClient := fake.NewSimpleClientset()
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "my-pod-", // generateName
			Namespace:    "default",
		},
		Spec: v1.PodSpec{
			Containers: []v1.Container{
				{
					Name:  "container1",
					Image: "nginx:latest",
				},
			},
		},
	}
	ctx := context.Background()
	createdPod, err := kubeClient.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
	if err != nil {
		t.Fatalf("Failed to create pod: %v", err)
	}
	fmt.Printf("Created Pod: %s\n", createdPod.GenerateName)
	pod2 := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "my-pod-",
			Namespace:    "default",
		},
		Spec: v1.PodSpec{
			Containers: []v1.Container{
				{
					Name:  "container1",
					Image: "nginx:latest",
				},
			},
		},
	}
	createdPod2, err := kubeClient.CoreV1().Pods("default").Create(ctx, pod2, metav1.CreateOptions{})
	if err != nil {
		t.Fatalf("Failed to create pod: %v", err)
	}
	fmt.Printf("Created Pod: %s\n", createdPod2.GenerateName)
	// expectedPrefix := "my-pod-"
	// if !strings.HasPrefix(createdPod.Name, expectedPrefix) {
	// 	t.Errorf("Expected pod name to start with %q, got %q", expectedPrefix, createdPod.Name)
	// }
}
```
we got:
```
=== RUN TestExample
Created Pod: my-pod-
plugin_test.go:316: Failed to create pod: pods "" already exists
--- FAIL: TestExample (0.00s)
```
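For context, a real apiserver resolves GenerateName by appending a random suffix before checking name uniqueness, which is the behavior the fake client does not emulate here. A rough Python sketch of the idea (the suffix length and alphabet are illustrative assumptions, not the exact apiserver values):

```python
import random

# Assumed suffix alphabet for illustration; chosen to avoid ambiguous characters.
SUFFIX_ALPHABET = "bcdfghjklmnpqrstvwxz2456789"

def generate_name(base: str, existing: set, suffix_len: int = 5) -> str:
    """Mimic apiserver GenerateName: append a random suffix, retrying on collision."""
    while True:
        name = base + "".join(random.choices(SUFFIX_ALPHABET, k=suffix_len))
        if name not in existing:
            existing.add(name)
            return name

names = set()
first = generate_name("my-pod-", names)
second = generate_name("my-pod-", names)
# Two creations with the same GenerateName prefix yield distinct names.
```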
### Anything else we need to know?
I'm not sure if this is a bug or a feature.
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/api-machinery,kind/feature,triage/accepted | low | Critical |
2,569,758,251 | PowerToys | Feature Request: Cascade Windows | ### Description of the new feature / enhancement
I would very much like to see a "Cascade Windows" feature, similar to the one built into Windows 10.
### Scenario when this would be used?
I work with multiple monitors at multiple locations (I dock my laptop), and oftentimes windows opened on one external monitor remain there, so I have to painstakingly drag them around.
A shortcut to bring all open applications to the main screen would be a godsend!
### Supporting information
Have you ever had to hover over an application's window, then right-click -> select "Move" and hold down the arrow keys on your keyboard, in order to bring the window onto your laptop's main screen because the external one is no longer connected? This is an everyday hell for me... | Needs-Triage | low | Minor |
2,569,811,121 | electron | [Bug]: Debugger misses WebSocket packets when page loads external URL/script | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
32.1.2
### What operating system(s) are you using?
Windows
### Operating System Version
Windows 11
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
The `webContents.debugger` should not "miss" packets, regardless of what happens in the page.
### Actual Behavior
The `webContents.debugger` misses packets sent by a WebSocket server while the page loads external scripts/UI/iframes (or specifically, reCAPTCHA).
### Testcase Gist URL
https://gist.github.com/adigerber/fd0df6a1f70c133dd46bc26cad4848d5
### Additional Information
Hi,
In the gist I've provided:
* The application creates a WebSocketServer (**note: you need to add the `ws` module to the fiddle**) that sends packets of an ever-increasing counter.
* The application creates a basic window and uses `webContents.debugger` with network enabled to listen to received packets. When a packet is received, it prints its payload. When the connection closes, it prints the total amount of packets that it caught.
* The renderer - a basic web application that connects to the aforementioned WebSocketServer and displays the data it receives. When it receives "10", it renders a reCAPTCHA v2 challenge explicitly (note: the configured site key does not have to be valid for the example). When it receives "15", it closes the connection and does nothing further.
Basically, there's an application that monitors the WebSocket data received by the webapp. However, when "10" is received, the webapp renders the reCAPTCHA challenge (a bogus one, but that is irrelevant; it's rendered at the bottom right of the window), BUT the debugger does NOT emit a `Network.webSocketFrameReceived` event for that "10" message, which is the bug I'm reporting.
At the end of it, the application closes the WebSocket when it receives "15" from the server, at which point:
* The webapp has logged 15 packets that were received
* The WebSocketServer has logged that it has sent 15 packets
* The debugger has logged that it received 14 packets, and "10" is missing. | platform/windows,bug :beetle:,has-repro-gist,stale,32-x-y | low | Critical |
2,569,812,680 | flutter | Option to disable tooltips bypassing wait duration when cursor moves from one tooltip to another | ### Use case
My own use-case is a bit limited. We currently have an emoji and an icon picker in our application; each option has a tooltip on hover with a 300 ms wait duration. It's a scrollable container, so if you show the tooltip for one option and then move your cursor or scroll, tooltips flash in and out, because once one tooltip has been triggered, moving the cursor onto another tooltip container displays it instantly.
The tooltip blocks scroll behavior, so once you show the tooltip on an option, you have to completely leave the container for the tooltip state to reset and stop showing. If you rest your cursor for e.g. 300 ms on an option and then start scrolling while moving your cursor slightly, there's a good chance your cursor will end up on a random tooltip that blocks the scrolling.
### Proposal
Neither the Material 3 specification nor the Flutter documentation mentions this behavior. I can observe the behavior in e.g. Google Meet, but the use-case there is wildly different: a few minor actions with tooltips, versus a popover filled with options, each with a tooltip, in a scrollable environment.
I'd love an option to disable this behavior, either in the TooltipThemeData or wrapping in some kind of TooltipContainer or similar.
It would also be nice if this behavior could be documented, at least in the Flutter documentation for the Tooltip.
Simple reproduction:
```dart
import 'package:flutter/material.dart';

void main() => runApp(const TooltipExampleApp());

class TooltipExampleApp extends StatelessWidget {
  const TooltipExampleApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      theme:
          ThemeData(tooltipTheme: const TooltipThemeData(preferBelow: false)),
      home: Scaffold(
        appBar: AppBar(title: const Text('Tooltip Sample')),
        body: const Center(
          child: Column(
            children: [TooltipSample(), TooltipSample()],
          ),
        ),
      ),
    );
  }
}

class TooltipSample extends StatelessWidget {
  const TooltipSample({super.key});

  @override
  Widget build(BuildContext context) {
    return Tooltip(
      message: 'I am a Tooltip',
      decoration: BoxDecoration(
        borderRadius: BorderRadius.circular(25),
        gradient:
            const LinearGradient(colors: <Color>[Colors.amber, Colors.red]),
      ),
      height: 50,
      padding: const EdgeInsets.all(8.0),
      preferBelow: true,
      textStyle: const TextStyle(
        fontSize: 24,
      ),
      waitDuration: const Duration(seconds: 1),
      child: const Text('Tap this text and hold down to show a tooltip.'),
    );
  }
}
```
Video of our emoji picker and the tooltip behavior:
https://github.com/user-attachments/assets/f30e947f-fa03-4a1f-b20f-c9b9e94412e8
| c: new feature,framework,f: material design,c: proposal,a: desktop,P3,team-design,triaged-design | low | Minor |
2,569,838,365 | godot | Script editor cursor is randomly 1px white or 2px grey | ### Tested versions
- Only tested in Godot v4.3.stable
### System information
Godot v4.3.stable - macOS 15.0.1 - Vulkan (Mobile) - integrated Apple M3 Max - Apple M3 Max (14 Threads)
### Issue description
https://github.com/user-attachments/assets/dcf8a4dd-f43f-4f7b-a9b4-2ed18ecd4f75
### Steps to reproduce
Click next to different characters in the script editor and witness the cursor being either 1px wide or 2px wide.
### Minimal reproduction project (MRP)
n/a | bug,topic:editor,needs testing,topic:gui | low | Minor |
2,569,840,109 | vscode | SCM - Source control UI improvements | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
The text labels in the source control view are a little verbose and repetitive because the phrase "Source Control" appears in every section (see the screenshot below). What do you think about simplifying the labels? For example:
- Source Control Repositories → Workspace Repositories
- Source Control → Repository
- Source Control Graph → Graph
 | scm,under-discussion | low | Minor |
2,569,857,989 | puppeteer | [Feature]: Ability to Save and Share Cache in `createBrowserContext`. | ### Feature description
Currently, when using `createBrowserContext`, Puppeteer does not provide built-in functionality to save and share the cache between different browser contexts. This feature would be helpful in scenarios where we need to persist cache across sessions or share it with other browser contexts for improved performance and efficiency.
Proposed solution: introduce methods to:
- Save the cache (HTTP resources, scripts, images, etc.) of a browser context to a file or storage.
- Load the saved cache into a new or existing BrowserContext.
Use case: this feature would be beneficial for applications where:
- Browser contexts are used in multiple sessions but need the same cached resources.
- Network calls are reduced, and tests or web scraping sped up, by reusing cached resources between contexts. | feature,P3 | low | Major |
2,570,010,604 | vscode | Blank cell in Jupyter Notebook | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.93.1
- OS Version: Ubuntu 22.04

The cell cannot be edited in any way and can only be deleted.
Steps to Reproduce: Appears occasionally during normal work.
```
Version: 1.93.1
Commit: 38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40
Date: 2024-09-11T17:20:05.685Z
Electron: 30.4.0
ElectronBuildId: 10073054
Chromium: 124.0.6367.243
Node.js: 20.15.1
V8: 12.4.254.20-electron.0
OS: Linux x64 6.5.0-21-generic
``` | bug,notebook-layout | low | Critical |
2,570,039,570 | node | Internal node assertion caused by js copy mechanism | ### Version
22.9.0
### Platform
```text
image node:22.9.0-bullseye-slim
```
### Subsystem
_No response_
### What steps will reproduce the bug?
Run in node cli:
```js
const undici = require('undici')
const { default: fastCopy } = require('fast-copy')
const a = new undici.Agent()
a.request({ method: 'GET', origin: 'https://google.com', path: '/' }).then(r => r.body.text())
fastCopy(a)
```
### How often does it reproduce? Is there a required condition?
everytime
### What is the expected behavior? Why is that the expected behavior?
There should be meaningful protection and a meaningful error message at the JS level.
### What do you see instead?
Process crash
```
│ node[80]: static void node::TCPWrap::New(const v8::FunctionCallbackInfo<v8::Value>&) at ../src/tcp_wrap.cc:155
│ # Assertion failed: args[0]->IsInt32()
│ │
│ ----- Native stack trace -----
│ 1: 0xf462ec node::Assert(node::AssertionInfo const&) [node]
│ 2: 0x1088d7c node::TCPWrap::New(v8::FunctionCallbackInfo<v8::Value> const&) [node]
│ 3: 0x1239b24 [node]
│ 4: 0x1239dcc v8::internal::Builtin_HandleApiConstruct(int, unsigned long*, v8::internal::Isolate*) [node]
│ 5: 0x1cfb8f4 [node] │
│ ----- JavaScript stack trace -----
│ 1: getCleanClone (/home/app/node_modules/fast-copy/dist/cjs/index.cjs:52:20)
│ 2: copyObjectLooseModern (/home/app/node_modules/fast-copy/dist/cjs/index.cjs:214:17)
│ 3: copier (/home/app/node_modules/fast-copy/dist/cjs/index.cjs:371:20)
│ 4: copyObjectLooseModern (/home/app/node_modules/fast-copy/dist/cjs/index.cjs:226:35)
│ 5: copier (/home/app/node_modules/fast-copy/dist/cjs/index.cjs:371:20)
│ 6: copyObjectLooseModern (/home/app/node_modules/fast-copy/dist/cjs/index.cjs:226:35)
│ 7: copier (/home/app/node_modules/fast-copy/dist/cjs/index.cjs:371:20)
│ 8: copyArrayLoose (/home/app/node_modules/fast-copy/dist/cjs/index.cjs:147:30)
│ 9: copier (/home/app/node_modules/fast-copy/dist/cjs/index.cjs:367:20)
│ 10: copyObjectLooseModern (/home/app/node_modules/fast-copy/dist/cjs/index.cjs:226:35)
```
### Additional information
Using a pure JS library, **without any native code manipulation**, causes an internal Node.js error and a process crash.
Here is the library:
https://github.com/planttheidea/fast-copy | net | medium | Critical |
2,570,050,419 | tensorflow | Error building speech_commands with the ARM platform | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
2.17.0
### Custom code
No
### OS platform and distribution
Ubuntu 24.04
### Mobile device
Armv7
### Python version
3.12.3
### Bazel version
6.5.0
### GCC/compiler version
_No response_
### CUDA/cuDNN version
none
### GPU model and memory
none
### Current behavior?
The following error occurs when building
`external/gemmlowp/meta/streams_arm_32.h:1535:3: error: 'asm' operand has impossible constraints`
### Standalone code to reproduce the issue
```shell
bazel build --config=elinux_armhf //tensorflow/examples/speech_commands:recognize_commands
```
### Relevant log output
```shell
~/tensorflow_src$ bazel build --config=elinux_armhf //tensorflow/examples/speech_commands:recognize_commands
INFO: Reading 'startup' options from /home/linus/tensorflow_src/.bazelrc: --windows_enable_symlinks
INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=115
INFO: Reading rc options for 'build' from /home/linus/tensorflow_src/.bazelrc:
Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'build' from /home/linus/tensorflow_src/.bazelrc:
'build' options: --define framework_shared_object=true --define tsl_protobuf_header_only=true --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --features=-force_no_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --experimental_cc_shared_library --experimental_link_static_libraries_once=false --incompatible_enforce_config_setting_visibility
INFO: Found applicable config definition build:short_logs in file /home/linus/tensorflow_src/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /home/linus/tensorflow_src/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition build:elinux_armhf in file /home/linus/tensorflow_src/.bazelrc: --config=elinux --cpu=armhf --copt -mfp16-format=ieee
INFO: Found applicable config definition build:elinux in file /home/linus/tensorflow_src/.bazelrc: --crosstool_top=@local_config_embedded_arm//:toolchain --host_crosstool_top=@bazel_tools//tools/cpp:toolchain
INFO: Found applicable config definition build:linux in file /home/linus/tensorflow_src/.bazelrc: --host_copt=-w --copt=-Wno-all --copt=-Wno-extra --copt=-Wno-deprecated --copt=-Wno-deprecated-declarations --copt=-Wno-ignored-attributes --copt=-Wno-array-bounds --copt=-Wunused-result --copt=-Werror=unused-result --copt=-Wswitch --copt=-Werror=switch --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++17 --host_cxxopt=-std=c++17 --config=dynamic_kernels --experimental_guard_against_concurrent_changes
INFO: Found applicable config definition build:dynamic_kernels in file /home/linus/tensorflow_src/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
INFO: Analyzed target //tensorflow/examples/speech_commands:recognize_commands (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
ERROR: /home/linus/tensorflow_src/tensorflow/core/kernels/BUILD:6327:11: Compiling tensorflow/core/kernels/meta_support.cc failed: (Exit 1): arm-none-linux-gnueabihf-gcc failed: error executing command (from target //tensorflow/core/kernels:meta_support) /home/linus/.cache/bazel/_bazel_linus/dfed2ce84bdb0ad21d45a92561825dca/external/armhf_linux_toolchain/bin/arm-none-linux-gnueabihf-gcc -fstack-protector -g0 -O2 -DNDEBUG -ffunction-sections ... (remaining 135 arguments skipped)
In file included from external/gemmlowp/meta/streams.h:307,
from external/gemmlowp/meta/quantized_mul_kernels.h:22,
from ./tensorflow/core/kernels/meta_support.h:21,
from tensorflow/core/kernels/meta_support.cc:18:
external/gemmlowp/meta/streams_arm_32.h: In static member function 'static void gemmlowp::meta::GemmExecutorPackRHS::ExecuteDispatch3D(const P&) [with P = gemmlowp::meta::GemmParams<unsigned char, int, gemmlowp::meta::RowMajorWithSum, gemmlowp::meta::RowMajorWithSum, gemmlowp::meta::QuantizedStaticPreprocessedAsInt32, gemmlowp::meta::RowMajor>; int m = 2; int n = 4; int k = 8; int m_leftovers = 0; int n_leftovers = 0; int k_leftovers = 0]':
external/gemmlowp/meta/streams_arm_32.h:1535:3: error: 'asm' operand has impossible constraints
1535 | asm volatile(
| ^~~
Target //tensorflow/examples/speech_commands:recognize_commands failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 107.917s, Critical Path: 91.70s
INFO: 13 processes: 7 internal, 6 local.
FAILED: Build did NOT complete successfully
```
| type:build/install,subtype: ubuntu/linux,2.17 | low | Critical |
2,570,074,763 | react | [DevTools Bug]: Profiler fails to create a profile for large pages | ### Website or app
My Project
### Repro steps
If the page is large enough, the profiler fails to create a profile even for a small time window, and the page freezes.
I need a way to profile my pages. I think a file output instead of the visualizer could be helpful.
### How often does this bug happen?
Every time
### DevTools package (automated)
_No response_
### DevTools version (automated)
_No response_
### Error message (automated)
_No response_
### Error call stack (automated)
_No response_
### Error component stack (automated)
_No response_
### GitHub query string (automated)
_No response_ | Type: Bug,Status: Unconfirmed,Component: Developer Tools | medium | Critical |
2,570,089,672 | bitcoin | build: RFC Coverage build type | Porting this issue from: https://github.com/hebasto/bitcoin/issues/341.
It's not clear whether our Coverage build type works with Clang. In the linked thread there are simultaneous claims that it "works", but also that it does not; it seems from the discussion that even using GCC with the Coverage build type is flaky.
There have been suggestions to add a Coverage mode for Clang: https://github.com/hebasto/bitcoin/pull/233. However, adding a second way of doing things, when the current way may not work properly and (likely?) isn't currently being used by anyone, doesn't seem like a good approach, and adds even more complication to the build system. Note that devs are already using Clang for coverage regardless of whether it currently exists in the build system (cc @dergoegge, @marcofleon, @vasild).
If using Clang's native coverage mode is preferred/"better", then it would seem better to replace the current, more GCC-focused implementation with one geared towards LLVM/Clang, if the idea is to provide something that works out of the box and is generally used by the developers working on the project. | Brainstorming,Build system | low | Major |
2,570,113,740 | godot | [uid] UIDs generated for a .png are the same for 2 files | ### Tested versions
- Reproducible in v4.4.dev3.mono.official [f4af8201b]
### System information
Windows 11
### Issue description
` editor/editor_file_system.cpp:1259 - UID duplicate detected between res://textures/special/clip.png and res://textures/shaders/tangent-test.png.`
### Steps to reproduce
This is where I am unsure about the repro steps; could the UID function be generating IDs that are too similar to each other to be considered unique?
My project was created 2 days ago; I imported the Qodot plugin and dragged the textures folder from res://addons/qodot/textures to res://textures/.
I did nothing else with these files.
### Minimal reproduction project (MRP)
I've attached the bugged assets and various bits that might explain the issue.
I put the .godot/imported files in a directory called .imported/ to try to help you in debugging.
[bugged_files_example.zip](https://github.com/user-attachments/files/17277475/bugged_files_example.zip)
| bug,topic:editor | low | Critical |
2,570,147,103 | langchain | embedding_key param does not work in MongoDBAtlasVectorSearch | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Description
This next code throws an error related to the `embedding_key` field:
```python
import os
import pprint

from langchain_mongodb import MongoDBAtlasVectorSearch
from langchain_openai import OpenAIEmbeddings
from pymongo import MongoClient

inference_api_key = os.getenv("OPENAI_API_KEY")
mongodb_cluster_uri = os.getenv("MONGODB_ATLAS_CLUSTER_URI")
db_name = os.getenv("DB_NAME")
collection_name = os.getenv("COLLECTION_NAME")
atlas_vector_search_index_name = os.getenv("ATLAS_VECTOR_SEARCH_INDEX_NAME")

embeddings = OpenAIEmbeddings(openai_api_key=inference_api_key, model="text-embedding-ada-002")
vector = embeddings.embed_query("hello")
print(vector[:3])

client = MongoClient(mongodb_cluster_uri)
collection = client[db_name][collection_name]
vector_store = MongoDBAtlasVectorSearch(
    collection=collection,
    embedding=embeddings,
    embedding_key="log_message_embedding",
    relevance_score_fn="cosine",
)

query = "Has any goal been aborted?"
results = vector_store.similarity_search(query)
pprint.pprint(results)
```
It returns this error:
```
pymongo.errors.OperationFailure: PlanExecutor error during aggregation :: caused by :: embedding is not indexed as vector, full error: {'ok': 0.0, 'errmsg': 'PlanExecutor error during aggregation :: caused by :: embedding is not indexed as vector', 'code': 8, 'codeName': 'UnknownError', '$clusterTime': {'clusterTime': Timestamp(1728299912, 6), 'signature': {'hash': b"X\xe6!2\xb9\xc7P\xd66Q\x85g\x98'\x91\xe9\xfa\x0cH ", 'keyId': 7384849210639646731}}, 'operationTime': Timestamp(1728299912, 6)}
```
The code tries to use `embedding` instead of the `embedding_key` value I specified.
Thanks in advance.
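As a sketch of a possible workaround while the parameter is ignored, one could build the Atlas `$vectorSearch` aggregation stage directly with the custom field as `path` (the index name, query vector, and numeric values below are placeholders; the stage is only constructed here, not executed):

```python
def build_vector_search_stage(index_name, path, query_vector,
                              num_candidates=100, limit=4):
    """Build an Atlas $vectorSearch aggregation stage by hand, pointing
    `path` at the custom embedding field instead of the default `embedding`."""
    return {
        "$vectorSearch": {
            "index": index_name,
            "path": path,
            "queryVector": query_vector,
            "numCandidates": num_candidates,
            "limit": limit,
        }
    }

stage = build_vector_search_stage(
    "my_index", "log_message_embedding", [0.1, 0.2, 0.3]
)
# collection.aggregate([stage]) would then search the intended field.
```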
### Error Message and Stack Trace (if applicable)
```
pymongo.errors.OperationFailure: PlanExecutor error during aggregation :: caused by :: embedding is not indexed as vector, full error: {'ok': 0.0, 'errmsg': 'PlanExecutor error during aggregation :: caused by :: embedding is not indexed as vector', 'code': 8, 'codeName': 'UnknownError', '$clusterTime': {'clusterTime': Timestamp(1728299912, 6), 'signature': {'hash': b"X\xe6!2\xb9\xc7P\xd66Q\x85g\x98'\x91\xe9\xfa\x0cH ", 'keyId': 7384849210639646731}}, 'operationTime': Timestamp(1728299912, 6)}
```
### System Info
System Information
------------------
> OS: Linux
> OS Version: #45~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Sep 11 15:25:05 UTC 2
> Python Version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0]
Package Information
-------------------
langchain_core: 0.3.9
langchain: 0.3.1
langchain_community: 0.3.1
langsmith: 0.1.131
langchain_mongodb: 0.2.0
langchain_openai: 0.2.2
langchain_text_splitters: 0.3.0
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.9
> async-timeout: 4.0.3
> dataclasses-json: 0.6.7
> httpx: 0.27.2
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.51.0
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.5.2
> pymongo: 4.9.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 8.5.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
| Ɑ: vector store | low | Critical |
2,570,167,958 | ui | [feat]: Request to Add Locale-Specific Date-Time Input Component | ### Feature description
You guys have created a whole library, but there's no component that can replace the need to use the native `<input type="datetime-local">`. There should be a similar component that lets the user easily enter both a date and a time, including the year, month, and day as well as the time in hours and minutes.
### Affected component/components
_No response_
### Additional Context
Additional details here...
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Major |
2,570,244,732 | godot | Singleton identifiers cannot appear in MainLoop script | ### Tested versions
Reproducible in: v4.4.dev [2a2e6213c3beefd6511a0a8cb5f3c81e1d0b19c6]
Theoretically it can be reproduced in any version that has autoload singletons.
### System information
Godot v4.4.dev (e3213aaef) - Windows 10.0.22631 - Single-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4060 Laptop GPU (NVIDIA; 31.0.15.4660) - AMD Ryzen 7 8845H w/ Radeon 780M Graphics (16 threads)
### Issue description
You can't use autoload singleton identifiers in MainLoop scripts.
### Steps to reproduce
Add any autoload script and give it a singleton name.
Add a MainLoop script and reference that singleton name anywhere in it. Set the script as the MainLoop in the project settings.
Run any scene.
For MRP: Run main.tscn
### Minimal reproduction project (MRP)
[SingletonInMainLoop.zip](https://github.com/user-attachments/files/17278000/SingletonInMainLoop.zip) | bug,topic:gdscript | low | Minor |
2,570,290,151 | go | crypto/rsa: implement optional SP 800-89 public key validation (with bigmod) | SP 800-89 Section 5.3.3 has a convoluted and generally pointless, but mandatory, process to "partially validate" RSA public keys. For https://github.com/golang/go/issues/69536, we'll need to be able to optionally perform it using crypto/internal/bigmod (since we don't want math/big within the module boundary). | NeedsFix | low | Minor |
2,570,319,346 | neovim | quickfix "(1 of X): ..." message gets immediately overwritten when using `nvim -q` | ### Problem
Hi,
whenever I open neovim with the `-q` flag, the quickfix message "(1 of X): ..." gets immediately overwritten with a blank line. Using :cnext/:cprev shows the messages and they won't get overwritten. I bisected and it happens to me since e41368f3bc1d08d900425608bd199f585d6fce59.
### Steps to reproduce
```bash
echo some text > my_file
echo 'my_file:1:1:some_text' > qf_file
nvim --clean -q qf_file
```
### Expected behavior
The "(1 of X): ..." message should stay visible and not be overwritten when the editor opens.
### Nvim version (nvim -v)
NVIM v0.11.0-dev-921+g2377443cd2
### Vim (not Nvim) behaves the same?
No, vim doesn't have this issue
### Operating system/version
Arch Linux
### Terminal name/version
kitty 0.36.4
### $TERM environment variable
xterm-kitty
### Installation
Arch User Repository (AUR) | bug,startup,messages | low | Minor |
2,570,342,330 | vscode | Can't create Breakpoint with custom mode from extension | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.94.0
- OS Version: Debian 12
Steps to Reproduce:
I have a Debug Adapter which defines several DAP Breakpoint Modes (https://microsoft.github.io/debug-adapter-protocol//specification.html#Types_BreakpointMode) and I'm trying to set one of these modes from my extension integrating that adapter.
I can set the mode (i.e. see it in the DAP message received in my DA) but can't see the mode in the Breakpoints View.
I'm trying to create the breakpoint like this:

```ts
const bp = new vscode.SourceBreakpoint(...);
(bp as any).mode = "foo";
(bp as any).modeLabel = "Foo";
vscode.debug.addBreakpoints([bp]);
```
While the `mode` property is not exposed in the official extension API, I think it was intended to be supported, because the method implementing breakpoint creation from data passed to `addBreakpoints` does handle it (https://github.com/microsoft/vscode/blob/485a7ede3bb566ec275089ae51d0e38a3e7ef8f1/src/vs/workbench/api/browser/mainThreadDebugService.ts#L222). However, it only extracts the `mode` property, not `modeLabel`, so the BreakpointsView then can't render it (https://github.com/microsoft/vscode/blob/485a7ede3bb566ec275089ae51d0e38a3e7ef8f1/src/vs/workbench/contrib/debug/browser/breakpointsView.ts#L555).
So while I can technically set the mode from my extension, the user does not see which mode is set (unless they open the `Edit Breakpoint` "dialog", where it is shown correctly). This is not really usable: having the mode set without the user being able to see it would only cause confusion.

| feature-request,debug | low | Critical |
2,570,445,063 | react-native | [iOS][Old Arch][Codegen] - Codegen breaks pod install | ### Description
Hi everyone! 👋
We are currently upgrading our application **from version .73.8 to .75.4** and we've noticed something weird.
With the new version, if we run `pod install`, **_now Codegen gets executed differently than before._**
This would be fine if we actually enabled the new architecture, but we didn't, and **it breaks the pod install phase** because some libraries are compatible with the new architecture but present **issues in the Spec** files that Codegen refers to.
> So we can't install pods using the old architecture with libraries that present issues on Spec files because Codegen can't properly parse them (I guess)
I took the time to look into the changes between version .73.x and .75.x and **I've noticed that this commit** was included: [Defragment Codegen in OSS between Old and New Architecture](https://github.com/facebook/react-native/commit/1204696f08c91e17154d5b9c946d7fe8532510de) and **it changes the following file**: [codegen.rb](https://github.com/facebook/react-native/blob/main/packages/react-native/scripts/cocoapods/codegen.rb) by removing the `new_arch_enabled` check in the `run_codegen` function.
> Doing so means that if we run the same project on version .73 it works fine, but if we run it from version .74 onward it breaks on the old architecture with any library that has a Spec file that is broken
I don't have a complete overview of how Codegen gets executed, but I think we should revert this check for the sake of any project that can't migrate right away due to lock-ins, be they libraries or vendor code such as third-party SDKs.
### Steps to reproduce
1. Init a new project with `npx @react-native-community/cli@latest init RN75 --version 0.74 --pm npm`;
2. Install any library such as `npm install --save-dev react-native-webview`;
3. Break the Spec file in any way, in our case we can break: `react-native-webview`'s Spec file by adding a Tuple that isn't supported yet; (Example down below)
4. Run `pod install`;
5. See **Codegen** error;
Example:
```ts
// node_modules/react-native-webview/src/NativeRNCWebViewModule.ts
import type { TurboModule } from 'react-native';
import { TurboModuleRegistry } from 'react-native';
import { Double } from 'react-native/Libraries/Types/CodegenTypes';
export interface Spec extends TurboModule {
isFileUploadSupported(): Promise<boolean>;
shouldStartLoadWithLockIdentifier(
shouldStart: boolean,
lockIdentifier: [Double] // We change this type from Double to [Double]
): void;
}
export default TurboModuleRegistry.getEnforcing<Spec>('RNCWebViewModule');
```
### React Native Version
0.74.0+
### Affected Platforms
Build - MacOS
### Output of `npx react-native info`
```text
System:
OS: macOS 14.6
CPU: (8) arm64 Apple M3
Memory: 108.92 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 22.8.0
path: ~/.nvm/versions/node/v22.8.0/bin/node
Yarn:
version: 1.22.22
path: ~/.nvm/versions/node/v22.8.0/bin/yarn
npm:
version: 10.8.2
path: ~/.nvm/versions/node/v22.8.0/bin/npm
Watchman:
version: 2024.09.09.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /Users/user/.gem/ruby/3.3.0/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.5
- iOS 17.5
- macOS 14.5
- tvOS 17.5
- visionOS 1.2
- watchOS 10.5
Android SDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.18034.62.2412.12266719
Xcode:
version: 15.4/15F31d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.12
path: /usr/bin/javac
Ruby:
version: 3.3.5
path: /Users/user/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.2.0
wanted: 18.2.0
react-native:
installed: 0.74.5
wanted: 0.74.5
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: false
info React Native v0.75.4 is now available (your project is running on v0.74.5).
info Changelog: https://github.com/facebook/react-native/releases/tag/v0.75.4
info Diff: https://react-native-community.github.io/upgrade-helper/?from=0.74.5
info For more info, check out "https://reactnative.dev/docs/upgrading?os=macos".
```
### Stacktrace or Logs
```text
/Users/user/Documents/Projects/ReactNative/RN74/vendor/bundle/ruby/3.3.0/gems/concurrent-ruby-1.3.4/lib/concurrent-ruby/concurrent/concern/deprecation.rb:1: warning: logger was loaded from the standard library, but will no longer be part of the default gems starting from Ruby 3.5.0.
You can add logger to your Gemfile or gemspec to silence this warning.
/Users/user/Documents/Projects/ReactNative/RN74/vendor/bundle/ruby/3.3.0/gems/activesupport-7.0.8.4/lib/active_support/core_ext/big_decimal.rb:3: warning: bigdecimal was loaded from the standard library, but will no longer be part of the default gems starting from Ruby 3.4.0.
You can add bigdecimal to your Gemfile or gemspec to silence this warning.
/Users/user/Documents/Projects/ReactNative/RN74/vendor/bundle/ruby/3.3.0/gems/activesupport-7.0.8.4/lib/active_support/notifications.rb:4: warning: mutex_m was loaded from the standard library, but will no longer be part of the default gems starting from Ruby 3.4.0.
You can add mutex_m to your Gemfile or gemspec to silence this warning.
(node:64099) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
Auto-linking React Native module for target `RN74`: react-native-webview
Framework build type is static library
[Codegen] Adding script_phases to React-Codegen.
[Codegen] Generating ./build/generated/ios/React-Codegen.podspec.json
[!] Invalid `Podfile` file: [!] /Users/user/.nvm/versions/node/v22.8.0/bin/node ./../node_modules/react-native/scripts/generate-codegen-artifacts.js -p /Users/user/Documents/Projects/ReactNative/RN74/ios/.. -o /Users/user/Documents/Projects/ReactNative/RN74/ios -t ios
[Codegen] Analyzing /Users/user/Documents/Projects/ReactNative/RN74/package.json
[Codegen] Searching for codegen-enabled libraries in the app.
[Codegen] The "codegenConfig" field is not defined in package.json. Assuming there is nothing to generate at the app level.
[Codegen] Searching for codegen-enabled libraries in the project dependencies.
[Codegen] Found react-native
[Codegen] Found react-native-webview
[Codegen] >>>>> Searching for codegen-enabled libraries in react-native.config.js
[Codegen] Processing FBReactNativeSpec
[Codegen] Searching for podspec in the project dependencies.
[Codegen] Processing rncore
[Codegen] Searching for podspec in the project dependencies.
[Codegen] Processing RNCWebViewSpec
[Codegen] Searching for podspec in the project dependencies.
[Codegen] Supported Apple platforms: ios, macos, visionos for RNCWebViewSpec
[Codegen] Done.
UnsupportedTypeAnnotationParserError: Module NativeRNCWebViewModule: TypeScript type annotation 'TSTupleType' is unsupported in NativeModule specs.
at translateTypeAnnotation (/Users/user/Documents/Projects/ReactNative/RN74/node_modules/@react-native/codegen/lib/parsers/typescript/modules/index.js:376:15)
at /Users/user/Documents/Projects/ReactNative/RN74/node_modules/@react-native/codegen/lib/parsers/parsers-commons.js:373:11
at guard (/Users/user/Documents/Projects/ReactNative/RN74/node_modules/@react-native/codegen/lib/parsers/utils.js:26:14)
at translateFunctionTypeAnnotation (/Users/user/Documents/Projects/ReactNative/RN74/node_modules/@react-native/codegen/lib/parsers/parsers-commons.js:367:25)
at buildPropertySchema (/Users/user/Documents/Projects/ReactNative/RN74/node_modules/@react-native/codegen/lib/parsers/parsers-commons.js:484:7)
at /Users/user/Documents/Projects/ReactNative/RN74/node_modules/@react-native/codegen/lib/parsers/parsers-commons.js:705:24
at guard (/Users/user/Documents/Projects/ReactNative/RN74/node_modules/@react-native/codegen/lib/parsers/utils.js:26:14)
at /Users/user/Documents/Projects/ReactNative/RN74/node_modules/@react-native/codegen/lib/parsers/parsers-commons.js:702:14
at Array.map (<anonymous>)
at buildModuleSchema (/Users/user/Documents/Projects/ReactNative/RN74/node_modules/@react-native/codegen/lib/parsers/parsers-commons.js:699:6) {
nodes: [
Node {
type: 'TSTupleType',
start: 342,
end: 350,
loc: [SourceLocation],
elementTypes: [Array]
}
],
typeAnnotationType: 'TSTupleType'
}
.
# from /Users/user/Documents/Projects/ReactNative/RN74/ios/Podfile:20
# -------------------------------------------
#
> use_react_native!(
# :path => config[:reactNativePath],
# -------------------------------------------
[!] [Codegen] warn: using experimental new codegen integration
```
### Reproducer
https://github.com/gladiuscode/codegen-breaks-pod-install-on-old-arch
### Screenshots and Videos
_No response_ | Platform: iOS,Newer Patch Available | low | Critical |
2,570,454,040 | bitcoin | build: macOS fuzz instructions broken using latest macOS linker | Testing master at 62e4516722115c2d5aeb6c197abc73ca7c078b23 and the fuzzing.md instructions:
```bash
cmake --preset=libfuzzer \
-DCMAKE_C_COMPILER="$(brew --prefix llvm)/bin/clang" \
-DCMAKE_CXX_COMPILER="$(brew --prefix llvm)/bin/clang++" \
-DAPPEND_LDFLAGS=-Wl,-no_warn_duplicate_libraries
cmake --build build_fuzz
<snip>
[100%] Linking CXX executable fuzz
ld: multiple errors: invalid r_symbolnum=1 in '/Users/michael/fanquake-bitcoin/build_fuzz/src/test/fuzz/CMakeFiles/fuzz.dir/addition_overflow.cpp.o'; invalid r_symbolnum=1 in '/Users/michael/fanquake-bitcoin/build_fuzz/src/test/fuzz/CMakeFiles/fuzz.dir/fees.cpp.o'; invalid r_symbolnum=1 in '/Users/michael/fanquake-bitcoin/build_fuzz/src/test/fuzz/CMakeFiles/fuzz.dir/float.cpp.o'; invalid r_symbolnum=1 in '/Users/michael/fanquake-bitcoin/build_fuzz/src/test/fuzz/CMakeFiles/fuzz.dir/multiplication_overflow.cpp.o'; invalid r_symbolnum=1 in '../../libbitcoin_cli.a[2](stdin.cpp.o)'; invalid r_symbolnum=1 in '../../../libcrc32c.a[3](crc32c_portable.cc.o)'; invalid r_symbolnum=1 in '../../../libcrc32c_arm64.a[2](crc32c_arm64.cc.o)'; invalid r_symbolnum=1 in '../../libbitcoin_consensus.a[11](script_error.cpp.o)'; invalid r_symbolnum=1 in '../../crypto/libbitcoin_crypto.a[15](sha3.cpp.o)'; invalid r_symbolnum=5 in '../../util/libbitcoin_util.a[29](randomenv.cpp.o)'; invalid r_symbolnum=1 in '../../crypto/libbitcoin_crypto.a[10](poly1305.cpp.o)'; invalid r_symbolnum=18 in '../../crypto/libbitcoin_crypto_arm_shani.a[2](sha256_arm_shani.cpp.o)'; invalid r_symbolnum=1 in '../../crypto/libbitcoin_crypto.a[5](hex_base.cpp.o)'; invalid r_symbolnum=1 in '../../util/libbitcoin_util.a[27](logging.cpp.o)'; invalid r_symbolnum=1 in '../../util/libbitcoin_util.a[24](threadnames.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_consensus.a[5](hash.cpp.o)'; invalid r_symbolnum=1 in '../../../libleveldb.a[37](logging.cc.o)'; invalid r_symbolnum=1 in '../../../libleveldb.a[35](hash.cc.o)'; invalid r_symbolnum=1 in '../../../libleveldb.a[31](crc32c.cc.o)'; invalid r_symbolnum=1 in '../../../libleveldb.a[27](bloom.cc.o)'; invalid r_symbolnum=1 in '../../util/libbitcoin_util.a[16](serfloat.cpp.o)'; invalid r_symbolnum=1 in '../../util/libbitcoin_util.a[15](readwritefile.cpp.o)'; invalid r_symbolnum=1 in '../../util/libbitcoin_util.a[14](rbf.cpp.o)'; invalid r_symbolnum=1 in 
'../../libbitcoin_common.a[47](parsing.cpp.o)'; invalid r_symbolnum=1 in '../../util/libbitcoin_util.a[9](feefrac.cpp.o)'; invalid r_symbolnum=1 in '../../util/libbitcoin_util.a[6](chaintype.cpp.o)'; invalid r_symbolnum=1 in '../../../libcrc32c.a[2](crc32c.cc.o)'; invalid r_symbolnum=1 in '../../../libleveldb.a[8](filename.cc.o)'; invalid r_symbolnum=1 in '../../../libleveldb.a[2](builder.cc.o)'; invalid r_symbolnum=1 in '../util/libtest_util.a[12](str.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_common.a[42](request.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_common.a[41](rawtransaction_util.cpp.o)'; invalid r_symbolnum=1 in '../util/libtest_util.a[4](index.cpp.o)'; invalid r_symbolnum=1 in '../util/libtest_util.a[3](coins.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_node.a[84](torcontrol.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_node.a[79](server_util.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_common.a[30](merkleblock.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_common.a[29](key_io.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_common.a[24](deploymentinfo.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_common.a[21](compressor.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_common.a[20](url.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_common.a[16](run_command.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_node.a[68](pow.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_node.a[63](fees_args.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_node.a[55](psbt.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_node.a[54](peerman_args.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_node.a[51](miner.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_node.a[50](mempool_persist_args.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_node.a[43](database_args.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_node.a[41](connection_types.cpp.o)'; invalid r_symbolnum=1 in 
'../../libbitcoin_node.a[40](coins_view_args.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_node.a[39](coin.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_node.a[36](caches.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_node.a[34](blockmanager_args.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_node.a[31](net_processing.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_node.a[28](mempool_removal_reason.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_node.a[23](checks.cpp.o)'; invalid r_symbolnum=1 in '../../libbitcoin_node.a[22](chain.cpp.o)'
clang++: error: linker command failed with exit code 1 (use -v to see invocation)
gmake[2]: *** [src/test/fuzz/CMakeFiles/fuzz.dir/build.make:2186: src/test/fuzz/fuzz] Error 1
gmake[1]: *** [CMakeFiles/Makefile2:1722: src/test/fuzz/CMakeFiles/fuzz.dir/all] Error 2
```
I'm not sure when this broke, but my assumption is that it's an issue with a newer version of `ld`. Used here is:
```bash
ld -v
@(#)PROGRAM:ld PROJECT:ld-1115.7.3
BUILD 13:29:00 Aug 9 2024
configured to support archs: armv6 armv7 armv7s arm64 arm64e arm64_32 i386 x86_64 x86_64h armv6m armv7k armv7m armv7em
will use ld-classic for: armv6 armv7 armv7s i386 armv6m armv7k armv7m armv7em
LTO support using: LLVM version 16.0.0 (static support for 29, runtime is 29)
TAPI support using: Apple TAPI version 16.0.0 (tapi-1600.0.11.8)
```
I tried compiling with `-Wl,-ld_classic`, which should use the older version of the linker, but it hit an assertion:
```bash
[100%] Linking CXX executable fuzz
ld: warning: -ld_classic is deprecated and will be removed in a future release
0 0x100483ee4 __assert_rtn + 160
1 0x100485804 ld::tool::LinkEditAtom::size() const (.cold.1) + 0
2 0x10035c200 ld::tool::OutputFile::addressOf(ld::Internal const&, ld::Fixup const*, ld::Atom const**) + 244
3 0x10036b58c ld::tool::OutputFile::buildChainedFixupInfo(ld::Internal&) + 1196
4 0x1003702f4 ___ZN2ld4tool10OutputFile20buildLINKEDITContentERNS_8InternalE_block_invoke.408 + 28
5 0x191fc28f8 _dispatch_call_block_and_release + 32
6 0x191fc4658 _dispatch_client_callout + 20
7 0x191fd6570 _dispatch_root_queue_drain + 996
8 0x191fd6b20 _dispatch_worker_thread2 + 156
9 0x19217339c _pthread_wqthread + 228
A linker snapshot was created at:
/tmp/fuzz-2024-10-07-142750.ld-snapshot
ld: Assertion failed: (_mode == modeFinalAddress), function finalAddress, file ld.hpp, line 1462.
clang++: error: linker command failed with exit code 1 (use -v to see invocation)
``` | macOS,Build system,Tests | low | Critical |
2,570,522,074 | terminal | Spurious OSC 11 when using tmux over Windows' SSH | ### Windows Terminal version
1.23.2771.0
### Windows build number
10.0.26100.0
### Other Software
* OpenSSH_for_Windows_9.5p1, LibreSSL 3.8.2
* tmux 3.5a
### Steps to reproduce
* Either
  * ssh to a Linux box
  * ssh to WSL (you can get the WSL2 VM IP by running e.g. `ip a` inside WSL)
* Run tmux
### Expected Behavior
```
Welcome to fish, the friendly interactive shell
Type help for instructions on how to use fish
lhecker@server ~>
```
### Actual Behavior
```
Welcome to fish, the friendly interactive shell
Type help for instructions on how to use fish
lhecker@server ~> ]11;rgb:0000/0000/0000\
```
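For context on the leaked text: OSC 11 is the terminal's background-color query, and the junk shown above looks like the reply with its escape bytes stripped. A small sketch (hypothetical color value, just to illustrate the byte sequence):

```python
ESC = "\x1b"

# A terminal answers an OSC 11 background-color query with a reply of the form
# ESC ] 11 ; rgb:RRRR/GGGG/BBBB ESC \  (OSC ... ST):
osc11_reply = f"{ESC}]11;rgb:0000/0000/0000{ESC}\\"

# If the ESC bytes get swallowed somewhere along the ssh/tmux path, what
# remains is exactly the junk printed at the prompt above:
visible_junk = osc11_reply.replace(ESC, "")
assert visible_junk == "]11;rgb:0000/0000/0000\\"
print(visible_junk)
```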
The issue does not occur when ssh'ing to localhost inside WSL. | Area-VT,Issue-Bug,Product-Terminal | low | Minor |
2,570,554,051 | rust | Tracking Issue for Ipv[46]Address::from_octets, Ipv6Address::from_segments |
Feature gate: `#![feature(ip_from)]`
This is a tracking issue for `core::net::Ipv[46]Addr::from_octets`, `core::net::Ipv6Addr::from_segments`.
### Public API
```rust
// core::net
impl Ipv4Addr {
pub const fn from_octets(octets: [u8; 4]) -> Ipv4Addr;
}
impl Ipv6Addr {
pub const fn from_octets(octets: [u8; 16]) -> Ipv6Addr;
pub const fn from_segments(segments: [u16; 8]) -> Ipv6Addr;
}
```
### Steps / History
- [x] API Change Proposal (ACP) https://github.com/rust-lang/libs-team/issues/447
- [x] Implementation: #130629
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
### Unresolved Questions
- None yet.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,C-tracking-issue | low | Minor |
2,570,555,343 | PowerToys | The Powershell 7.4 installer provided by Command Not Found is useless and causes the window to be unresponsive | ### Microsoft PowerToys version
0.85.0
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Command not found
### Steps to reproduce
Go to Settings, select Command Not Found, and click Install PowerShell 7.4 or later. A PowerShell window pops up, the original window becomes unresponsive, and the installation never completes.
### ✔️ Expected Behavior
The command and its execution result are displayed in the PowerShell window, and the original window stays responsive.
### ❌ Actual Behavior
Nothing happens: just a blue box with no text, and the original window stops responding.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,570,701,372 | next.js | Static Pages with dynamicParams Not Streaming & PPR Prefetch Not Caching Static Pages Properly | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/static-page-render-zl2634
### To Reproduce
1) Create a `[slug]/page.jsx` route (already done in the reproduction repo).
2) Make a link for it (already done in the reproduction repo).
3) `npm run build` -> `npm run start`
`[slug]/page.jsx`
```jsx
import { Suspense } from 'react';
export const dynamic = 'force-static';
export const dynamicParams = true;
export const revalidate = 20;
export default async function SuspenseTest(props) {
const slug = (await props.params).slug;
return (
<div>
<h1>This is a static content</h1>
<Suspense fallback="Loading...">
<LongRunning slug={slug}/>
</Suspense>
</div>
);
}
async function LongRunning(props) {
await new Promise((resolve) => setTimeout(resolve, 5000));
return <span>Success! ({props.slug})</span>
}
```
In another route, e.g. `app/page.jsx`:
```jsx
import Link from "next/link";
import styles from "./page.module.css";
export default function Home() {
return (
<div className={styles.page}>
<main className={styles.main}>
<Link href="/suspenseTest" prefetch={false}>
SuspenseTest
</Link>
</main>
</div>
);
}
```
### Current vs. Expected behavior
If `experimental.ppr` is set to `false`:
**Expected:** When clicking the link, the static part of the page should load instantly, and the Suspense-wrapped part should load via streaming. The result should then be cached the standard way (ISR, Stale-While-Revalidate).
**Actual:** When clicking the link, the page doesn't load until the entire SSG rendering is complete, causing a 5-second delay before anything happens.
If `experimental.ppr` is set to `true`:
**Expected:** When clicking the link, the static part of the page should load instantly, followed by the Suspense-wrapped part via streaming. The rendered result should then be cached so that the page loads instantly without any loading indicator whenever I open it again (via reload, pasting the URL, or client-side navigation). Once revalidation is due, the standard ISR flow happens (Stale-While-Revalidate).
**Actual:** When clicking the link, the page loads with streaming, but the rendered result is only cached on the client side. On reload, the page is re-rendered via SSG (resulting in a 5-second delay) and only loads once rendering is complete. After the SSG render finishes, reloading the page does load it instantly. However, navigating to the page again without a client cache streams it once more instead of reusing the ISR cache.
### Provide environment information
```bash
Operating System:
Platform: PopOs Linux
Binaries:
Node: 20.17.0
npm: 10.8.2
Relevant Packages:
next: 15.0.0-canary.179
react: ^18
react-dom: ^18
```
### Which area(s) are affected? (Select all that apply)
Navigation, Partial Prerendering (PPR)
### Which stage(s) are affected? (Select all that apply)
next start (local)
### Additional context
This also occurs when `Link` has `prefetch={true}`, resulting in only client-side caching.
With the environment variable `NEXT_PRIVATE_DEBUG_CACHE=1` enabled
The cache is only utilized when visiting the page directly. During client-side navigation, the following log is not generated:
```
using filesystem cache handler
get /suspenseTest undefined APP_PAGE false
set /suspenseTest
``` | bug,Navigation,Partial Prerendering (PPR) | low | Critical |
2,570,719,845 | bitcoin | macOS 13.7 depends build can't find qt (symlink) | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current behaviour
```
cd depends
make NO_BDB=1
...
copying packages: boost libevent qt qrencode sqlite miniupnpc zeromq
to: /Volumes/SSD/Dev/bitcoin/depends/x86_64-apple-darwin22.6.0
cd ..
cmake -B build --toolchain depends/x86_64-apple-darwin22.6.0/toolchain.cmake
```
This fails, see below
(building BDB hasn't worked for me for a while, but that's going away anyway)
### Expected behaviour
To build
### Steps to reproduce
.
### Relevant log output
```
xcode-select -p
/Applications/Xcode15.2.app/Contents/Developer
```
```
-- The CXX compiler identification is AppleClang 15.0.0.15000100
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /Applications/Xcode15.2.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found SQLite3: /Volumes/SSD/Dev/bitcoin/depends/x86_64-apple-darwin22.6.0/include (found suitable version "3.38.5", minimum required is "3.7.17")
-- Found PkgConfig: /usr/local/bin/pkg-config (found version "0.29.2")
-- Found MiniUPnPc: /Volumes/SSD/Dev/bitcoin/depends/x86_64-apple-darwin22.6.0/lib/libminiupnpc.a
-- Checking for module 'libzmq>=4'
-- Found libzmq, version 4.3.5
-- Checking for module 'libqrencode'
-- Found libqrencode, version 4.1.1
-- Could NOT find Qt5Core (missing: Qt5Core_DIR)
-- Could NOT find Qt5Gui (missing: Qt5Gui_DIR)
-- Could NOT find Qt5Widgets (missing: Qt5Widgets_DIR)
-- Could NOT find Qt5LinguistTools (missing: Qt5LinguistTools_DIR)
-- Could NOT find Qt5Network (missing: Qt5Network_DIR)
-- Could NOT find Qt5Test (missing: Qt5Test_DIR)
CMake Error at depends/x86_64-apple-darwin22.6.0/lib/cmake/Qt5/Qt5Config.cmake:51 (_qt5_Core_check_file_exists):
Unknown CMake command "_qt5_Core_check_file_exists".
Call Stack (most recent call first):
cmake/module/FindQt.cmake:43 (find_package)
CMakeLists.txt:180 (find_package)
CMake Warning at cmake/module/FindQt.cmake:43 (find_package):
Found package configuration file:
/Volumes/SSD/Dev/bitcoin/depends/x86_64-apple-darwin22.6.0/lib/cmake/Qt5/Qt5Config.cmake
but it set Qt5_FOUND to FALSE so package "Qt5" is considered to be NOT
FOUND. Reason given by package:
Failed to find Qt5 component "Core" config file at
"/Users/sjors/dev/bitcoin/depends/x86_64-apple-darwin22.6.0/lib/cmake/Qt5Core/Qt5CoreConfig.cmake"
Failed to find Qt5 component "Gui" config file at
"/Users/sjors/dev/bitcoin/depends/x86_64-apple-darwin22.6.0/lib/cmake/Qt5Gui/Qt5GuiConfig.cmake"
Failed to find Qt5 component "Widgets" config file at
"/Users/sjors/dev/bitcoin/depends/x86_64-apple-darwin22.6.0/lib/cmake/Qt5Widgets/Qt5WidgetsConfig.cmake"
Failed to find Qt5 component "LinguistTools" config file at
"/Users/sjors/dev/bitcoin/depends/x86_64-apple-darwin22.6.0/lib/cmake/Qt5LinguistTools/Qt5LinguistToolsConfig.cmake"
Failed to find Qt5 component "Network" config file at
"/Users/sjors/dev/bitcoin/depends/x86_64-apple-darwin22.6.0/lib/cmake/Qt5Network/Qt5NetworkConfig.cmake"
Failed to find Qt5 component "Test" config file at
"/Users/sjors/dev/bitcoin/depends/x86_64-apple-darwin22.6.0/lib/cmake/Qt5Test/Qt5TestConfig.cmake"
Call Stack (most recent call first):
CMakeLists.txt:180 (find_package)
```
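One clue in the log: the "Failed to find" paths are under `/Users/sjors/dev/...` while the depends tree actually lives at `/Volumes/SSD/Dev/...`, consistent with stale absolute paths baked into the Qt CMake config files through the symlink mentioned in the title. A quick diagnostic sketch (my own helper, not part of the build system) to spot such hardcoded prefixes:

```python
import re
from pathlib import Path

def find_stale_prefixes(cmake_dir, expected_prefix):
    """List (file, path) pairs where a .cmake file hardcodes an absolute
    path outside the current depends prefix (diagnostic sketch only)."""
    hits = []
    pattern = re.compile(r'"(/[^"]+)"')  # quoted absolute paths
    for f in Path(cmake_dir).rglob("*.cmake"):
        for match in pattern.findall(f.read_text(errors="ignore")):
            if not match.startswith(expected_prefix):
                hits.append((f.name, match))
    return hits
```

Running this over `depends/x86_64-apple-darwin22.6.0/lib/cmake` with the real prefix would confirm whether the Qt config files were generated against the symlinked path.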
### How did you obtain Bitcoin Core
Compiled from source
### What version of Bitcoin Core are you using?
PR 31048 which builds on master @ 05d25304bc4e0c3058c8ee8a89448ce63ac77304
### Operating system and version
macOS 13.7
### Machine specifications
_No response_ | macOS,Build system | medium | Critical |
2,570,721,647 | rust | Rustc tries to link the Linux ASAN runtime when linker flavour is MSVC+LLD | When linking with `lld-link`, which is the MSVC-flavour linker with the LLD subtype, the ASAN runtime selection falls through to the Linux branch and tries to link the non-existent `librustc-dev_rt.asan.a` library while targeting Windows.
A repro can be seen on Linux, cross-compiling for Windows. It needs a Windows sysroot and a clang/lld build:
```
% cargo init
Creating binary (application) package
note: see more `Cargo.toml` keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
```
```
% RUSTFLAGS="-Zsanitizer=address -Clinker=../c/src/third_party/llvm-build/Release+Asserts/bin/lld-link -Clink-arg=/libpath:../c/src/third_party/llvm-build/Release+Asserts/lib/clang/20/lib/windows -Clink-arg=/winsysroot:../c/src/third_party/depot_tools/win_toolchain/vs_files/7393122652" cargo build -Zbuild-std --target x86_64-pc-windows-msvc
Compiling compiler_builtins v0.1.109
Compiling core v0.0.0 (/home/danakj/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core)
Compiling std v0.0.0 (/home/danakj/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/std)
Compiling rustc-std-workspace-core v1.99.0 (/home/danakj/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/rustc-std-workspace-core)
Compiling alloc v0.0.0 (/home/danakj/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc)
Compiling cfg-if v1.0.0
Compiling rustc-demangle v0.1.24
Compiling unwind v0.0.0 (/home/danakj/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/unwind)
Compiling rustc-std-workspace-alloc v1.99.0 (/home/danakj/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/rustc-std-workspace-alloc)
Compiling panic_unwind v0.0.0 (/home/danakj/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/panic_unwind)
Compiling panic_abort v0.0.0 (/home/danakj/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/panic_abort)
Compiling hashbrown v0.14.5
Compiling std_detect v0.1.5 (/home/danakj/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/stdarch/crates/std_detect)
Compiling proc_macro v0.0.0 (/home/danakj/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/proc_macro)
Compiling win-asan-repro v0.1.0 (/home/danakj/s/win-asan-repro)
error: linking with `../c/src/third_party/llvm-build/Release+Asserts/bin/lld-link` failed: exit status: 1
|
= note: LC_ALL="C" PATH="/home/danakj/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/bin:/home/danakj/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/bin:/home/danakj/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/bin:/home/danakj/local/bin:/home/danakj/s/gsutil:/home/danakj/s/depot_tools:/home/danakj/s/ninja:/home/danakj/s/arcanist/bin:/home/danakj/s/brew/bin:/opt/firefox:/home/danakj/.cargo/bin:/bin:/usr/bin:/home/danakj/s/quickopen:/home/danakj/.cargo/bin" VSLANG="1033" "../c/src/third_party/llvm-build/Release+Asserts/bin/lld-link" "-flavor" "link" "/NOLOGO" "/tmp/rustcCJDwzU/symbols.o" "/WHOLEARCHIVE:/home/danakj/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-pc-windows-msvc/lib/librustc-nightly_rt.asan.a" "/home/danakj/s/win-asan-repro/target/x86_64-pc-windows-msvc/debug/deps/win_asan_repro.0tdtfeujdyhjtaeasbm0d4fkx.rcgu.o" "/home/danakj/s/win-asan-repro/target/x86_64-pc-windows-msvc/debug/deps/win_asan_repro.199i99wnv1nalvuu3t1v5kp3t.rcgu.o" "/home/danakj/s/win-asan-repro/target/x86_64-pc-windows-msvc/debug/deps/win_asan_repro.20h9jqk82sswuuobak2qzucv5.rcgu.o" "/home/danakj/s/win-asan-repro/target/x86_64-pc-windows-msvc/debug/deps/win_asan_repro.3t56hhym87ffrajopna60eke0.rcgu.o" "/home/danakj/s/win-asan-repro/target/x86_64-pc-windows-msvc/debug/deps/win_asan_repro.6vz5im55mbl3ovk6o6y9izcq6.rcgu.o" "/home/danakj/s/win-asan-repro/target/x86_64-pc-windows-msvc/debug/deps/win_asan_repro.8i9ptnb83exgtut8kf28gftgy.rcgu.o" "/home/danakj/s/win-asan-repro/target/x86_64-pc-windows-msvc/debug/deps/win_asan_repro.0vi8a9ni8tzhrhb9j22qz2uyj.rcgu.o" "/LIBPATH:/home/danakj/s/win-asan-repro/target/x86_64-pc-windows-msvc/debug/deps" "/LIBPATH:/home/danakj/s/win-asan-repro/target/debug/deps" "/LIBPATH:/home/danakj/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-pc-windows-msvc/lib" 
"/home/danakj/s/win-asan-repro/target/x86_64-pc-windows-msvc/debug/deps/libstd-34f4d1dc3afbf24b.rlib" "/home/danakj/s/win-asan-repro/target/x86_64-pc-windows-msvc/debug/deps/libpanic_unwind-edca54b671061ee7.rlib" "/home/danakj/s/win-asan-repro/target/x86_64-pc-windows-msvc/debug/deps/librustc_demangle-05840c8895460fd0.rlib" "/home/danakj/s/win-asan-repro/target/x86_64-pc-windows-msvc/debug/deps/libstd_detect-fb81dcb01cb8e1fb.rlib" "/home/danakj/s/win-asan-repro/target/x86_64-pc-windows-msvc/debug/deps/libhashbrown-2af4722702436af9.rlib" "/home/danakj/s/win-asan-repro/target/x86_64-pc-windows-msvc/debug/deps/librustc_std_workspace_alloc-0c0095e7aff20699.rlib" "/home/danakj/s/win-asan-repro/target/x86_64-pc-windows-msvc/debug/deps/libunwind-dc807fd73b64a6ed.rlib" "/home/danakj/s/win-asan-repro/target/x86_64-pc-windows-msvc/debug/deps/libcfg_if-300e1ea37d4d7a24.rlib" "/home/danakj/s/win-asan-repro/target/x86_64-pc-windows-msvc/debug/deps/liballoc-e1954f2d1c71f747.rlib" "/home/danakj/s/win-asan-repro/target/x86_64-pc-windows-msvc/debug/deps/librustc_std_workspace_core-a5e34549c1c04a14.rlib" "/home/danakj/s/win-asan-repro/target/x86_64-pc-windows-msvc/debug/deps/libcore-259c5404709a9645.rlib" "/home/danakj/s/win-asan-repro/target/x86_64-pc-windows-msvc/debug/deps/libcompiler_builtins-dda445e51fa2ca93.rlib" "kernel32.lib" "advapi32.lib" "kernel32.lib" "ntdll.lib" "userenv.lib" "ws2_32.lib" "kernel32.lib" "ws2_32.lib" "kernel32.lib" "/defaultlib:msvcrt" "/NXCOMPAT" "/LIBPATH:/home/danakj/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-pc-windows-msvc/lib" "/OUT:/home/danakj/s/win-asan-repro/target/x86_64-pc-windows-msvc/debug/deps/win_asan_repro.exe" "/OPT:REF,NOICF" "/DEBUG" "/PDBALTPATH:%_PDB%" "/libpath:../c/src/third_party/llvm-build/Release+Asserts/lib/clang/20/lib/windows" "/winsysroot:../c/src/third_party/depot_tools/win_toolchain/vs_files/7393122652"
= note: lld-link: error: could not open '/home/danakj/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-pc-windows-msvc/lib/librustc-nightly_rt.asan.a': No such file or directory
```
The relevant parts of the command line are:
* Rustc flags: `-Zsanitizer=address -Clinker=../c/src/third_party/llvm-build/Release+Asserts/bin/lld-link"`
* Cargo flags: `--target x86_64-pc-windows-msvc`
The library choice is made here: https://github.com/rust-lang/rust/blob/0b16baa570d26224612ea27f76d68e4c6ca135cc/compiler/rustc_codegen_ssa/src/back/link.rs#L1349-L1351
The branch above would normally be used for the MSVC flavour, but it is not used for LLD, so the logic falls through to the else branch, which is for Linux.
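The fall-through can be modeled outside rustc; this Python sketch of the selection logic (function names, flavor strings, and filenames are illustrative, not rustc's actual types) shows why an MSVC+LLD flavor ends up with the Unix-style archive name:

```python
def asan_runtime_name(channel, target_env, linker_flavor):
    """Model of the sanitizer-runtime filename choice (illustrative only).

    The buggy check matches only the plain MSVC flavor string, so the
    MSVC+LLD flavor falls through to the Unix-style branch below.
    """
    if target_env == "msvc" and linker_flavor == "msvc":  # misses "msvc-lld"
        return f"rustc-{channel}_rt.asan.lib"
    # Unix-style fallback, wrong for any Windows/MSVC target
    return f"librustc-{channel}_rt.asan.a"
```

A fix along these lines would match on the flavor family (any MSVC-style linker, including LLD) rather than the exact variant.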
Edited to hide the sysroot stuff, this seems okay and unrelated:
<details>
## --sysroot
The code here is _also ignoring `--sysroot`_. If we include a bogus but relative path `--sysroot=a/b/c` in `RUSTFLAGS`, the ASAN library should be specified as `a/b/c/lib/rustlib/x86_64-pc-windows-msvc/lib/`, but it always uses the absolute path to the default sysroot.
</details>
Chromium tracking issue: crbug.com/371512562
cc: @rcvalle | T-compiler,O-windows-msvc,A-sanitizers,C-bug | low | Critical |
2,570,734,306 | rust | `unnameable_test_items` does not detect tests in inner modules | ### Code
```rust
fn _outer_function() {
mod tests {
#[test]
fn inner_test() {}
}
}
```
### Current output
```text
warning: function `inner_test` is never used
--> src/lib.rs:4:12
|
4 | fn inner_test() {}
| ^^^^^^^^^^
|
= note: `#[warn(dead_code)]` on by default
```
### Desired output
```text
warning: cannot test inner items
--> src/lib.rs:3:9
|
3 | #[test]
| ^^^^^^^
|
= note: `#[warn(unnameable_test_items)]` on by default
= note: this warning originates in the attribute macro `test` (in Nightly builds, run with -Z macro-backtrace for more info)
```
### Rationale and extra context
The desired warning is produced if `mod tests` is removed so that the `inner_test()` is directly in the function. The warning should be produced in both cases, since both cases are tests that will not be run.
The warning was last changed in #114414.
### Rust Version
Stable 1.81.0 and nightly 1.83.0-nightly (2024-10-05 9096f4fafa2ac2d771f8)
| A-lints,T-compiler,C-bug,L-unnameable_test_items | low | Minor |
2,570,748,341 | next.js | tsconfig > compilerOptions.paths > `${configDir}` template variable fails the app load and build | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/next-tsconfig-configdir-l7jgmy?file=%2Ftsconfig.json%3A23%2C15
### To Reproduce
1. Fork the sandbox
1. Run `pnpm install`
1. Run `pnpm dev`
1. The app fails to load due to import path alias issue
### Current vs. Expected behavior
## Current
TypeScript has added the [${configDir} Template Variable for Configuration Files](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-5-5.html#the-configdir-template-variable-for-configuration-files) in TypeScript 5.5.
This template variable is essential for creating extendable configs, e.g.:
https://github.com/alexilyaev/configs/blob/main/tsconfig/next.json
But it doesn't work in Next.js when used in `tsconfig.json` > `compilerOptions.paths` values.
We can verify that the config works because navigating imports that use the path aliases works as expected.
Also running `pnpm tsc --noEmit` doesn't fail (`pnpm` prefix to run the project TypeScript version and not the global one).
## Expected
Using `${configDir}` Template Variable in `tsconfig.json` should not fail the Next.js app load and build.
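For reference, TypeScript documents `${configDir}` as expanding to the directory containing the final resolved `tsconfig.json`; the substitution Next.js would need to apply is roughly this (my own sketch, not TypeScript's or Next.js's code):

```python
import posixpath

def expand_config_dir(value, config_path):
    """Expand TypeScript 5.5's ${configDir} template variable to the
    directory of the resolved tsconfig.json (sketch of the documented rule)."""
    return value.replace("${configDir}", posixpath.dirname(config_path))
```

Crucially, the variable resolves relative to the *extending* config's location, which is what makes shared base configs portable across repos.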
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Sun Aug 6 20:05:33 UTC 2023
Available memory (MB): 4102
Available CPU cores: 2
Binaries:
Node: 20.9.0
npm: 9.8.1
Yarn: 1.22.19
pnpm: 8.10.2
Relevant Packages:
next: 15.0.0-canary.179 // Latest available version is detected (15.0.0-canary.179).
eslint-config-next: N/A
react: 19.0.0-rc-2d16326d-20240930
react-dom: 19.0.0-rc-2d16326d-20240930
typescript: 5.6.2
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Developer Experience, Module Resolution, TypeScript
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local)
### Additional context
I've encountered this issue when I tried to move my `tsconfig.json` settings to an external repo and use it in my repo via `extends` in `tsconfig.json`.
On the TypeScript side, everything worked. But Next.js failed to build. | bug,TypeScript,Module Resolution | low | Critical |
2,570,813,785 | ui | [bug]: Calendar component Icon's props are not used | ### Describe the bug
```js
components={{
IconLeft: ({ ...props }) => <ChevronLeft className="h-4 w-4" />,
IconRight: ({ ...props }) => <ChevronRight className="h-4 w-4" />,
}}
```
So the props that are defined in the `Calendar` component for `IconLeft` and `IconRight` are not used.
### Affected component/components
Calendar
### How to reproduce
Install the calendar component from shadcn ui
### How to fix this
Go to `apps/www/registry/default/ui/calendar.tsx` & `apps/www/registry/new-york/ui/calendar.tsx` and make the following changes (use the props):
```js
components={{
IconLeft: ({ ...props }) => (
<ChevronLeft className="h-4 w-4" {...props} />
),
IconRight: ({ ...props }) => (
<ChevronRight className="h-4 w-4" {...props} />
),
}}
```
### System Info
```bash
Chrome Browser, Mac OS
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,570,815,471 | godot | "--doctool .. --gdextension-docs" randomly deletes documentation | ### Tested versions
Tried in 4.3-stable, 4.4-dev, and [master](https://github.com/godotengine/godot/tree/db66bd35af704fe0d83ba9348b8c50a48e51b2ba)
### System information
Godot v4.4.dev3 - Debian GNU/Linux trixie/sid trixie on X11 - X11 display driver, Multi-window, 1 monitor - OpenGL 3 (Compatibility) - Mesa Intel(R) Graphics (RPL-P) - 13th Gen Intel(R) Core(TM) i7-1360P (16 threads)
### Issue description
I have a script that (as part of my gdextension build) runs these commands on a project that uses the gdextension in some trivial way (calling `new` on a class, for example).
```bash
godot --headless --import
godot --doctool .. --gdextension-docs
```
Occasionally (or frequently, depending on the complexity of the extension), it will delete all of the `doc_classes/*.xml` files and then exit with success (code 0).
I suspect this is something threading-related based on this crash I managed to catch on one attempt during the import step:

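Until the underlying race is fixed, one workaround I'd consider is snapshotting `doc_classes` before the doctool run and restoring it when the run wipes the files; a rough sketch (the runner callable stands in for invoking Godot, and the paths are illustrative):

```python
import shutil
from pathlib import Path

def run_doctool_safely(doc_dir, run_doctool):
    """Back up doc_dir, invoke the doctool runner, and restore the backup
    if the run deleted every XML file (workaround sketch, not a fix)."""
    doc_dir = Path(doc_dir)
    backup = doc_dir.with_suffix(".bak")
    if backup.exists():
        shutil.rmtree(backup)
    shutil.copytree(doc_dir, backup)
    run_doctool()  # e.g. godot --doctool .. --gdextension-docs
    if not any(doc_dir.glob("*.xml")):  # doctool nuked the docs
        shutil.rmtree(doc_dir, ignore_errors=True)
        shutil.copytree(backup, doc_dir)
        return False  # restored from backup
    return True
```

Returning `False` also gives a build script a signal to retry steps 3 and 4.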
### Steps to reproduce
1. Compile the MRP (`scons`)
2. Change directory to the demo project (`cd demo`)
3. `godot --headless --import` (to initialize the documentation)
4. `godot --doctool .. --gdextension-docs` (to update (or destroy due to this bug) the documentation files)
Steps 3 and 4 may need to be repeated.
### Minimal reproduction project (MRP)
This project is fairly complicated and exhibits the bug more often than it does not:
https://github.com/BenLubar/godot4-spy-cards-online/tree/49d494266edea28649e714d021af4a3d283ae6d2
This one is much simpler, only defining a single singleton class, but it also exhibits the bug occasionally: https://github.com/BenLubar/godot4-opus/tree/472984098c55d3a0f0ba21d3cc926c441b636f04
I suspect that the demo gdextensions in godot-cpp and godot-cpp-template will also exhibit this behavior. | bug,topic:editor,topic:gdextension | low | Critical |
2,570,863,556 | rust | Misspelled module names should look for similar modules | ### Code
use std::collection::HashMap;
### Current output
```
error[E0432]: unresolved import `std::collection`
--> src/main.rs:1:10
|
1 | use std::collection::HashMap;
| ^^^^^^^^^^ could not find `collection` in `std`
```
### Desired output
rustc should do the same similarity search it currently does for other identifiers, to be able to say "did you mean `std::collections::HashMap`".
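The suggestion machinery is essentially an edit-distance search over candidate names; a rough Python sketch of the idea (illustrative only, not rustc's actual implementation):

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def suggest(name, candidates, max_dist=3):
    """Return the closest candidate within max_dist edits, if any."""
    best = min(candidates, key=lambda c: edit_distance(name, c))
    return best if edit_distance(name, best) <= max_dist else None
```

Here the candidate set would be the child modules of `std`, so `collection` is one edit away from `collections` and easily within a reasonable threshold.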
### Rationale and extra context
_No response_
### Other cases
_No response_
### Rust Version
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-unknown-linux-gnu
release: 1.81.0
LLVM version: 18.1.7
### Anything else?
_No response_ | A-diagnostics,T-compiler,A-suggestion-diagnostics | low | Critical |
2,570,872,120 | ui | [feat]: Example of custom registry usage | ### Feature description
Hey, can we get an example or some documentation on how to use a custom registry with the CLI to install components from a custom design system?
### Affected component/components
_No response_
### Additional Context
Inspired by Jack Herrington's latest video: https://www.youtube.com/watch?v=NP2ULDZxpd0
Basically, he created a tool that semi-automatically creates a registry of components based on a git repo history
I would like to know if we can have an official way of doing this, I'm pretty sure a LOT of people would benefit from this.
I believe it would be similar to what is being done on v0 with the generated components.
Let me know if I can help on this!
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,570,901,934 | deno | `--watch` with `deno serve --parallel` works inconsistently | ```typescript
// mod.ts
export default {
fetch() {
return new Response("Hello 1\n");
},
} satisfies Deno.ServeDefaultExport;
```
Start the script with `deno serve --parallel --watch mod.ts`, and then edit the response text to `"Hello 2\n"`.
If I now make a bunch of requests I get a mix of the old and new responses (tested in both Firefox and curl).
If I use `DENO_JOBS=1` it works correctly. Strangely, if the value is any higher, for example 2, it will serve more than 2 different responses (if enough edits were made to serve).
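A hypothetical helper for quantifying the mix (the fetch function is injected, so it could wrap curl, `urllib`, or anything else that returns the response body):

```python
def distinct_bodies(fetch, n=20):
    """Issue n requests and return the set of distinct response bodies.

    After a watch reload, a correctly restarted --parallel server should
    yield exactly one body; stale workers show up as extra entries.
    """
    return {fetch() for _ in range(n)}
```

With the bug, the returned set contains both `"Hello 1\n"` and `"Hello 2\n"` after a single edit.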
```
deno 1.46.3 (stable, release, x86_64-unknown-linux-gnu)
v8 12.9.202.5-rusty
typescript 5.5.2
```
| bug,--watch,serve | low | Minor |
2,570,913,161 | langchain | Milvus - illegal connection params or server unavailable | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
vector_db = Milvus(
embeddings,
connection_args={"uri": URI},
collection_name="langchain_example",
)
### Error Message and Stack Trace (if applicable)
2024-10-07 12:06:00 vectorDB = Milvus(
2024-10-07 12:06:00 File "/usr/local/lib/python3.10/site-packages/langchain_milvus/vectorstores/milvus.py", line 326, in __init__
2024-10-07 12:06:00 self.alias = self._create_connection_alias(connection_args)
2024-10-07 12:06:00 File "/usr/local/lib/python3.10/site-packages/langchain_milvus/vectorstores/milvus.py", line 410, in _create_connection_alias
2024-10-07 12:06:00 raise e
2024-10-07 12:06:00 File "/usr/local/lib/python3.10/site-packages/langchain_milvus/vectorstores/milvus.py", line 405, in _create_connection_alias
2024-10-07 12:06:00 connections.connect(alias=alias, **connection_args)
2024-10-07 12:06:00 File "/usr/local/lib/python3.10/site-packages/pymilvus/orm/connections.py", line 449, in connect
2024-10-07 12:06:00 connect_milvus(**kwargs, user=user, password=password, token=token, db_name=db_name)
2024-10-07 12:06:00 File "/usr/local/lib/python3.10/site-packages/pymilvus/orm/connections.py", line 400, in connect_milvus
2024-10-07 12:06:00 gh._wait_for_channel_ready(timeout=timeout)
2024-10-07 12:06:00 File "/usr/local/lib/python3.10/site-packages/pymilvus/client/grpc_handler.py", line 150, in _wait_for_channel_ready
2024-10-07 12:06:00 raise MilvusException(
2024-10-07 12:06:00 pymilvus.exceptions.MilvusException: <MilvusException: (code=2, message=Fail connecting to server on 76.208.102.187:19530, illegal connection params or server unavailable)>
### Description
I am using langchain-milvus 0.1.5, and the connection attempt is failing.
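Since pymilvus reports "illegal connection params" for malformed URIs as well as unreachable servers, a first step could be sanity-checking the URI before it ever reaches `connections.connect`. A rough pre-flight check (my own helper, assuming a server URI rather than a Milvus Lite local file path):

```python
from urllib.parse import urlparse

def check_milvus_uri(uri):
    """Rough sanity check of a Milvus connection URI before connecting
    (my own pre-flight helper, not part of pymilvus or langchain-milvus)."""
    parsed = urlparse(uri)
    problems = []
    if parsed.scheme not in ("http", "https", "tcp", "unix"):
        problems.append(f"unexpected scheme {parsed.scheme!r}")
    if not parsed.hostname:
        problems.append("missing host")
    if parsed.scheme in ("http", "https", "tcp") and not parsed.port:
        problems.append("missing port (Milvus standalone default is 19530)")
    return problems
```

If the URI passes this check, the remaining suspects are network reachability and the Milvus server itself (the error message lumps both cases together).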
### System Info
langchain==0.3.0
langchain-core==0.3.5
langchain-milvus==0.1.5 | Ɑ: vector store | low | Critical |
2,570,913,882 | tensorflow | Failed to build `tensorflow_cc` in Windows when linking | ### Issue type
Build/Install
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.18.0-rc0
### Custom code
No
### OS platform and distribution
Windows 11
### Mobile device
_No response_
### Python version
3.12.7
### Bazel version
6.5.0
### GCC/compiler version
Clang 18.1.7
### CUDA/cuDNN version
No
### GPU model and memory
_No response_
### Current behavior?
I built the C++ library on Windows with LLVM/Clang 18.1.7 using the command:
```bash
bazel build --config=release_cpu_windows --config=win_clang //tensorflow:tensorflow_cc
```
All compilation works well, but the final linking step fails. It seems a lot of symbols are missing at link time, such as `Session`, `SavedModelBundleInterface`, et al. They are all basic functions or classes and should not be missing.
CUDA was excluded.
I have tried many LLVM/Clang versions from 17 to 19; it looks like the issue has nothing to do with the compiler version.
### Standalone code to reproduce the issue
```shell
bazel build --config=release_cpu_windows --config=win_clang //tensorflow:tensorflow_cc
```
### Relevant log output
```shell
D:/tensorflow/tensorflow/BUILD:1316:21: Linking tensorflow/tensorflow_cc.dll failed: (Exit 1): lld-link.exe failed: error executing command (from target //tensorflow:tensorflow_cc.dll)
cd /d D:/output_base/execroot/org_tensorflow
SET CLANG_COMPILER_PATH=C:\Program Files\LLVM\bin\clang.exe
SET LIB=C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.41.34120\ATLMFC\lib\x64;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.41.34120\lib\x64;C:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\lib\um\x64;C:\Program Files (x86)\Windows Kits\10\lib\10.0.26100.0\ucrt\x64;C:\Program Files (x86)\Windows Kits\10\\lib\10.0.26100.0\\um\x64;C:\Program Files\LLVM\lib\clang\18\lib\windows
SET PATH=C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.41.34120\bin\HostX64\x64;C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\IDE\VC\VCPackages;C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\IDE\CommonExtensions\Microsoft\TestWindow;C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\IDE\CommonExtensions\Microsoft\TeamFoundation\Team Explorer;C:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Current\bin\Roslyn;C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.8 Tools\x64\;C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\IDE\CommonExtensions\Microsoft\FSharp\Tools;C:\Program Files\Microsoft Visual Studio\2022\Community\Team Tools\DiagnosticsHub\Collector;C:\Program Files (x86)\Windows Kits\10\bin\10.0.26100.0\\x64;C:\Program Files (x86)\Windows Kits\10\bin\\x64;C:\Program Files\Microsoft Visual Studio\2022\Community\\MSBuild\Current\Bin\amd64;C:\Windows\Microsoft.NET\Framework64\v4.0.30319;C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\IDE\;C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\Tools\;;C:\WINDOWS\system32;C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\IDE\VC\Linux\bin\ConnectionManagerExe;C:\Program Files\Microsoft Visual Studio\2022\Community\VC\vcpkg
SET PWD=/proc/self/cwd
SET PYTHON_BIN_PATH=C:/Python312/python.exe
SET PYTHON_LIB_PATH=C:/Python312/Lib/site-packages
SET TEMP=E:\tmp
SET TF2_BEHAVIOR=1
SET TMP=E:\tmp
C:\Program Files\LLVM\bin\lld-link.exe @bazel-out/x64_windows-opt/bin/tensorflow/tensorflow_cc.dll-2.params
# Configuration: ee29c35b2efb4ddb1fe39799ba1e7aae463cd78c5b2bb106c9b875ad299c989f
# Execution platform: //tensorflow/tools/toolchains/win:x64_windows-clang-cl
lld-link: warning: ignoring unknown argument '-lm'
lld-link: warning: ignoring unknown argument '-lpthread'
lld-link: warning: ignoring unknown argument '-lm'
lld-link: warning: ignoring unknown argument '-lpthread'
lld-link: warning: ignoring unknown argument '-lm'
lld-link: warning: duplicate symbol: TF_DataTypeSize
>>> defined at tf_datatype.lo.lib(tf_datatype.obj)
>>> defined at libtensorflow_framework.so.2.18.0
lld-link: warning: duplicate symbol: TF_NewBuffer
>>> defined at tf_buffer.lib(tf_buffer.obj)
>>> defined at libtensorflow_framework.so.2.18.0
lld-link: warning: duplicate symbol: TF_NewStatus
>>> defined at tf_status.lib(tf_status.obj)
>>> defined at libtensorflow_framework.so.2.18.0
lld-link: warning: duplicate symbol: TF_GetCode
>>> defined at tf_status.lib(tf_status.obj)
>>> defined at libtensorflow_framework.so.2.18.0
lld-link: warning: duplicate symbol: TF_Message
>>> defined at tf_status.lib(tf_status.obj)
>>> defined at libtensorflow_framework.so.2.18.0
lld-link: warning: duplicate symbol: TF_DeleteStatus
>>> defined at tf_status.lib(tf_status.obj)
>>> defined at libtensorflow_framework.so.2.18.0
lld-link: warning: duplicate symbol: TF_NewBufferFromString
>>> defined at tf_buffer.lib(tf_buffer.obj)
>>> defined at libtensorflow_framework.so.2.18.0
lld-link: warning: duplicate symbol: TF_SetStatus
>>> defined at tf_status.lib(tf_status.obj)
>>> defined at libtensorflow_framework.so.2.18.0
lld-link: warning: duplicate symbol: TF_DeleteTensor
>>> defined at tf_tensor.lib(tf_tensor.obj)
>>> defined at libtensorflow_framework.so.2.18.0
lld-link: warning: duplicate symbol: TF_DeleteBuffer
>>> defined at tf_buffer.lib(tf_buffer.obj)
>>> defined at libtensorflow_framework.so.2.18.0
lld-link: warning: duplicate symbol: TF_TensorData
>>> defined at tf_tensor.lib(tf_tensor.obj)
>>> defined at libtensorflow_framework.so.2.18.0
lld-link: warning: duplicate symbol: TF_NewTensor
>>> defined at tf_tensor.lib(tf_tensor.obj)
>>> defined at libtensorflow_framework.so.2.18.0
lld-link: warning: duplicate symbol: TF_SetPayload
>>> defined at tf_status.lib(tf_status.obj)
>>> defined at libtensorflow_framework.so.2.18.0
lld-link: warning: duplicate symbol: TF_NumDims
>>> defined at tf_tensor.lib(tf_tensor.obj)
>>> defined at libtensorflow_framework.so.2.18.0
lld-link: warning: duplicate symbol: TF_Dim
>>> defined at tf_tensor.lib(tf_tensor.obj)
>>> defined at libtensorflow_framework.so.2.18.0
lld-link: warning: duplicate symbol: TF_AllocateTensor
>>> defined at tf_tensor.lib(tf_tensor.obj)
>>> defined at libtensorflow_framework.so.2.18.0
lld-link: warning: duplicate symbol: TF_TensorBitcastFrom
>>> defined at tf_tensor.lib(tf_tensor.obj)
>>> defined at libtensorflow_framework.so.2.18.0
lld-link: warning: duplicate symbol: TF_TensorElementCount
>>> defined at tf_tensor.lib(tf_tensor.obj)
>>> defined at libtensorflow_framework.so.2.18.0
lld-link: warning: duplicate symbol: TF_ForEachPayload
>>> defined at tf_status.lib(tf_status.obj)
>>> defined at libtensorflow_framework.so.2.18.0
lld-link: error: <root>: undefined symbol: class tensorflow::Session * __cdecl tensorflow::NewSession(struct tensorflow::SessionOptions const &)
lld-link: error: <root>: undefined symbol: public: virtual __cdecl tensorflow::SavedModelBundleInterface::~SavedModelBundleInterface(void)
lld-link: error: <root>: undefined symbol: bool __cdecl tensorflow::MaybeSavedModelDirectory(class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>> const &)
lld-link: error: <root>: undefined symbol: int `private: static class lts_20230802::container_internal::btree<struct absl::lts_20230802::container_internal::set_params<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>, struct std::less<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>>, class std::allocator<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>>, 256, 0>>::btree_node<struct absl::lts_20230802::container_internal::set_params<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>, struct std::less<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>>, class std::allocator<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>>, 256, 0>> * __cdecl absl::lts_20230802::container_internal::btree<struct absl::lts_20230802::container_internal::set_params<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>, struct std::less<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>>, class std::allocator<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>>, 256, 0>>::EmptyNode(void)'::`2'::$TSS0
lld-link: error: <root>: undefined symbol: `public: class std::unique_ptr<class tensorflow::RunHandler, struct std::default_delete<class tensorflow::RunHandler>> __cdecl tensorflow::RunHandlerPool::Impl::Get(__int64, __int64, class tensorflow::RunOptions_Experimental_RunHandlerPoolOptions const &)'::`2'::`local static thread guard'{2}
lld-link: error: <root>: undefined symbol: class std::unordered_map<enum tensorflow::DataType, enum tensorflow::FullTypeId, struct tensorflow::DataTypeHasher, struct std::equal_to<enum tensorflow::DataType>, class std::allocator<struct std::pair<enum tensorflow::DataType const, enum tensorflow::FullTypeId>>> *tensorflow::DT_TO_FT
lld-link: error: <root>: undefined symbol: private: static class std::vector<class tsl::core::RefCountPtr<class tensorflow::Rendezvous>, class std::allocator<class tsl::core::RefCountPtr<class tensorflow::Rendezvous>>> &tensorflow::LocalRendezvous::aborted_rendezs_
lld-link: error: <root>: undefined symbol: private: static class tsl::mutex &tensorflow::LocalRendezvous::aborted_rendezs_mu_
lld-link: error: <root>: undefined symbol: class tsl::monitoring::Counter<2> *tensorflow::metrics::eager_client_error_counter
lld-link: error: <root>: undefined symbol: struct absl::lts_20230802::container_internal::btree<struct absl::lts_20230802::container_internal::set_params<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>, struct std::less<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>>, class std::allocator<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>>, 256, 0>>::EmptyNodeType *`private: static class absl::lts_20230802::container_internal::btree_node<struct absl::lts_20230802::container_internal::set_params<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>, struct std::less<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>>, class std::allocator<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>>, 256, 0>> * __cdecl absl::lts_20230802::container_internal::btree<struct absl::lts_20230802::container_internal::set_params<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>, struct std::less<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>>, class std::allocator<class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>>>, 256, 0>>::EmptyNode(void)'::`2'::empty_node
lld-link: error: <root>: undefined symbol: private: static class tsl::mutex tensorflow::tfdbg::DebugEventsWriter::factory_mu_
lld-link: error: <root>: undefined symbol: char const *tensorflow::kDisableJitKernelsEnvVar
lld-link: error: <root>: undefined symbol: public: static __int64 tensorflow::CollectiveExecutor::kInvalidId
lld-link: error: <root>: undefined symbol: char const *tensorflow::kJitKernelLabel
lld-link: error: <root>: undefined symbol: public: static class std::basic_string<char, struct std::char_traits<char>, class std::allocator<char>> const tensorflow::LogMemory::kLogMemoryLabel
lld-link: error: <root>: undefined symbol: class tsl::monitoring::Counter<5> *tensorflow::metrics::mlir_bridge_first_phase_counter
lld-link: error: <root>: undefined symbol: class tsl::monitoring::Counter<1> *tensorflow::metrics::mlir_second_phase_count
lld-link: error: <root>: undefined symbol: void * (__cdecl *nsync::nsync_malloc_ptr_)(unsigned __int64)
lld-link: error: <root>: undefined symbol: struct nsync::lock_type_s *nsync::nsync_reader_type_
lld-link: error: <root>: undefined symbol: struct nsync::lock_type_s *nsync::nsync_writer_type_
lld-link: error: too many errors emitted, stopping now (use /errorlimit:0 to see all errors)
```
| stat:awaiting tensorflower,type:build/install,subtype:windows,2.18.rc | medium | Critical |
2,570,992,692 | three.js | BatchedMesh Example much slower on WebGPU than WebGL on Android | ### Description
On Android (Samsung Galaxy S20 FE), the BatchedMesh example is much slower with WebGPU than with WebGL:
WebGPU : `~13FPS`
WebGL : `~25FPS`


### Reproduction steps
1. Load https://threejs.org/examples/?q=bat#webgpu_mesh_batch on Android
2. Enable/disable WebGPU
### Code
```js
```
### Live example
``
### Screenshots
_No response_
### Version
r169
### Device
Mobile
### Browser
Chrome
### OS
Android | Device Issue,WebGPU | low | Major |
2,570,994,165 | godot | GraphNode Size bugging out when re-creating every port on the node. | ### Tested versions
- Reproducible in v4.4.dev3.official [f4af8201b]
### System information
Godot v4.4.dev3 - Arch Linux #1 SMP PREEMPT_DYNAMIC Fri, 04 Oct 2024 21:51:11 +0000 on X11 - X11 display driver, Multi-window, 1 monitor - Vulkan (Mobile) - integrated Intel(R) UHD Graphics (JSL) - Intel(R) Pentium(R) Silver N6000 @ 1.10GHz (4 threads)
### Issue description
When re-calculating ports using a custom method, the GraphNode's vertical size doubles. Using the remote scene tree to set offset_bottom to 0 fixes it, but trying to set the same property via code does nothing.
### Steps to reproduce
I honestly don't know. Something about adding and then removing labels causes something to break. Anything I tried to remove appeared to fix the issue, and trying to reproduce it from scratch caused nothing to happen.
### Minimal reproduction project (MRP)
[test-project.zip](https://github.com/user-attachments/files/17282173/test-project.zip)
| bug,topic:gui | low | Critical |
2,571,004,186 | storybook | [Bug]: non-unique control input ids | ### Describe the bug
When writing custom component documentation (ie not autodocs) with multiple `<Controls />` blocks on a page, changing the value of a control in one instance of `<Controls />` can manipulate the control in another instance instead. The issue is that the input elements in each `<Controls />` block share the same `id` and `name` attributes per arg. For example, for the arg `size` with 3 options, each instance of `<Controls />` will render a radio group using the `id`s `control-size-0`, `control-size-1`, and `control-size-2`.
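To make the mechanism concrete, here is a minimal, hedged sketch (the function and the per-instance key are invented for illustration and are not part of Storybook's actual implementation): id generation that depends only on the arg name and option index collides across `<Controls />` instances, while a per-instance prefix keeps ids unique.

```python
def control_ids(args, instance_key=None):
    """Generate DOM ids for radio options, optionally namespaced per Controls block."""
    prefix = f"{instance_key}-" if instance_key else ""
    return [f"{prefix}control-{arg}-{i}" for arg, n_options in args for i in range(n_options)]

args = [("size", 3)]

# Two <Controls /> blocks on the same docs page, with no namespacing:
first = control_ids(args)
second = control_ids(args)
assert set(first) & set(second)  # duplicate DOM ids -> the inputs cross-talk

# Namespacing by a per-instance key avoids the collision:
assert not (set(control_ids(args, "s1")) & set(control_ids(args, "s2")))
```

With the unprefixed version, both blocks emit `control-size-0`, `control-size-1`, and `control-size-2`, which matches the behavior shown in the recording.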
Here's a recording from the reproduction link below:
https://github.com/user-attachments/assets/4aaa57b2-7213-4652-820b-718188949d24
### Reproduction link
https://stackblitz.com/edit/github-ygcv3v?file=src%2Fstories%2Fbutton.mdx
### Reproduction steps
1. Go to the above link
2. Click on one of the size options in the controls for section 2
3. Notice that manipulating the controls in section 2 changes the controls in section 1
### System
Storybook Environment Info:
System:
OS: Linux 5.0 undefined
CPU: (8) x64 Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
Shell: 1.0 - /bin/jsh
Binaries:
Node: 18.20.3 - /usr/local/bin/node
Yarn: 1.22.19 - /usr/local/bin/yarn
npm: 10.2.3 - /usr/local/bin/npm <----- active
pnpm: 8.15.6 - /usr/local/bin/pnpm
npmPackages:
@storybook/addon-essentials: ^8.4.0-alpha.4 => 8.4.0-alpha.4
@storybook/addon-interactions: ^8.4.0-alpha.4 => 8.4.0-alpha.4
@storybook/addon-onboarding: ^8.4.0-alpha.4 => 8.4.0-alpha.4
@storybook/blocks: ^8.4.0-alpha.4 => 8.4.0-alpha.4
@storybook/react: ^8.4.0-alpha.4 => 8.4.0-alpha.4
@storybook/react-vite: ^8.4.0-alpha.4 => 8.4.0-alpha.4
@storybook/test: ^8.4.0-alpha.4 => 8.4.0-alpha.4
storybook: ^8.4.0-alpha.4 => 8.4.0-alpha.4
### Additional context
_No response_ | bug,help wanted,block: other | low | Critical |
2,571,009,776 | godot | Texture bias does not work in web builds | ### Tested versions
Godot 4.3 stable
### System information
Windows 10 - Compatibility - Godot 4.3 Stable
### Issue description
Using a bias argument with any texture sampling method in shaders results in a broken, non-working shader in web (HTML5) exports.
```glsl
shader_type canvas_item;
uniform sampler2D screen_texture : hint_screen_texture, filter_linear_mipmap_anisotropic, repeat_disable;
uniform float hpass : hint_range(0.0, 1.0, 0.1) = 1.0;
uniform float vpass : hint_range(0.0, 1.0, 0.1) = 1.0;
uniform int radius : hint_range(0, 65, 1) = 65;
render_mode blend_add;
vec4 textureThresholded(sampler2D _texture, vec2 _uv, float _bias) {
	vec4 pixel = textureLod(_texture, _uv, _bias);
	if ( pixel.r <= 1. && pixel.g <= 1. && pixel.b <= 1. ) {
		pixel.rgb = vec3(0.);
	}
	return pixel;
}

void fragment() {
	vec4 pixel = textureThresholded(screen_texture, SCREEN_UV, 0.);
	if (radius != 0) {
		vec4 blurred = vec4(0., 0., 0., 1.);
		float[65] w = {0.0064, 0.0063, 0.0062, 0.0061, 0.006,
			0.0059, 0.0058, 0.0057, 0.0056, 0.0055, 0.0054, 0.0053, 0.0052, 0.0051, 0.005,
			0.0049, 0.0048, 0.0047, 0.0046, 0.0054, 0.0044, 0.0043, 0.0042, 0.0041, 0.004,
			0.0039, 0.0038, 0.0037, 0.0036, 0.0043, 0.0034, 0.0033, 0.0032, 0.0031, 0.003,
			0.0029, 0.0028, 0.0027, 0.0026, 0.0052, 0.0024, 0.0023, 0.0022, 0.0021, 0.002,
			0.0019, 0.0018, 0.0017, 0.0016, 0.0051, 0.0014, 0.0013, 0.0012, 0.0011, 0.001,
			0.0009, 0.0008, 0.0007, 0.0006, 0.0005, 0.0004, 0.0003, 0.0002, 0.0001, 0.
		};
		float px = 1. / float(textureSize(screen_texture, 0).x);
		float py = 1. / float(textureSize(screen_texture, 0).y);
		for (int i = 0; i < radius; i++) {
			float k = float(i + 1);
			blurred += textureThresholded(screen_texture, SCREEN_UV + vec2(float(i) * cos(float(i)), 0) * vec2(px, py) * vec2(hpass, vpass), k) * w[i];
			blurred += textureThresholded(screen_texture, SCREEN_UV + vec2(float(-i) * cos(float(i)), 0) * vec2(px, py) * vec2(hpass, vpass), k) * w[i];
			blurred += textureThresholded(screen_texture, SCREEN_UV + vec2(0, float(i) * sin(float(i))) * vec2(px, py) * vec2(hpass, vpass), k) * w[i];
			blurred += textureThresholded(screen_texture, SCREEN_UV + vec2(0, float(-i) * sin(float(i))) * vec2(px, py) * vec2(hpass, vpass), k) * w[i];
			blurred += textureThresholded(screen_texture, SCREEN_UV + vec2(float(i) * cos(float(i)), float(i) * sin(float(i))) * vec2(px, py) * vec2(hpass, vpass), k) * w[i];
			blurred += textureThresholded(screen_texture, SCREEN_UV + vec2(float(-i) * cos(float(i)), float(-i) * sin(float(i))) * vec2(px, py) * vec2(hpass, vpass), k) * w[i];
			blurred += textureThresholded(screen_texture, SCREEN_UV + vec2(float(i) * cos(float(i)), float(-i) * sin(float(i))) * vec2(px, py) * vec2(hpass, vpass), k) * w[i];
			blurred += textureThresholded(screen_texture, SCREEN_UV + vec2(float(-i) * cos(float(i)), float(i) * sin(float(i))) * vec2(px, py) * vec2(hpass, vpass), k) * w[i];
			blurred += textureThresholded(screen_texture, SCREEN_UV + vec2(float(-i) * cos(float(i)), float(i) * sin(float(i))) * vec2(px, py) * vec2(hpass, vpass), k) * w[i];
			blurred += textureThresholded(screen_texture, SCREEN_UV + vec2(float(i) * cos(float(i)), float(-i) * sin(float(i))) * vec2(px, py) * vec2(hpass, vpass), k) * w[i];
		}
		blurred /= float(radius) / 6.;
		pixel += blurred;
	} else {
		pixel = vec4(0., 0., 0., 1.);
	}
	COLOR = pixel;
}
```
I was coding a glow post process using shaders to avoid using the world environment node.
It was successful; however, when exporting this shader to the web it did not work. I was able to figure out that the bias in the texture method was the problem, so I tried replacing it with textureLod, but it had the same issue.
desktop version:

https://github.com/user-attachments/assets/c3e9dd3e-06d9-4b6d-a751-5724ec75eb40
web version:

### Steps to reproduce
Create a CanvasLayer node, attach a Control and a ColorRect inside it, and make them Full Rect.
Attach this shader code to the ColorRect:
```glsl
shader_type canvas_item;
uniform sampler2D screen_texture : hint_screen_texture, filter_linear_mipmap_anisotropic, repeat_disable;
uniform float hpass : hint_range(0.0, 1.0, 0.1) = 1.0;
uniform float vpass : hint_range(0.0, 1.0, 0.1) = 1.0;
uniform int radius : hint_range(0, 65, 1) = 65;
render_mode blend_add;
vec4 textureThresholded(sampler2D _texture, vec2 _uv, float _bias) {
	vec4 pixel = textureLod(_texture, _uv, _bias);
	if ( pixel.r <= 1. && pixel.g <= 1. && pixel.b <= 1. ) {
		pixel.rgb = vec3(0.);
	}
	return pixel;
}

void fragment() {
	vec4 pixel = textureThresholded(screen_texture, SCREEN_UV, 0.);
	if (radius != 0) {
		vec4 blurred = vec4(0., 0., 0., 1.);
		float[65] w = {0.0064, 0.0063, 0.0062, 0.0061, 0.006,
			0.0059, 0.0058, 0.0057, 0.0056, 0.0055, 0.0054, 0.0053, 0.0052, 0.0051, 0.005,
			0.0049, 0.0048, 0.0047, 0.0046, 0.0054, 0.0044, 0.0043, 0.0042, 0.0041, 0.004,
			0.0039, 0.0038, 0.0037, 0.0036, 0.0043, 0.0034, 0.0033, 0.0032, 0.0031, 0.003,
			0.0029, 0.0028, 0.0027, 0.0026, 0.0052, 0.0024, 0.0023, 0.0022, 0.0021, 0.002,
			0.0019, 0.0018, 0.0017, 0.0016, 0.0051, 0.0014, 0.0013, 0.0012, 0.0011, 0.001,
			0.0009, 0.0008, 0.0007, 0.0006, 0.0005, 0.0004, 0.0003, 0.0002, 0.0001, 0.
		};
		float px = 1. / float(textureSize(screen_texture, 0).x);
		float py = 1. / float(textureSize(screen_texture, 0).y);
		for (int i = 0; i < radius; i++) {
			float k = float(i + 1);
			blurred += textureThresholded(screen_texture, SCREEN_UV + vec2(float(i) * cos(float(i)), 0) * vec2(px, py) * vec2(hpass, vpass), k) * w[i];
			blurred += textureThresholded(screen_texture, SCREEN_UV + vec2(float(-i) * cos(float(i)), 0) * vec2(px, py) * vec2(hpass, vpass), k) * w[i];
			blurred += textureThresholded(screen_texture, SCREEN_UV + vec2(0, float(i) * sin(float(i))) * vec2(px, py) * vec2(hpass, vpass), k) * w[i];
			blurred += textureThresholded(screen_texture, SCREEN_UV + vec2(0, float(-i) * sin(float(i))) * vec2(px, py) * vec2(hpass, vpass), k) * w[i];
			blurred += textureThresholded(screen_texture, SCREEN_UV + vec2(float(i) * cos(float(i)), float(i) * sin(float(i))) * vec2(px, py) * vec2(hpass, vpass), k) * w[i];
			blurred += textureThresholded(screen_texture, SCREEN_UV + vec2(float(-i) * cos(float(i)), float(-i) * sin(float(i))) * vec2(px, py) * vec2(hpass, vpass), k) * w[i];
			blurred += textureThresholded(screen_texture, SCREEN_UV + vec2(float(i) * cos(float(i)), float(-i) * sin(float(i))) * vec2(px, py) * vec2(hpass, vpass), k) * w[i];
			blurred += textureThresholded(screen_texture, SCREEN_UV + vec2(float(-i) * cos(float(i)), float(i) * sin(float(i))) * vec2(px, py) * vec2(hpass, vpass), k) * w[i];
			blurred += textureThresholded(screen_texture, SCREEN_UV + vec2(float(-i) * cos(float(i)), float(i) * sin(float(i))) * vec2(px, py) * vec2(hpass, vpass), k) * w[i];
			blurred += textureThresholded(screen_texture, SCREEN_UV + vec2(float(i) * cos(float(i)), float(-i) * sin(float(i))) * vec2(px, py) * vec2(hpass, vpass), k) * w[i];
		}
		blurred /= float(radius) / 6.;
		pixel += blurred;
	} else {
		pixel = vec4(0., 0., 0., 1.);
	}
	COLOR = pixel;
}
```
make sure HDR 2D is enabled in the project settings



### Minimal reproduction project (MRP)
[Glowing.zip](https://github.com/user-attachments/files/17282271/Glowing.zip)
| bug,platform:web,topic:rendering | low | Critical |
2,571,017,518 | vscode | Validate data in chat API more thoroughly | Any chat participant with a bug could have caused the issue in- https://github.com/microsoft/vscode-copilot-release/issues/1723. It shouldn't be possible for an extension to break the chat list like this. We should validate their data at the extHost* file layer, and also make the chat renderer more resilient to rendering bugs. | debt,panel-chat | low | Critical |
2,571,045,132 | flutter | Make a new "experiments lab" tool feature to be opted into all current and future feature experiment configs | Some new features are implemented in Flutter as opt-in configs. Impeller, [Swift Package Manager](https://github.com/flutter/website/blob/ad2ebeea0ffc62b496f66c53633cb3f42b302737/src/_includes/docs/swift-package-manager/how-to-enable-disable.md?plain=1#L18-L22), [web](https://github.com/flutter/website/blob/041bc7dca4ca1104c2da007d421b7667937c17d4/src/docs/development/platform-integration/web.md?plain=1#L49-L53), and desktop support come to mind. Every time we add one we need publicize that the new feature exists. Often this involves adding to the technical blog post, the website, emailing Flutter Insiders, and pushing on social mdeia.
It would be neat to have a new config `flutter config --enable-all-experiments` or similar for Flutter Insiders or other groups that want to try out all the bleeding edge features, regardless of what channel they are on.
This should be prominently called out in `flutter doctor`, possibly including instructions for how to turn it off. | tool,P2,team-tool,triaged-tool | low | Minor |
2,571,095,685 | PowerToys | Request for a new feature | ### Description of the new feature / enhancement
Is it possible to have a toggle switch added to a zone in FancyZones to apply a screen "filter" for those pages or windows that don't have "dark" mode setting (similar to attaching a physical screen filter to dull/dim the bright white appearance)?
### Scenario when this would be used?
This would be used on a web page or software screen where it does not have a dark mode setting, to alleviate eye strain/fatigue.
### Supporting information
_No response_ | Needs-Triage,Needs-Team-Response | low | Minor |
2,571,116,215 | pytorch | Inductor removes alias from graph, causing id check to give wrong result in custom ops. | Given the custom op not_eq_impl below, it's expected that x and y do not have the same id given the post-grad
graph below, but they do have the same id!
```
@torch.library.impl("mylib::not_eq", "cpu", lib=lib)
@torch._dynamo.disable
def not_eq_impl(x, y):
    print(id(x))
    print(id(y))
    self.assertNotEqual(id(x), id(y))

def func(x):
    a = torch.ops.aten.alias.default(x)
    torch.ops.mylib.not_eq(a, x)
```
although the post-grad graph is:
```
TRACED GRAPH
===== AFTER POST GRAD =====
/home/lsakka/pytorch/torch/fx/_lazy_graph_module.py class <lambda>(torch.nn.Module):
    def forward(self, arg0_1: "f32[2, 2][2, 1]cpu"):
        # No stacktrace found for following nodes
        alias_default: "f32[2, 2][2, 1]cpu" = torch.ops.aten.alias.default(arg0_1)
        not_eq_default = torch.ops.mylib.not_eq.default(arg0_1, alias_default); alias_default = not_eq_default = None

        # File: /home/lsakka/pytorch/test/inductor/test_auto_functionalize.py:1147 in func, code: torch.ops.mylib.not_eq(a, x)
        copy_: "f32[2, 2][2, 1]cpu" = torch.ops.aten.copy_.default(arg0_1, arg0_1); arg0_1 = copy_ = None
        return ()
```
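The failure mode generalizes beyond torch: any pass that folds a no-op alias away hands the consumer the very same object twice, flipping identity checks that held before optimization. A minimal plain-Python sketch (all names are illustrative; this is not the torch implementation):

```python
def alias(x):
    # Pre-optimization semantics: alias() yields a distinct wrapper object,
    # so its result never shares an id with its input.
    return list(x)

def not_eq(a, b):
    # A consumer that, like the custom op above, relies on its two
    # arguments being distinct objects.
    return id(a) != id(b)

x = [1.0, 2.0]
assert not_eq(alias(x), x)  # before optimization: the ids differ

# An "alias removal" pass rewrites alias(x) -> x, so the consumer now
# receives the same object twice and the identity check flips.
assert not not_eq(x, x)  # after optimization: same id
```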
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @rec @zou3519 @bdhirsh | triaged,module: custom-operators,oncall: pt2,module: inductor,module: pt2-dispatcher | low | Minor |
2,571,124,317 | ollama | XML boot file for prompt and respond | A boot XML file with classes to handle structured data and logic is a more robust and maintainable approach than using complex template syntax. Here's why:
1. Encapsulation of Logic:
Class-based Approach: By encapsulating functionality within classes, you create a clear boundary around your logic. This leads to better organization and easier debugging.
Templates: On the other hand, templates that rely heavily on conditionals can become tangled and difficult to manage as the project grows.
2. Reusability:
Classes: Classes can be reused across different parts of the application without rewriting logic. This promotes DRY (Don't Repeat Yourself) principles and allows for easier updates.
Template Syntax: Modifying template logic can often require changes in multiple places, increasing the risk of introducing bugs.
3. Readability and Maintainability:
Class Definitions: Clear class definitions with well-named methods can enhance readability. Other developers can quickly grasp what a class does just by looking at its interface.
Template Logic: Templated syntax, especially when it gets convoluted with conditions, can make it challenging for others (or even the original developer) to understand the flow of logic without extensive comments.
4. Separation of Concerns:
Class-based Systems: This promotes a separation of concerns, where data representation, logic, and presentation can be managed independently. For example, your ai.system class could handle loading, validating, and processing XML data without being intermingled with how that data is presented.
Templates: This can lead to a conflation of data and presentation logic, which can complicate maintenance and testing.
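Points 1-4 can be made concrete with a small sketch. The class name, tag names, and schema below are invented for illustration (they are not part of any Ollama API); the point is that loading and validation live behind one class boundary:

```python
import xml.etree.ElementTree as ET

class PromptConfig:
    """Loads and validates a prompt/respond boot file; callers never touch parsing."""

    def __init__(self, xml_text: str):
        root = ET.fromstring(xml_text)
        if root.tag != "system":
            raise ValueError(f"expected <system> root, got <{root.tag}>")
        prompt = root.find("prompt")
        if prompt is None or not (prompt.text or "").strip():
            raise ValueError("missing <prompt> element")
        self.prompt = prompt.text.strip()
        # Optional element: default to an empty string if absent.
        self.respond = (root.findtext("respond") or "").strip()

boot = "<system><prompt>You are helpful.</prompt><respond>concise</respond></system>"
cfg = PromptConfig(boot)
print(cfg.prompt)   # -> You are helpful.
print(cfg.respond)  # -> concise
```

Callers work with `cfg.prompt` and `cfg.respond` directly; changing the storage format later only touches this one class, which is the encapsulation and separation-of-concerns argument in practice.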
The approach to using classes and structured XML is more aligned with software engineering best practices. It fosters a system that can evolve without the baggage of overly complex templates, making it easier for developers to work with and understand. If the goal is to develop a sustainable and maintainable AI system, sticking to solid OOP principles and clear data representations will always be the better choice. | feature request | low | Critical |
2,571,153,190 | tauri | [bug] Tauri 2: productName from tauri.conf.json not working as expected for iOS | ### Describe the bug
When building the app on macOS, the dmg file is correctly named after productName from tauri.conf.json (or the platform-specific one), but on iOS, when productName in the iOS config differs from the Rust package name, deploying to the simulator does not work, the ipa file is not named correctly, and the app has the wrong name.
Rust name: iostest
Product name: Product
```
tauri build
```
macOS dmg file: Product_0.1.0_aarch64.dmg
macOS app name: Product
```
tauri ios build
```
iOS ipa name: iostest.ipa
iOS app name: iostest
```tauri ios dev```
This fails with ```An application bundle was not found at the provided path.```
### Reproduction
https://github.com/skurovec/iostest
Just need to provide you own app identifier and provisioning.
### Expected behavior
Expected the iOS ipa file to be named Product.ipa and the app name to be Product.
### Full `tauri info` output
```text
$ tauri info
[✔] Environment
- OS: Mac OS 15.0.1 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.81.0 (eeb90cda1 2024-09-04)
✔ cargo: 1.81.0 (2dbb1af80 2024-08-20)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (default)
- node: 22.9.0
- yarn: 1.22.22
- npm: 10.8.3
[-] Packages
- tauri 🦀: 2.0.2
- tauri-build 🦀: 2.0.1
- wry 🦀: 0.44.1
- tao 🦀: 0.30.3
- tauri-cli 🦀: 1.4.0
- @tauri-apps/api : not installed!
- @tauri-apps/cli : 2.0.2
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.1
- @tauri-apps/plugin-shell : not installed!
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../src
[-] iOS
- Developer Teams: XXXXXXX
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage,platform: iOS | low | Critical |
2,571,165,499 | godot | 3D Visible Collision Shapes are interpolated | ### Tested versions
4.4dev3
### System information
Godot v4.4.dev3 - Windows 10.0.19045 - Multi-window, 1 monitor - OpenGL 3 (Compatibility) - NVIDIA GeForce GTX 1060 (NVIDIA; 31.0.15.2802) - Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz (8 threads)
### Issue description
When the "Visible Collision Shapes" debug setting is enabled, the meshes drawn for 3D Collision Shapes are interpolated. 2D Collision Shapes are not affected. The meshes are interpolated even if interpolation is disabled for the Collision Shape node, or if inheriting disabled interpolation from the parent node. Because this setting is intended to visualize the position of the shapes, it doesn't make much sense to interpolate the debug meshes, as this causes a disparity between the rendered position and the true position of the node
### Steps to reproduce
The MRP contains a scene with a sphere that updates its position during physics processing. The physics tickrate is set to 10 to make the interpolation more obvious. The sphere has physics interpolation disabled, to show where the collider is supposed to be.
1. Enable "Debug > Visible Collision Shapes"
2. Run the main scene
3. Observe that while the shape's graphics are uninterpolated as intended, the debug mesh drawn for it is still interpolated
### Minimal reproduction project (MRP)
[interpolateddebugshapes.zip](https://github.com/user-attachments/files/17283045/interpolateddebugshapes.zip)
| topic:physics | low | Critical |
2,571,190,937 | TypeScript | TS Server fatal error: Maximum call stack size exceeded | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: v1.94
- OS Version: macos 14.2.1
Steps to Reproduce:
1. clone the [minimal reproduction](https://github.com/MoeYc/vscode-ts-break-case.git) repo
2. open `src/file-0.ts`
3. TS server crashed
### Additional info
1. We have a repo with `1000+` TS files and `100k` lines of code. TS `<= v5.4.5` works fine, but `> v5.4.5` doesn't work.
2. The minimal reproduction has `1500` TS files and `200k` lines of code, and TS `v5.4.5` doesn't work.
3. In real cases, we will have a repo with `10,000+` TS files and `4 million` lines of code. I hope the TS LSP can work in this scenario.
| Bug | low | Critical |
2,571,206,342 | godot | ProgressBar from saved scene has its size changed by a Control node after reopening the scene again. | ### Tested versions
4.4.dev3, 4.3 standard, 4.3-mono, 4.3.1-mono-rc
### System information
Windows 11 - Vulkan - Nvidia RTX 4070 - intel i5 13600KF
### Issue description
I have a scene which has a Control node (nothing else) and a ProgressBar; this ProgressBar was saved as a separate scene. Now, when I close the Control scene and reopen it, the ProgressBar gets resized if it sits around the border (not outside, but precisely on the border; in the video it's the top-left corner). If I put the ProgressBar away from the edges/corners it seems to be fine. The same happens when the Control scene is used in another scene as a UI, for example.
Watch the video below please.
https://github.com/user-attachments/assets/ef296f54-fce9-46ab-a3d6-31dc47490809
From what I can tell, the ProgressBar resizes to fit the font size despite Show Percentage being unchecked. I also tried overriding the font size to 1 px, but that did not change this behaviour at all.
### Steps to reproduce
1. Create a new Control scene with a Control Node as the top node.
2. Add a ProgressBar (name it however you want).
3. Save the scene (I'll refer to this file as Control scene).
4. Resize the ProgressBar down (it needs to be smaller than some unknown threshold): Layout -> Transform -> Size (67, 5)
5. Uncheck the ProgressBar's Show Percentage (make sure it doesn't show any numbers, otherwise you can't resize it below the font size limit!)
6. Right click the ProgressBar and save it as a scene (I'll refer to this file as a Progress scene).
7. Now close the Control scene.
8. Open the Control scene again.
9. (If it did not work, you might need to delete the ProgressBar from the scene, drag and drop the saved Progress scene into the Control node, and make sure you keep it at the origin - the top-left corner.)
10. Observe the ProgressBar expanding to some other size (it might be related to some default font size; at least that's what it looks like it resizes to).
### Minimal reproduction project (MRP)
[ProgressBarMrp.zip](https://github.com/user-attachments/files/17283184/ProgressBarMrp.zip)
| bug,topic:editor,topic:gui | low | Minor |
2,571,219,230 | tensorflow | `SimpleDynamicBuffer::AddString` is calling `memcpy` with null data | I've noticed this hitting on our ubsan builds recently:
```
../../third_party/tflite/src/tensorflow/compiler/mlir/lite/utils/string_utils.cc:32:10: runtime error: null pointer passed as argument 1, which is declared to never be null
../../build/linux/debian_bullseye_amd64-sysroot/usr/include/string.h:44:28: note: nonnull attribute specified here
#0 0x5a36de826450 in mlir::TFL::SimpleDynamicBuffer::AddString(char const*, unsigned long) third_party/tflite/src/tensorflow/compiler/mlir/lite/utils/string_utils.cc:32:3
#1 0x5a36de825d3e in tflite::DynamicBuffer::AddString(char const*, unsigned long) third_party/tflite/src/tensorflow/lite/string_util.cc:37:28
#2 0x5a36de82924d in PopulateTensor<std::__Cr::basic_string<char, std::__Cr::char_traits<char>, std::__Cr::allocator<char> > > third_party/tflite_support/src/tensorflow_lite_support/cc/task/core/task_utils.h:125:13
#3 0x5a36de82924d in tflite::task::processor::UniversalSentenceEncoderPreprocessor::Preprocess(std::__Cr::basic_string<char, std::__Cr::char_traits<char>, std::__Cr::allocator<char>> const&) third_party/tflite_support/src/tensorflow_lite_support/cc/task/processor/universal_sentence_encoder_preprocessor.cc:58:3
#4 0x5a36de81d3f7 in tflite::task::text::TextEmbedder::Preprocess(std::__Cr::vector<TfLiteTensor*, std::__Cr::allocator<TfLiteTensor*>> const&, std::__Cr::basic_string<char, std::__Cr::char_traits<char>, std::__Cr::allocator<char>> const&) third_party/tflite_support/src/tensorflow_lite_support/cc/task/text/text_embedder.cc:174:25
#5 0x5a36de81cd8c in tflite::task::core::BaseTaskApi<tflite::task::processor::EmbeddingResult, std::__Cr::basic_string<char, std::__Cr::char_traits<char>, std::__Cr::allocator<char>> const&>::InferWithFallback(std::__Cr::basic_string<char, std::__Cr::char_traits<char>, std::__Cr::allocator<char>> const&) third_party/tflite_support/src/tensorflow_lite_support/cc/task/core/base_task_api.h:146:5
#6 0x5a36de81cc40 in tflite::task::text::TextEmbedder::Embed(std::__Cr::basic_string<char, std::__Cr::char_traits<char>, std::__Cr::allocator<char>> const&) third_party/tflite_support/src/tensorflow_lite_support/cc/task/text/text_embedder.cc:169:10
#7 0x5a36d2728c24 in ai_chat::TextEmbedder::EmbedText(std::__Cr::basic_string<char, std::__Cr::char_traits<char>, std::__Cr::allocator<char>> const&, tflite::task::processor::EmbeddingResult&) brave/components/ai_chat/core/browser/text_embedder.cc:271:49
#8 0x5a36d2728073 in ai_chat::TextEmbedder::EmbedSegments() brave/components/ai_chat/core/browser/text_embedder.cc:287:19
#9 0x5a36c8c67059 in ai_chat::TextEmbedderUnitTest::EmbedSegments(ai_chat::TextEmbedder*)::'lambda'()::operator()() const brave/components/ai_chat/core/browser/text_embedder_unittest.cc:67:58
#10 0x5a36c6762969 in base::OnceCallback<void ()>::Run() && base/functional/callback.h:156:12
#11 0x5a36d4977df2 in base::TaskAnnotator::RunTaskImpl(base::PendingTask&) base/task/common/task_annotator.cc:202:34
#12 0x5a36d49dcfe9 in RunTask<(lambda at ../../base/task/thread_pool/task_tracker.cc:678:35)> base/task/common/task_annotator.h:90:5
#13 0x5a36d49dcfe9 in base::internal::TaskTracker::RunTaskImpl(base::internal::Task&, base::TaskTraits const&, base::internal::TaskSource*, base::internal::SequenceToken const&) base/task/thread_pool/task_tracker.cc:677:19
#14 0x5a36d49dd0f1 in base::internal::TaskTracker::RunSkipOnShutdown(base::internal::Task&, base::TaskTraits const&, base::internal::TaskSource*, base::internal::SequenceToken const&) base/task/thread_pool/task_tracker.cc:662:3
#15 0x5a36d49dc1f5 in base::internal::TaskTracker::RunTask(base::internal::Task, base::internal::TaskSource*, base::TaskTraits const&) base/task/thread_pool/task_tracker.cc:520:5
#16 0x5a36d4af81fb in base::test::TaskEnvironment::TestTaskTracker::RunTask(base::internal::Task, base::internal::TaskSource*, base::TaskTraits const&) base/test/task_environment.cc:1028:46
#17 0x5a36d49db5a5 in base::internal::TaskTracker::RunAndPopNextTask(base::internal::RegisteredTaskSource) base/task/thread_pool/task_tracker.cc:415:5
#18 0x5a36d4a0cabd in base::internal::WorkerThread::RunWorker() base/task/thread_pool/worker_thread.cc:493:36
#19 0x5a36d4a0c100 in base::internal::WorkerThread::RunPooledWorker() base/task/thread_pool/worker_thread.cc:379:3
#20 0x5a36d4a0bc86 in base::internal::WorkerThread::ThreadMain() base/task/thread_pool/worker_thread.cc:359:7
#21 0x5a36d4a3e1ec in base::(anonymous namespace)::ThreadFunc(void*) base/threading/platform_thread_posix.cc:101:13
#22 0x7695b109ca93 in start_thread nptl/pthread_create.c:447:8
#23 0x7695b1129c3b in clone3 misc/../sysdeps/unix/sysv/linux/x86_64/clone3.S:78
``` | type:bug,comp:lite,TFLiteConverter,awaiting PR merge | medium | Critical |
2,571,231,738 | godot | Editor window stops updating when launching project in top left corner on second monitor | ### Tested versions
- Reproducible in 4.1-dev1 onward
- Not reproducible in 4.0.4-stable
### System information
Godot v4.3.stable (77dcf97d8) - Arch Linux #1 SMP PREEMPT_DYNAMIC Fri, 04 Oct 2024 21:51:11 +0000 - Wayland - Vulkan (Mobile) - dedicated AMD Radeon RX 6600 (RADV NAVI23) - AMD Ryzen 7 5700G with Radeon Graphics (16 Threads)
### Issue description
It only happens with freesync enabled on my main monitor and run/window_placement/rect set to top left on my second monitor. The editor starts to refresh again after refocusing it.
### Steps to reproduce
- Create new project.
- Set run/window_placement/rect to top left.
- Set run/window_placement/screen to next screen, previous screen, or screen 2.
### Minimal reproduction project (MRP)
n/a | bug,topic:editor | low | Minor |
2,571,232,500 | TypeScript | Ability to pick setter types (instead of getter types) in a mapped type | ### 🔍 Search Terms
typescript extract setter types mapped type
### ✅ Viability Checklist
- [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [X] This wouldn't change the runtime behavior of existing JavaScript code
- [X] This could be implemented without emitting different JS based on the types of the expressions
- [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [ ] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [X] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
([original thread](https://discord.com/channels/508357248330760243/1292604210591825921) on Discord)
This might be a request for a new utility type, or a new language feature, because there seems to be no way to do it with current language features:
Given this type:
```ts
class Foo {
get foo(): number {...}
set foo(v: 'foo' | 'bar' | number) {...}
}
```
We want to derive a type like this:
```ts
type Derived = PickWithSetterTypes<Foo>
// result:
// {foo: 'foo' | 'bar' | number}
```
There seems to be no way to implement `PickWithSetterTypes` in userland.
### 📃 Motivating Example
When writing class-based JSX components (for example, custom elements are written as classes), the JSX property types are *setters, not getters*.
In this JSX expression:
```tsx
return <some-element foo={123} />
```
The `foo` prop is a *setter*: it is setting the property on the custom element, *not reading* the property from the element.
So, when we define a class component, for example:
```ts
class SomeElement extends HTMLElement {
get foo(): number {...}
set foo(v: 'foo' | 'bar' | number) {...}
someMethod() {... this method should not be included in JSX types ...}
}
customElements.define('some-element', SomeElement)
```
We need to pluck the properties that we want available in the JSX. For example, something along the lines of this:
```js
declare module 'some-lib' {
namespace JSX {
interface IntrinsicElements {
'some-element': Pick<SomeElement, 'foo'>
}
}
}
```
Now, the problem is, when we try to set a valid value for the property in JSX, it will not work:
```tsx
return <some-element
foo={'foo'} // Type Error: 'foo' is not assignable to number
/>
```
There should not be a type error, because the setter actually does accept the value `'foo'`.
### 💻 Use Cases
1. What do you want to use this for?
   - Any situations where the setter types need to be extracted, for example JSX props
2. What shortcomings exist with current approaches?
   - It is impossible right now
3. What workarounds are you using in the meantime?
   - Workarounds could include providing separate named properties that can be extracted with template string types, but it is very cumbersome
```ts
class SomeElement extends HTMLElement {
set foo(v: this['_set_foo']) {...}
get foo(): number {...}
/** do not use this property, it is for types only */
_set_foo!: 'foo' | 'bar' | number
}
```
With this method, a utility type can now be written that uses template string types to extract the setter type from the non-setter dummy property type, something like this:
```ts
type SomeElementJSXProps = JSXProps<SomeElement, 'fooBar' | 'foo'> // see linked playground below
declare module 'react' {
namespace JSX { interface IntrinsicElements { 'my-el': SomeElementJSXProps } }
}
```
[TypeScript playground example](https://www.typescriptlang.org/play/?target=99#code/JYWwDg9gTgLgBAKjgQwM5wEoFNkGN4BmUEIcA5FDvmQFA24A2a6AsgJ4CiDcWAHjFgB2AE3QAJACosAMlywgh8AN404auAG0ADAF0AhAC44ARgBMAZjgB6K3FQksPflGSCsEAK7oA1ljZwAd0dhCEEyeADXeGBBOBgAC0cAKQBlAA06dTgCCAgAIWQoQxMLa1t7BTgIBKwoODBiMFqYfyDAqLgYuMS4VLS4AApBCDsm3GBkbhhKZBgFQRgASkz1GzgJHr64tia7eM8GYTh45AA3Rxq7LBgBOpbdiAJu4HQGiCbYNlV1AHNr7NyA0WRkEHhAACNanAlJQYB4oLEtABfb5qVD-HIQAanIwJF4aMgAfXRMEJmLIOkW0JRqLgxOuZNyxTI5LgAB9yODCmR2XBQRDais1GsNsl0ttdqh9h5DsczhceiTbhLHI9nq9Gs0vlk-vAuVAgSCwZC6jDrvDESisiS4Prsbj4viiSTGRAKVSlDSsvTSfriuDcgwcLEOfyTTQaSLHegTuhLvcsOg2pEFgC6m8wOguls+A1E6hgKFUDQE3B2Fw+gAFRroAC8vXS1feqAAPOWGAAaciYgpQHkclm5fuc7kAPjowiwjEKjhAEGEMscFCoMB5KiygmQClQYDwYv666y6hitwIe7gAEkFlAYgXcHJ5jB0Iej0eyCA2ABaLAMMhGdtVjWtJZFa6gojSuBFvAABWqC8BIibwPWLYft+DC0pitZKIObpImUVTeLS+pYdMHhYHhawQIRtJrEIqDwo4MSJDeAhHFsGbNMAiZOC8MC0sAwhYQAROCTDxEJoFoi0QZYUokEMNARjLsIZBIiio4tlYqE-uO9BQXAsHwYhphwMh2noVkmFKGY5gUbYVFEYUWE2XZBE0BpWlfjpdCQYIqAwXBCH+ZYZleRZqy2AAAk+368GMMDflAxB1AGEBBq4nToMM8DIHApyTGReUFVgGG5CRUBkZJ+HRagsXxYlyV2NMMQ-JlfLVCgRUMIV+XdSVWTEUoQk2RJ7maeZulrKAYBBo+syFrENhLctK2rWt60bWtdCloBzYthIXaVsAuC+MIADSfjoHwAgiD4fhqtgc7nNWWAEMAvCJi2j0QOcACCuC4Pm0CthIo5dik1y3BIOxYC9b28KOo6mXAtIAPIgMAMBfSuAB0AAi1zIMAQbCJIMhNpm2N4DAONk9Iv03De4IeAIINg+s7NHSdWDnZd45ZAAZHAlaFDAEwMC2XPeNjT2w5Q8Ofd9f0A0DUBs+DkO1NDTRw+9nPHadF1sKgiPbTDcApM1gg-EbqCo4IDBsPtSP1gAqoIC0AOqY9KWMHXyxpQhyqBsBCaW6aW7tez7nh+122tYBIEBK1gLvrE4N2iOsMNJyncAAPx8lg5x1EYEgANxm7sKe6x9IOHfL70Z0IWf+Te1tp4eGhnZ0sS+GwaoSCg6A99dLfoG3LUF5g8g-XLr3vS2Z0NwvCNwEYZ06GX3c6JXNKljXje8Ptze3U17c-Cv8On63Vs-GnQ9j2fAAGAAkSi10i78xAQULYP5SJn7T3-vAMulcSzmxTv9QGqB7Bq2dsjF8mge5dH7oPYecBR78HHjPWWtcWyWwvrbe2jsCGaygAnW2AAxaAzt2YQxuFrGGtckaFzcCXdemCt7rB3hGKujgGFQxhtQ2hoNkZoKeLXKW+0NaMIocwo+EdzZSINjI4WR8b4Tzvp3b43de5wAkenNAmDNFwDfh-I+X8lCT2toA6ePcQTF1qNwiQO897gNLIIphOsNH1mdAyWgQA)
| Suggestion,Awaiting More Feedback | low | Critical |
2,571,234,979 | vscode | Support Ghost Cells In Notebooks - Insert Cells In Edit Mode |
I'd like to be able to build Notebook extensions that insert "ghost cells". Ghost cells are similar to ghost text, except that instead of inserting ghost text at the current cursor position, one or more cells are inserted into the notebook. These cells are rendered with a different font color to provide a clear visual distinction.
Here's a demo video
https://github.com/user-attachments/assets/2d68fbd3-8d3b-43d5-accc-5f17c723dea2
You can largely implement this today using TextDecorations to change the font color of a cell. This works well for code cells. For markup cells this doesn't work, because markup cells get inserted in "preview" mode, not "edit" mode, and text decorations for markup cells only show up in edit mode. You can see this in the video provided above.
One possible way to fix this would be to allow markup cells to be inserted in "edit" mode and/or provide an API to change the mode of a cell. As far as I could tell this currently isn't possible except possibly by changing the focus. In my case I don't want to change the focus because the user is continuing to type into the current cell as the AI is inserting/updating ghost cells.
As a workaround I can do the following (https://github.com/stateful/vscode-runme/pull/1713)
1. Insert markup cells as code cells with language markup
2. When a cell is accepted, replace it with a markup cell
This workaround has a couple of drawbacks:
1. It's confusing to users to render markdown as code cells rather than markup cells
2. When the user accepts the cell it ends up showing in preview mode rather than edit mode
| feature-request | low | Minor |
2,571,240,226 | neovim | Tree-Sitter errors when filetype contains a dot | ### Problem
Neovim (inheriting from Vim) has a seldom used feature for combining multiple filetypes into the `'filetype'` option separated by a dot: https://github.com/neovim/neovim/blob/61f1b091ea97793f9b644cebf6c84cf6bbb4f0bc/runtime/doc/options.txt#L2541-L2546
The runtime files for `eruby` and `liquid`, both templating languages, use the second filetype to decide the syntax of the language the template is generating, combining those runtime files with its own. Outside of those special cases, the default behavior in Vim is to load each language's runtime files in succession, with the first language setting a variable—`b:current_syntax`, `b:did_ftplugin`, `b:did_indent`—that subsequent runtime files use to short-circuit. These short circuits are missing from the syntax files that delegate to Tree-Sitter, producing an error whenever a Tree-Sitter language is used as a secondary syntax.
### Steps to reproduce
```
nvim --clean
:setf sh.lua
```
Neovim 0.10:
```
Error detected while processing FileType Autocommands for "*"..function <SNR>1_LoadFTPlugin[20]..script /opt/homebrew/Cellar/neovim/0.10.2_1/share/nvim/runtime/ftplugin/lua.lua:
E5113: Error while calling lua chunk: ...0.2_1/share/nvim/runtime/lua/vim/treesitter/language.lua:101: 'sh.lua' is not a valid language name
stack traceback:
[C]: in function 'error'
...0.2_1/share/nvim/runtime/lua/vim/treesitter/language.lua:101: in function 'add'
...1/share/nvim/runtime/lua/vim/treesitter/languagetree.lua:111: in function 'new'
...eovim/0.10.2_1/share/nvim/runtime/lua/vim/treesitter.lua:41: in function '_create_parser'
...eovim/0.10.2_1/share/nvim/runtime/lua/vim/treesitter.lua:108: in function 'get_parser'
...eovim/0.10.2_1/share/nvim/runtime/lua/vim/treesitter.lua:416: in function 'start'
...llar/neovim/0.10.2_1/share/nvim/runtime/ftplugin/lua.lua:2: in main chunk
```
Neovim 0.11 HEAD:
```
Error detected while processing FileType Autocommands for "*"..function <SNR>1_LoadFTPlugin[20]..script /home/tpope/Code/vim/neovim/builds/0.11/share/nvim/runtime/ftplugin/lua.lua:
E5113: Error while calling lua chunk: ...im/builds/0.11/share/nvim/runtime/lua/vim/treesitter.lua:426: Parser could not be created for buffer 1 and language "sh"
stack traceback:
[C]: in function 'assert'
...im/builds/0.11/share/nvim/runtime/lua/vim/treesitter.lua:426: in function 'start'
...m/neovim/builds/0.11/share/nvim/runtime/ftplugin/lua.lua:2: in main chunk
```
Neovim 0.11 does fix the reverse case: `filetype=lua.sh`.
### Expected behavior
I would expect silence. For file types like `eruby` and `liquid`, it would of course be nice if they functioned as intended, but I wouldn't expect combining Tree-Sitter with regexp based highlighting to work, and it's not the subject of this bug report.
### Nvim version (nvim -v)
0.10.2, HEAD
### Vim (not Nvim) behaves the same?
no, vim 9.1
### Operating system/version
any
### Terminal name/version
any
### $TERM environment variable
any
### Installation
Homebrew, manual compile | bug,runtime | low | Critical |
2,571,240,693 | ui | [bug]: Toast not displaying | ### Describe the bug
The Toaster component is not displaying on the screen when triggered. Despite following the setup instructions and ensuring the component is correctly imported and used, the toast notifications do not appear.
### Affected component/components
Toast
### How to reproduce
https://github.com/user-attachments/assets/3df8c6e3-c4c8-4e4b-b810-c17344f31eaf
Bug description:
The Toaster component, part of the shadcn/ui library, is not rendering on the page as expected. This issue persists despite using the recommended setup and usage patterns.
1. Set up a Next.js project and import the Toaster component from shadcn/ui.
2. Add the Toaster component to a page and trigger it via a user action (e.g., clicking a button).
3. Observe that no toast notification appears on the screen.
```tsx
import { Button } from "@/components/ui/button";
import Head from "next/head";
import {toast} from "@/components/hooks/use-toast";
import { ToastAction } from "@/components/ui/toast";
const TestPage = () => {
return (
<>
<Head>
<title>Test Page</title>
<meta name="description" content="This is a test page" />
</Head>
<main className="flex flex-col items-center justify-center h-screen">
<h1 className="text-6xl font-bold">Test Page</h1>
<Button
variant="outline"
onClick={() => {
toast({
title: "Scheduled: Catch up ",
description: "Friday, February 10, 2023 at 5:57 PM",
action: (
<ToastAction altText="Goto schedule to undo">Undo</ToastAction>
),
})
}}
>
Add to calendar
</Button>
</main>
</>
)
}
export default TestPage;
```
`layout.tsx`
---
```tsx
import type { Metadata } from "next";
import localFont from "next/font/local";
import "./globals.css";
import { Open_Sans } from 'next/font/google';
import { Toaster } from "@/components/ui/toaster";
interface RootLayoutProps {
children: React.ReactNode;
}
// export default async function RootLayout({
// children,
// }: Readonly<{
// children: React.ReactNode;
// }>) {
export default function RootLayout({ children }: RootLayoutProps) {
return (
<html lang="en">
<body
className={`${geistSans.variable} ${geistMono.variable} ${geistOpenSans.variable} ${geistGilroy.variable} antialiased`}
>
{children}
<Toaster/>
</body>
</html>
);
}
```
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
No errors output.
```
### System Info
```bash
macOS 15.0.1 (24A348)
React Version 18.3.1
Next.js Version 14.2.14
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,571,243,548 | rust | Broken MIR in DefId (TerminatorKind::Call with GATs) |
### Code
```Rust
struct Ref<T>(T);
trait Reference: 'static {
type Ref<'a>;
fn get_ref(&self) -> Self::Ref<'_>;
}
trait Lock: 'static {
type Locked<'a>;
fn locked(&self) -> Self::Locked<'_>;
}
struct SliceRef<'a, T: ?Sized> {
inner: &'a T
}
impl<'a, 'b, T: ?Sized, SR: Reference> IntoIterator for &'b SliceRef<'a, T> where &'a T: IntoIterator<Item=&'a SR> {
type Item = SR::Ref<'a>;
type IntoIter = std::iter::Map<<&'a T as IntoIterator>::IntoIter, for<'c> fn(&'c SR) -> SR::Ref<'c>>;
fn into_iter(self) -> Self::IntoIter {
self.inner.into_iter().map(|cr| { cr.get_ref() })
}
}
impl<SR: Reference> Reference for Vec<SR> {
type Ref<'a> = SliceRef<'a, [SR]>;
fn get_ref(&self) -> Self::Ref<'_> {
SliceRef {
inner: &**self,
}
}
}
impl<SR: Reference> Lock for Ref<SR> {
type Locked<'a> = SR::Ref<'a>;
fn locked(&self) -> Self::Locked<'_> {
self.0.get_ref()
}
}
impl Reference for () {
type Ref<'a> = &'a ();
fn get_ref(&self) -> Self::Ref<'_> {
unimplemented!()
}
}
fn main() {
let data = Ref(Vec::<()>::new());
let _ = (&data.locked()).into_iter();
}
```
### Meta
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: x86_64-pc-windows-msvc
release: 1.81.0
LLVM version: 18.1.7
```
### Error output
```
error: internal compiler error: broken MIR in DefId(0:43 ~ tester[0e95]::main) (Terminator { source_info: SourceInfo { span: src/main.rs:50:13: 50:41 (#0), scope: scope[1] }, kind: _3 = <&SliceRef<'_, [()]> as IntoIterator>::into_iter(move _4) -> [return: bb4, unwind: bb6] }): call dest mismatch (std::iter::Map<std::slice::Iter<'?7, ()>, Binder { value: fn(&'^0.Named(DefId(0:26 ~ tester[0e95]::{impl#0}::IntoIter::'c), "'c") ()) -> &'^0.Named(DefId(0:26 ~ tester[0e95]::{impl#0}::IntoIter::'c), "'c") (), bound_vars: [Region(BrNamed(DefId(0:26 ~ tester[0e95]::{impl#0}::IntoIter::'c), 'c))] }> <- std::iter::Map<std::slice::Iter<'?6, ()>, Binder { value: fn(&'^0.Named(DefId(0:26 ~ tester[0e95]::{impl#0}::IntoIter::'c), "'c") ()) -> Alias(Projection, AliasTy { args: [(), '^0.Named(DefId(0:26 ~ tester[0e95]::{impl#0}::IntoIter::'c), "'c")], def_id: DefId(0:8 ~ tester[0e95]::Reference::Ref) }), bound_vars: [Region(BrNamed(DefId(0:26 ~ tester[0e95]::{impl#0}::IntoIter::'c), 'c))] }>): NoSolution
--> src/main.rs:50:13
|
50 | let _ = (&data.locked()).into_iter();
| ^^^^^^^^^^^^^^^^
|
note: delayed at compiler\rustc_borrowck\src\type_check\mod.rs:1571:21
0: std::backtrace_rs::backtrace::dbghelp64::trace
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\std\src\..\..\backtrace\src\backtrace\dbghelp64.rs:91
1: std::backtrace_rs::backtrace::trace_unsynchronized
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\std\src\..\..\backtrace\src\backtrace\mod.rs:66
2: std::backtrace::Backtrace::create
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\std\src\backtrace.rs:331
3: std::backtrace::Backtrace::capture
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\std\src\backtrace.rs:296
4: <rustc_errors::DiagCtxtHandle>::steal_fulfilled_expectation_ids
5: <rustc_errors::DiagCtxtHandle>::emit_diagnostic
6: <rustc_span::ErrorGuaranteed as rustc_errors::diagnostic::EmissionGuarantee>::emit_producing_guarantee
7: <rustc_pattern_analysis::errors::NonExhaustiveOmittedPatternLintOnArm as rustc_errors::diagnostic::LintDiagnostic<()>>::decorate_lint
8: <rustc_borrowck::type_check::TypeChecker>::push_region_constraints
9: rustc_borrowck::dataflow::calculate_borrows_out_of_scope_at_location
10: rustc_borrowck::dataflow::calculate_borrows_out_of_scope_at_location
11: <rustc_borrowck::type_check::TypeChecker>::push_region_constraints
12: rustc_borrowck::mir_borrowck
13: rustc_query_impl::plumbing::query_key_hash_verify_all
14: rustc_ty_utils::ty::self_ty_of_trait_impl_enabling_order_dep_trait_object_hack
15: rustc_query_impl::plumbing::query_key_hash_verify_all
16: rustc_interface::passes::analysis
17: rustc_ty_utils::ty::adt_sized_constraint
18: rustc_ty_utils::ty::adt_sized_constraint
19: rustc_query_impl::query_system
20: _LNan_C
21: _LNan_C
22: _LNan_C
23: alloc::boxed::impl$48::call_once
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\alloc\src\boxed.rs:2070
24: alloc::boxed::impl$48::call_once
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\alloc\src\boxed.rs:2070
25: std::sys::pal::windows::thread::impl$0::new::thread_start
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\std\src\sys\pal\windows\thread.rs:58
26: BaseThreadInitThunk
27: RtlUserThreadStart
--> src/main.rs:50:13
|
50 | let _ = (&data.locked()).into_iter();
| ^^^^^^^^^^^^^^^^
```
<details><summary><strong>Backtrace</strong></summary>
<p>
```
Compiling tester v0.1.0 (C:\Users\BLUEM\rust\tester)
note: no errors encountered even though delayed bugs were created
note: those delayed bugs will now be shown as internal compiler errors
error: internal compiler error: broken MIR in DefId(0:43 ~ tester[0e95]::main) (Terminator { source_info: SourceInfo { span: src/main.rs:50:13: 50:41 (#0), scope: scope[1] }, kind: _3 = <&SliceRef<'_, [()]> as IntoIterator>::int
o_iter(move _4) -> [return: bb4, unwind: bb6] }): call dest mismatch (std::iter::Map<std::slice::Iter<'?7, ()>, Binder { value: fn(&'^0.Named(DefId(0:26 ~ tester[0e95]::{impl#0}::IntoIter::'c), "'c") ()) -> &'^0.Named(DefId(0:26
~ tester[0e95]::{impl#0}::IntoIter::'c), "'c") (), bound_vars: [Region(BrNamed(DefId(0:26 ~ tester[0e95]::{impl#0}::IntoIter::'c), 'c))] }> <- std::iter::Map<std::slice::Iter<'?6, ()>, Binder { value: fn(&'^0.Named(DefId(0:26 ~
tester[0e95]::{impl#0}::IntoIter::'c), "'c") ()) -> Alias(Projection, AliasTy { args: [(), '^0.Named(DefId(0:26 ~ tester[0e95]::{impl#0}::IntoIter::'c), "'c")], def_id: DefId(0:8 ~ tester[0e95]::Reference::Ref) }), bound_vars: [Region(BrNamed(DefId(0:26 ~ tester[0e95]::{impl#0}::IntoIter::'c), 'c))] }>): NoSolution
--> src/main.rs:50:13
|
50 | let _ = (&data.locked()).into_iter();
| ^^^^^^^^^^^^^^^^
|
note: delayed at compiler\rustc_borrowck\src\type_check\mod.rs:1571:21
0: std::backtrace_rs::backtrace::dbghelp64::trace
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\std\src\..\..\backtrace\src\backtrace\dbghelp64.rs:91
1: std::backtrace_rs::backtrace::trace_unsynchronized
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\std\src\..\..\backtrace\src\backtrace\mod.rs:66
2: std::backtrace::Backtrace::create
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\std\src\backtrace.rs:331
3: std::backtrace::Backtrace::capture
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\std\src\backtrace.rs:296
4: <rustc_errors::DiagCtxtHandle>::steal_fulfilled_expectation_ids
5: <rustc_errors::DiagCtxtHandle>::emit_diagnostic
6: <rustc_span::ErrorGuaranteed as rustc_errors::diagnostic::EmissionGuarantee>::emit_producing_guarantee
7: <rustc_pattern_analysis::errors::NonExhaustiveOmittedPatternLintOnArm as rustc_errors::diagnostic::LintDiagnostic<()>>::decorate_lint
8: <rustc_borrowck::type_check::TypeChecker>::push_region_constraints
9: rustc_borrowck::dataflow::calculate_borrows_out_of_scope_at_location
10: rustc_borrowck::dataflow::calculate_borrows_out_of_scope_at_location
11: <rustc_borrowck::type_check::TypeChecker>::push_region_constraints
12: rustc_borrowck::mir_borrowck
13: rustc_query_impl::plumbing::query_key_hash_verify_all
14: rustc_ty_utils::ty::self_ty_of_trait_impl_enabling_order_dep_trait_object_hack
15: rustc_query_impl::plumbing::query_key_hash_verify_all
16: rustc_interface::passes::analysis
17: rustc_ty_utils::ty::adt_sized_constraint
18: rustc_ty_utils::ty::adt_sized_constraint
19: rustc_query_impl::query_system
20: _LNan_C
21: _LNan_C
22: _LNan_C
23: alloc::boxed::impl$48::call_once
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\alloc\src\boxed.rs:2070
24: alloc::boxed::impl$48::call_once
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\alloc\src\boxed.rs:2070
25: std::sys::pal::windows::thread::impl$0::new::thread_start
at /rustc/eeb90cda1969383f56a2637cbd3037bdf598841c/library\std\src\sys\pal\windows\thread.rs:58
26: BaseThreadInitThunk
27: RtlUserThreadStart
--> src/main.rs:50:13
|
50 | let _ = (&data.locked()).into_iter();
| ^^^^^^^^^^^^^^^^
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.81.0 (eeb90cda1 2024-09-04) running on x86_64-pc-windows-msvc
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
end of query stack
error: could not compile `tester` (bin "tester")
```
</p>
</details>
| I-ICE,T-compiler,C-bug,S-bug-has-test,T-types,fixed-by-next-solver,A-GATs | low | Critical |
2,571,246,921 | pytorch | Custom Ops + torch.compile bug/clarification | ### 🐛 Describe the bug
I'm trying to register a custom CPP op with torch.library instead of the "old" way of directly calling the dlopened handle from an autograd function. I followed the [custom ops tutorial](https://pytorch.org/tutorials/advanced/python_custom_ops.html).
The issue I'm facing is that even though I'm calling `.contiguous()` on inputs to the custom op, the generated graph will have "incorrect" strides, and the contiguous checks in the op will end up failing.
Input tensors are of shape `[2, 4, 7, 7, 32]`, and with pytorch's contiguous memory layout, that should translate into strides of `[6272, 1568, 224, 32, 1]`. However, the generated graph will produce those buffers with strides of `[6400, 1600, 224, 32, 1]`.
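For reference, the expected strides follow from the standard row-major rule (each dimension's stride is the product of all extents to its right) — a quick plain-Python check, no torch needed:

```python
def contiguous_strides(shape):
    """Row-major ("contiguous") strides: strides[i] = product of shape[i+1:]."""
    strides = [1] * len(shape)
    for i in range(len(shape) - 2, -1, -1):
        strides[i] = strides[i + 1] * shape[i + 1]
    return strides

print(contiguous_strides([2, 4, 7, 7, 32]))  # [6272, 1568, 224, 32, 1]
```

The observed `[6400, 1600, 224, 32, 1]` differs only in the two outermost strides (1600 vs 1568, i.e. 32 extra elements per `[7, 7, 32]` block), which is consistent with padding being inserted for alignment.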
Here is a minimal reproducible script that does not include any dependencies or the custom CPP op. I'm hoping I can be pointed in the right direction as to what I'm doing wrong:
https://gist.github.com/alihassanijr/f211960a122dbf5c182b72e9e57aa3d9
If I had to guess, I'd say the GEMM preceding the `.contiguous()` calls and the custom op is padding the input for memory alignment, and somewhere something's breaking, but even if that assumption is true, it doesn't really give me any ideas on what to try next.
A no-brainer is that I could call `.contiguous()` on the tensors inside the custom op, but then I don't know how to write a backward pass for them. Doing so will obviously work around the runtime issue, but if I train models with it, there's a very noticeable error.
I tried using `clone` and `copy`, but the strides emitted in the final graph are still wrong.
I've tried this with both torch `2.4.0` and `2.4.1`, both with CTK 12.4 and on an A100-SXM.
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.25.3
Libc version: glibc-2.35
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
GPU 4: NVIDIA A100-SXM4-80GB
GPU 5: NVIDIA A100-SXM4-80GB
GPU 6: NVIDIA A100-SXM4-80GB
GPU 7: NVIDIA A100-SXM4-80GB
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 256
On-line CPU(s) list: 0-254
Off-line CPU(s) list: 255
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7763 64-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 3529.0520
CPU min MHz: 0.0000
BogoMIPS: 4900.35
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 4 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 64 MiB (128 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-63,128-191
NUMA node1 CPU(s): 64-127,192-254
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] mypy==1.8.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.3
[pip3] torch==2.4.0+cu124
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.19.0+cu124
[pip3] triton==3.0.0
[conda] numpy 1.26.3 pypi_0 pypi
[conda] torch 2.4.0+cu124 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.19.0+cu124 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @zou3519 @bdhirsh | triaged,module: custom-operators,oncall: pt2,module: inductor,module: pt2-dispatcher | low | Critical |
2,571,260,682 | flutter | I wish there was a bot that re-opened issues closed by forks | As one particular example, in https://github.com/flutter/flutter/issues/155131#event-14545254213, a merged fork closed an issue I intentionally re-opened. If I had not gotten lucky that it was a Monday morning and I had a small inbox, I would have probably missed this and never re-opened it.
Apparently this is a bug/mis-feature of the `Closes XYZ` and `Fixes XYZ` syntax [best described as](https://github.com/orgs/community/discussions/17308#discussioncomment-6842022):
> This feature is train wreck, it's basically an advertisement for JIRA.
Some conversation from an offline chat without names attached:
> Yeah it's really annoying, it basically makes me never want to use the "fixes <>" syntax and instead just close things manually
>
> otherwise if you have to re open, it gets re closed whenever people sync their fork on github I think
> I'm only for it if that theoretical bot linked the issue to the PR so it still closed. This situation is bad but only happens with reverts until they are closed for good. There are many many times more bugs are closed correctly, and it would be a load to add on our triagers to dig through open issues to search ones that should have been closed. Plus I was told engine reverts are soon to be far less common an occurrence
It would be nice to have guidance on what we expect our team to do, i.e. should we just be OK with issues randomly being closed? Should triage check recently closed issues (somehow) and verify they were closed correctly? Should we invest in more automation? Should we move our bug tracker to Bugzilla (mostly kidding)? | team-infra,P2,c: tech-debt,triaged-infra,monorepo | low | Critical |
2,571,261,991 | three.js | BatchedMesh example sortObjects bugs on Safari iOS WebGPU | ### Description
On iOS it seems the BatchedMesh example has 3 bugs:
- indexing problem (my guess): always present
- sorting problem: `sortObjects = false` reduces the visibility of the bug
- perInstanceFrustumCulled problem: `perInstanceFrustumCulled = false` reduces the visibility of the bug
https://github.com/user-attachments/assets/9326e024-0f05-4332-a35c-ceb7e00d1e52
Note: `sortObjects = true` also leads to a bug on Android Chrome on one of my current missions, but I can't figure out what the condition of the bug is, as the project is complex and involves multiple BatchedMesh / custom nodes (but `sortObjects = false` solves the bugs I got on Android). I'll dive more into it when I get the time.
Note 2: on the same project on iOS, it feels like the index of the BatchedMesh is slowly degenerating. This second one might be related to: https://github.com/mrdoob/three.js/issues/29379, but I prefer to note it here as it might be linked.
Related to https://github.com/mrdoob/three.js/issues/29041, but I preferred to create a new issue.
@mwyrzykowski
### Reproduction steps
1. get iOS 18.0 / 18.1
2. enabled flag WebGPU on Safari
3. Open the example with webGPU: https://threejs.org/examples/webgpu_mesh_batch.html
### Code
https://github.com/mrdoob/three.js/blob/master/examples/webgpu_mesh_batch.html
### Live example
https://threejs.org/examples/webgpu_mesh_batch.html
### Screenshots
_No response_
### Version
r169
### Device
Mobile
### Browser
Safari
### OS
iOS | Browser Issue | low | Critical |
2,571,351,202 | Python | incorrect union method implementation in fuzzy operations | null | enhancement | medium | Minor |
2,571,353,838 | pytorch | `torch.pow` throws with `RuntimeError: "reciprocal_cuda" not implemented for 'Char'` | ### 🐛 Describe the bug
Something like this:
```
import torch

t_0 = torch.tensor([1, 2, 3, 4], device="cuda", dtype=torch.int8)
t_1 = torch.tensor(-1, dtype=torch.int8)
torch.pow(t_0, t_1)
```
would trigger `RuntimeError: "reciprocal_cuda" not implemented for 'Char'`.
I think we need to check the input dtypes [here](https://github.com/pytorch/pytorch/blob/fe44b6a67f32b562c88701b630e65b62ce1b63ba/aten/src/ATen/native/cuda/PowKernel.cu#L177) and disable the short-cut for integral dtypes.
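To make the intended check concrete, here is a plain-Python sketch of the dispatch logic (hypothetical shape, not the actual ATen code — the kernel names and the exact error text are assumptions for illustration):

```python
def pow_dispatch(base_dtype_is_integral: bool, exponent: int) -> str:
    """Sketch: the exponent == -1 fast path (pow(x, -1) == reciprocal(x))
    is only valid for floating dtypes, so guard it on the base dtype
    instead of unconditionally taking the reciprocal shortcut."""
    if base_dtype_is_integral and exponent < 0:
        # hypothetical: reject up front rather than reaching a reciprocal
        # kernel that has no integral instantiation
        raise RuntimeError("Integers to negative integer powers are not allowed.")
    if exponent == -1:
        return "reciprocal kernel"  # safe: base is floating here
    return "generic pow kernel"
```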
Happy to start a PR but where should I throw a test for this?
### Versions
git3346159
cc @ptrblck @msaroufim @manuelcandales @SherlockNoMad @angelayi | module: cuda,low priority,triaged,actionable,module: core aten | low | Critical |
2,571,355,118 | go | proposal: unicode: Graphemes and GraphemesReversed | ### Proposal Details
I propose to add the functions Graphemes and GraphemesReversed to the packages ~~strings and bytes~~ unicode [Edited by adonovan, Oct 8].
#14820 proposes to add such functionality to /x/text, but iterators have made these functions easier to write and use, so I think they belong in std.
First, a few tables are needed, but these are worth their space.
These mostly consist of other tables, so this can be optimized. https://www.unicode.org/reports/tr29/#Grapheme_Cluster_Break_Property_Values
Also, these can be used in regexp; for example, this regex can be used to match a grapheme: `\p{gcb=CR}\p{gcb=LF}|\p{gcb=Control}|\p{gcb=Prepend}*(((\p{gcb=L}*(\p{gcb=V}+|\p{gcb=LV}\p{gcb=V}*|\p{gcb=LVT})\p{gcb=T}*)|\p{gcb=L}+|\p{gcb=T}+)|\p{gcb=RI}\p{gcb=RI}|\p{Extended_Pictographic}(\p{gcb=Extend}*\p{gcb=ZWJ}\p{Extended_Pictographic})*|[^\p{gcb=Control}\p{gcb=CR}\p{gcb=LF}])[\p{gcb=Extend}\p{gcb=ZWJ}\p{gcb=SpacingMark}]*|\p{any}`.
```go
// https://www.unicode.org/Public/UCD/latest/ucd/auxiliary/GraphemeBreakProperty.txt
var PREPEND = &unicode.RangeTable{}
var CONTROL = &unicode.RangeTable{}
var EXTEND = &unicode.RangeTable{}
var SPACING_MARK = &unicode.RangeTable{}
var REGIONAL_INDICATOR = &unicode.RangeTable{}
var L = &unicode.RangeTable{}
var V = &unicode.RangeTable{}
var T = &unicode.RangeTable{}
var LV = &unicode.RangeTable{}
var LVT = &unicode.RangeTable{}
// https://www.unicode.org/Public/16.0.0/ucd/emoji/emoji-data.txt
var EXTENDED_PICTOGRAPHIC = &unicode.RangeTable{}
// https://www.unicode.org/Public/UCD/latest/ucd/DerivedCoreProperties.txt
var INCB_LINKER = &unicode.RangeTable{}
var INCB_CONSONANT = &unicode.RangeTable{}
var INCB_EXTEND = &unicode.RangeTable{}
```
These are some constants and a helper function.
```go
func RunesReversed(s string) iter.Seq2[int, rune] {
return func(yield func(int, rune) bool) {
for i := len(s); i > 0; {
r, size := utf8.DecodeLastRuneInString(s[0:i])
i -= size
if !yield(i, r) {
return
}
}
}
}
const LF = '\n'
const CR = '\r'
const ZWJ = '\u200d'
```
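For comparison, the same reverse UTF-8 decoding idea behind the `RunesReversed` helper above can be sketched in Python (operating on the encoded bytes; assumes valid UTF-8 input):

```python
def runes_reversed(b: bytes):
    """Yield (byte_index, char) pairs walking a UTF-8 string backwards,
    mirroring the RunesReversed helper above."""
    i = len(b)
    while i > 0:
        j = i - 1
        # Skip back over UTF-8 continuation bytes (0b10xxxxxx).
        while j > 0 and (b[j] & 0xC0) == 0x80:
            j -= 1
        yield j, b[j:i].decode("utf-8")
        i = j

print(list(runes_reversed("héj".encode())))  # [(3, 'j'), (1, 'é'), (0, 'h')]
```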
And then you can concatenate the files and generate these tables:
```go
var re = regexp.MustCompile(`(?m)^(?<startRange>[[:xdigit:]]+)\.{0,2}(?<endRange>[[:xdigit:]]+)?\s*;\s*(?<property>\w+)(?:\s*;\s*(?<subProperty>\w+))?`)
func gen() {
matches := re.FindAllStringSubmatch(GBP_TXT, -1)
for _, match := range matches {
startRange, err1 := strconv.ParseUint(match[1], 16, 32)
endRange, err2 := strconv.ParseUint(match[2], 16, 32)
if err1 != nil {
panic("should not be")
}
if err2 != nil {
endRange = startRange
}
newRange := make([]rune, 0)
for r := startRange; r <= endRange; r++ {
newRange = append(newRange, rune(r))
}
var rangeTable *unicode.RangeTable
switch match[3] {
case "InCb":
switch match[4] {
case "Consonant":
rangeTable = INCB_CONSONANT
case "Extend":
rangeTable = INCB_EXTEND
case "Linker":
rangeTable = INCB_LINKER
default:
continue
}
case "Prepend":
rangeTable = PREPEND
case "Control":
rangeTable = CONTROL
case "Extend":
rangeTable = EXTEND
case "Regional_Indicator":
rangeTable = REGIONAL_INDICATOR
case "SpacingMark":
rangeTable = SPACING_MARK
case "L":
rangeTable = L
case "V":
rangeTable = V
case "T":
rangeTable = T
case "LV":
rangeTable = LV
case "LVT":
rangeTable = LVT
case "Extended_Pictographic":
rangeTable = EXTENDED_PICTOGRAPHIC
default:
continue
}
newRangeTable := rangetable.New(newRange...)
*rangeTable = *rangetable.Merge(rangeTable, newRangeTable)
}
}
```
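For reference, the lines the generator parses out of the concatenated UCD files have the shape `XXXX..YYYY ; Property` with an optional sub-property after a second semicolon. A minimal Python check of the same pattern (the two sample lines are illustrative, not copied from the actual files):

```python
import re

line_re = re.compile(
    r"^(?P<start>[0-9A-F]+)(?:\.\.(?P<end>[0-9A-F]+))?"
    r"\s*;\s*(?P<prop>\w+)(?:\s*;\s*(?P<sub>\w+))?"
)

# Two illustrative lines in the same shape as the UCD data files:
m1 = line_re.match("1F1E6..1F1FF  ; Regional_Indicator # So [26] ...")
m2 = line_re.match("0300..034E    ; InCb; Extend # ...")
print(m1.group("start"), m1.group("end"), m1.group("prop"))  # 1F1E6 1F1FF Regional_Indicator
print(m2.group("prop"), m2.group("sub"))                     # InCb Extend
```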
I created a sample implementation that focuses on easy readability and understandability, so I left out caching and ASCII optimizations.
```go
func Graphemes(s string) iter.Seq2[int, string] {
return func(yield func(int, string) bool) {
if s == "" {
return
}
currIdx := 0
lowIdx := 0
currSize := 0
loop:
for ; ; currIdx += currSize {
currRune, size := utf8.DecodeRuneInString(s[currIdx:])
currSize = size
nextIdx := currIdx + currSize
if nextIdx >= len(s) {
goto ret
}
nextRune, _ := utf8.DecodeRuneInString(s[nextIdx:])
if currRune == CR && nextRune == LF {
continue
}
if unicode.Is(CONTROL, currRune) || currRune == CR || currRune == LF {
goto isBreak
}
if unicode.Is(CONTROL, nextRune) || nextRune == CR || nextRune == LF {
goto isBreak
}
switch {
case unicode.In(currRune, L) && unicode.In(nextRune, L, V, LV, LVT):
continue
case unicode.In(currRune, L, V) && unicode.In(nextRune, V, T):
continue
case unicode.In(currRune, LVT, T) && unicode.In(nextRune, T):
continue
}
if unicode.Is(EXTEND, nextRune) || nextRune == ZWJ || unicode.Is(SPACING_MARK, nextRune) {
continue
}
if unicode.Is(PREPEND, currRune) {
continue
}
if unicode.Is(INCB_CONSONANT, nextRune) {
inCbLinkerCount := 0
for _, prevRune := range RunesReversed(s[:nextIdx]) {
if unicode.Is(INCB_LINKER, prevRune) {
inCbLinkerCount += 1
continue
}
if unicode.Is(INCB_EXTEND, prevRune) {
continue
}
if unicode.Is(INCB_CONSONANT, prevRune) && inCbLinkerCount > 0 {
continue loop
}
break
}
}
if currRune == ZWJ && unicode.Is(EXTENDED_PICTOGRAPHIC, nextRune) {
for _, prevRune := range RunesReversed(s[:currIdx]) {
if unicode.Is(EXTENDED_PICTOGRAPHIC, prevRune) {
continue loop
}
if unicode.Is(EXTEND, prevRune) {
continue
}
break
}
}
if unicode.Is(REGIONAL_INDICATOR, currRune) && unicode.Is(REGIONAL_INDICATOR, nextRune) {
riCount := 1
for _, prevRune := range RunesReversed(s[:currIdx]) {
if unicode.Is(REGIONAL_INDICATOR, prevRune) {
riCount += 1
continue
}
break
}
if riCount%2 != 0 {
continue
}
}
goto isBreak
isBreak:
if !yield(lowIdx, s[lowIdx:nextIdx]) {
return
}
lowIdx = nextIdx
}
ret:
yield(lowIdx, s[lowIdx:])
}
}
func GraphemesReversed(s string) iter.Seq2[int, string] { /* The same, but backwards */}
``` | Proposal | low | Major |
2,571,406,146 | ui | [bug]: Charts > Radial Chart > Tooltip not showing up until half of the width of the first circle | ### Describe the bug
When the cursor hovers over the first circle on the radial chart, the tooltip does not show. It only appears once the cursor is past the halfway point of the circle, which is bad UX as the circle is usually only about 5 pixels wide.
### Affected component/components
Charts
### How to reproduce
1. Go to the official website: https://ui.shadcn.com/charts#radial-chart
2. Hover your mouse over the first (blue) circle of the Radial Chart and see how the tooltip behaves
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
All browsers
Windows 11
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,571,421,015 | angular | ng-content doesn't project deferrable views | ### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
No
### Description
It is not possible to use content projection based on CSS selectors when the projected content is in a deferrable view.
But it works when using `contentChildren` with structural directives and `ngTemplateOutlet`.
I expected both approaches to work the same with regard to deferrable views, and it would be really nice to be able to use `@defer` in simple CSS-selector-based content projection as well.
### Please provide a link to a minimal reproduction of the bug
https://stackblitz.com/edit/stackblitz-starters-ty28hm?file=src%2Fmain.ts
### Please provide the exception or error you saw
_No response_
### Please provide the environment you discovered this bug in (run `ng version`)
Angular v18
### Anything else?
_No response_ | area: core,core: content projection,P3,bug,core: defer | low | Critical |
2,571,421,569 | godot | --headless --import does not work as expected if localization data is present | ### Tested versions
- Reproducible in 4.2.2.stable.official
### System information
macOS 14.6.1 - GLES3 (Compatibility) - Apple M1 - Apple M1 (8 Threads)
### Issue description
When using --headless --import with a project that has localization files connected (.en.translation), a single run is not sufficient to import all assets. A second, subsequent run with the same (--headless --import) parameters is required. I suspect the presence of localization causes the initial import to fail or early-out, so the second run can progress further until complete. This could be related to the number of localization files, I only tested with none and 1.
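Until this is fixed, a hedged workaround for CI scripts is simply to run the import pass twice. A minimal Python sketch (`--headless`, `--import`, and `--path` are standard Godot CLI flags; the helper names are my own):

```python
import subprocess

def import_cmd(godot_bin: str, project_dir: str) -> list[str]:
    # Standard Godot CLI flags; the project directory is passed via --path.
    return [godot_bin, "--headless", "--import", "--path", project_dir]

def headless_import(godot_bin: str, project_dir: str, passes: int = 2) -> None:
    """Run the import step twice, since a single pass may early-out
    when localization files are present (the behavior described above)."""
    for _ in range(passes):
        subprocess.run(import_cmd(godot_bin, project_dir), check=True)
```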
### Steps to reproduce
Create any Godot project. Add some resources like textures, so you can see them when you run your project.
Add a localization file
Clear all .godot folders
Run with --headless --import
Run directly and notice the textures are missing
Clear all .godot folders
Run with --headless --import
Run with --headless --import a second time
Run directly and notice the textures are present
Clear all .godot folders
Remove the localization files
Run with --headless --import
Run directly and notice the textures are present
### Minimal reproduction project (MRP)
[GodotImportTest.zip](https://github.com/user-attachments/files/17284237/GodotImportTest.zip)
| bug,topic:editor,topic:import | low | Minor |
2,571,427,028 | godot | .blend files import vertex colors inconsistently | ### Tested versions
- Godot 4.3.stable
- Blender 4.2.2 LTS
### System information
MacOS 15.0.1 - M1 Pro - Vulkan (Forward+)
### Issue description
Godot's Blender file importer inconsistently imports vertex colors on a per-mesh basis.
You CAN get around this by using the .glb exporter in Blender, and selecting "Use Vertex Color: Active," however, there is no such setting or default behavior for importing .blend files.

Advanced import view for the example .blend file (note the white meshes with missing color)

Advanced import view for an .glb export of the example .blend file (with colors working as intended):

### Steps to reproduce
After countless hours of testing and troubleshooting, I honestly don't know how to get around this without that .glb export setting. My **guesses** have been:
- Separate meshes that share the same material (and thus try to access the "same" color attribute across multiple meshes) don't import their colors
- Materials with specified OR unspecified color attributes using the Color Attribute material node
- Specifying "Color Attribute Domain/Data Type" in Blender, (trying combos of Vertex Color, Face Corner, Byte Color)
However, I haven't been able to identify any consistent behaviors.
### Minimal reproduction project (MRP)
The MRP contains the .blend file, .glb export, and a basic 3D scene with the models placed down.
[blender-vertex-color-mrp.zip](https://github.com/user-attachments/files/17284235/blender-vertex-color-mrp.zip) | topic:import | low | Minor |
2,571,469,562 | react-native | 0.75.4: Local image not displayed by <Image /> when extension is not explicit | ### Description
My app downloads different PNGs and saves them without an explicit extension in order to show them later through the `<Image />` React Native component (you can find an example [here](https://github.com/user-attachments/files/17284137/test.zip)). This was working fine, but starting from React Native 0.73.0 the image is no longer displayed correctly.
Adding an explicit extension to the file (e.g. manually appending .png) seems to fix it, but my app has several images already downloaded, so it's not easy to handle existing users. IMO it's still an unwanted issue that should be fixed, considering it's a breaking change.
### Steps to reproduce
1. Install the application from the [reproducer](https://github.com/fbp93/RN75/tree/master)
2. Run the app, check the directory path in the logs
3. copy the testing [images](https://github.com/user-attachments/files/17284380/images.zip) in the directory at that path
4. update the `path` var in App.tsx with the path of the directory
5. Run the app and see that only one image (the one with the explicit extension) is displayed
### React Native Version
0.75.4
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 14.6.1
CPU: (10) arm64 Apple M1 Pro
Memory: 133.64 MB / 32.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 18.12.1
path: ~/Library/Caches/fnm_multishells/60602_1728334021609/bin/node
Yarn:
version: 3.6.4
path: /usr/local/bin/yarn
npm:
version: 8.19.2
path: ~/Library/Caches/fnm_multishells/60602_1728334021609/bin/npm
Watchman:
version: 2024.09.23.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods: Not Found
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.0
- iOS 18.0
- macOS 15.0
- tvOS 18.0
- visionOS 2.0
- watchOS 11.0
Android SDK: Not Found
IDEs:
Android Studio: 2022.3 AI-223.8836.35.2231.10671973
Xcode:
version: 16.0/16A242
path: /usr/bin/xcodebuild
Languages:
Java: Not Found
Ruby:
version: 3.0.0
path: /Users/work/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli": Not Found
react: Not Found
react-native: Not Found
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: false
(node:68279) ExperimentalWarning: The Fetch API is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
```
### Stacktrace or Logs
```text
No stacktrace
```
### Reproducer
https://github.com/fbp93/RN75/tree/master
### Screenshots and Videos
_No response_ | Resolution: Fixed,Component: Image | medium | Major |
2,571,485,327 | godot | Nodes created through gdscript have invalid names by default. | ### Tested versions
Reproducible in v4.4.dev3.official [f4af8201b] and v4.3.stable.official [77dcf97d8]
### System information
Godot v4.4.dev3 - Windows 10.0.22631 - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2060 SUPER (NVIDIA; 32.0.15.6094) - Intel(R) Core(TM) i7-9700 CPU @ 3.00GHz (8 threads)
### Issue description
When creating a new node from GDScript, the node is created with an invalid name.
It would be nice if nodes got proper names, as when creating them from the editor's node menu.
https://github.com/user-attachments/assets/32791e14-0c9a-4136-8be3-89bd679f0cd8
### Steps to reproduce
Make a script that creates a new node, then print its name.
### Minimal reproduction project (MRP)
[invalidname.zip](https://github.com/user-attachments/files/17284529/invalidname.zip)
| discussion,topic:core | low | Minor |
2,571,521,432 | godot | Problems dragging node to floating script editor | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.19045 - GLES3 (Compatibility) - AMD Radeon RX 6600 (Advanced Micro Devices, Inc.; 31.0.24033.1003) - Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz (8 Threads)
### Issue description
Usually it's possible to drag a node into a script, so it writes out the $name of the node in the script.
But if you click the "Make the script editor floating" button and tile the windows (or move the script editor to a second screen):
(a) when you drag a node into script, there's no floating node name, and no cursor, so you don't know where the $name will be written
And if the script window has focus before you drag the node:
(b) you can't drop the $name into the script at all, and
(c) even if you release the mouse button, you're still dragging the node if you move the mouse back to the Godot Engine window
(this was touched on in issue https://github.com/godotengine/godot/issues/86487 but there the person was using ALT-TAB between windows, and it doesn't list all the problems)
### Steps to reproduce
Open a new project
Create Root Node: 2D Scene
Click add script > create
#### What happens (correctly) when the editor is not floating
Click somewhere in the script
Drag the Node2D node over the script
you will see a floating "Node2D" and a cursor
release the mouse
$"." will appear at the cursor
Delete $"."
#### What happens when the editor is floating but does not have focus
Click the "Make the script editor floating" icon
Move the Script Editor to a second screen (or tile the windows on the same screen)
Click on the title bar of the Godot Engine window, so it has focus
Drag the Node2D node over the script
There is no floating "Node2D" **(bug)** and no cursor **(bug)**
Position the mouse somewhere in the script, perhaps in the middle of a comment
Release the mouse
$"." will appear (perhaps unexpectedly) at the mouse position
Delete $"."
#### What happens when the editor is floating and has focus
Click somewhere in the script, so that the script editor window has focus
Drag the Node2D node over the script
There is no floating "Node2D" and no cursor
Position the mouse somewhere in the script
Release the mouse - the node does not appear **(bug)**
Move the mouse over the Godot Engine window
You are still dragging the Node2D, even though you are not holding down the mouse button **(bug)**
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,usability | low | Critical |
2,571,546,096 | PowerToys | Keyboard Manager: Add "Shortcut" as a Destination for Remap to allow Remapping to ONLY a Shortcut | ### Description of the new feature / enhancement
For cases where you wish to remap a key ENTIRELY to a Shortcut (or to Shortcuts). In the "Remap keys" feature, add "Shortcut(s)" to the list of "To send:" options. This would basically act as an extension to the "Disable" option. But rather than just killing the key, this option would keep it available for use as part of a Shortcut (either in one or many Shortcuts).
### Scenario when this would be used?
As an example, to suppress unintentionally entering the dreaded cAPSLOCK state, making CapsLock ONLY triggerable via a Shortcut. For example, creating a Shortcut mapping LeftShift+CapsLock to CapsLock, while Disabling simple presses of CapsLock, will retain the functionality of CapsLock for only when it is occasionally needed. (Incidentally, this is similar to how old typewriters worked, and it is quite an elegant solution.) This new functionality could be extended to any case where remapping of a key to ONLY a Shortcut is desired, which just seems a plainly useful feature!
### Supporting information
This request is an extension (and clarification) of a previous request, linked here:
https://github.com/microsoft/PowerToys/issues/32984 | Needs-Triage | low | Minor |
2,571,572,200 | godot | Camera2D with physics_interpolation_mode off triggers editor and runtime errors | ### Tested versions
Godot v4.3.stable.mono - Fedora Linux 40 (Workstation Edition) - Wayland - Vulkan (Forward+) - integrated Intel(R) Graphics (ADL GT2) - 12th Gen Intel(R) Core(TM) i7-1280P (20 Threads)
### System information
Godot v4.3.stable.mono - Fedora Linux 40 (Workstation Edition) - Wayland - Vulkan (Forward+) - integrated Intel(R) Graphics (ADL GT2) - 12th Gen Intel(R) Core(TM) i7-1280P (20 Threads)
### Issue description
Turning off physics interpolation on a `Camera2D` causes `./scene/main/node.h:446 - Parameter "data.tree" is null.` errors both in the editor and at runtime:

### Steps to reproduce
1. Create a scene
2. Add a Camera2D
3. Set "Physics Interpolation" to "Off"
Re-open the project and see:

Start the project and see:

### Minimal reproduction project (MRP)
[bug_cam_physics.zip](https://github.com/user-attachments/files/17284937/bug_cam_physics.zip)
| bug,topic:rendering,topic:physics | low | Critical |
2,571,585,849 | flutter | Voice Control shows a number on the character count label, which is misleading | ### Steps to reproduce
- Create a TextField with a maxLength to show the character count label.
- Turn Voice Control on
- Say "Show numbers"
- Character count label has a number, but doesn't have any action
### Expected results
Label shouldn't have a number since it's not an actionable element.
### Actual results
The label has a number, which is misleading
### Code sample
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return const MaterialApp(
debugShowCheckedModeBanner: false,
home: Scaffold(
body: Center(
child: TextField(
maxLength: 6,
obscureText: true,
decoration: InputDecoration(
border: OutlineInputBorder(),
labelText: 'Password',
),
),
),
),
);
}
}
```
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
| platform-ios,framework,a: accessibility,has reproducible steps,P1,found in release: 3.24,team-accessibility,triaged-accessibility,found in release: 3.26 | medium | Critical |
2,571,633,844 | flutter | [web:a11y] Autocomplete not compatible with screen reader shortcuts | Internal issues:
* b/346586723
* b/346601921
* b/346588887
Reproduction:
* [Live demo](https://fwa11y.web.app/#/auto-complete)
* [Source code](https://github.com/flutter/flutter/tree/master/dev/a11y_assessments)
We received a bouquet of issues filed against the `Autocomplete` widget:
- Accessible names missing for auto-complete items
- Auto-complete user flow cannot be completed using screen reader shortcuts
- Auto-complete item list not reachable using a screen reader
All these issues sound too close to each other to address them one by one. Filing a single issue for us to review what's going on with `AutoComplete`. It is likely that we only need one fix to capture them all.
| a: accessibility,platform-web,P2,team-web,triaged-web | low | Minor |
2,571,670,286 | ollama | Long responses can corrupt the model until unloaded | ### What is the issue?
In a relatively simple prompt, one of the Phi models went off track and ranted for several thousand words. After, all future responses produced (mostly) garbage output, even in separate API calls or interactive sessions with cleared session context. This persisted until the model was completely unloaded and reloaded.
It feels like something may have overflowed a buffer used for the context window or response and corrupted the model weights. Within the garbage output, the model appeared to have brief periods of "lucidity" where it demonstrated knowledge of prompts from completely separate sessions.
In the most recent case, I was using `phi3.5:3.8b-mini-instruct-q4_K_M` but have seen the same sort of behavior in other Phi releases. I'll try to find a prompt that can replicate this, though it's obviously stochastic given the nature of LLMs.
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.12 | bug,needs more info | low | Major |
2,571,681,197 | tauri | [feat] Plugin names should be able to include underscore '_' | ### Describe the problem
When creating my first (internal) plugin, I called it `sqlite_proxy`. Then I spent literally hours chasing down the build issue because the message is not very clear:
```
thread 'main' panicked at build.rs:11:6:
failed to run tauri-build: failed to parse JSON: identifiers can only include lowercase ASCII, hyphens which are not leading or trailing, and a single colon if using a prefix at line 16 column 23
```
It does not identify the `.json` file that needs fixing, nor does it specify that `identifiers` here refers to plugin names. Being new to the plugin ecosystem, I assumed that I simply failed to create some sort of capability file / permission setup. This was not the case. The above error message is a result of https://github.com/tauri-apps/tauri/pull/9952, which improved the state of the world but punted on the documentation changes and the underlying issue.
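Based only on the error message's wording, the accepted shape can be approximated with a regex like this (my own sketch, not Tauri's actual validator, which may also allow digits):

```python
import re

# Lowercase ASCII with hyphens that are neither leading nor trailing:
PART = r"[a-z]+(?:-[a-z]+)*"
# At most one colon, used as a prefix separator:
IDENT_RE = re.compile(rf"^(?:{PART}:)?{PART}$")

def is_valid_identifier(s: str) -> bool:
    return IDENT_RE.fullmatch(s) is not None

print(is_valid_identifier("sqlite-proxy"))  # True
print(is_valid_identifier("sqlite_proxy"))  # False (underscore rejected)
```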
### Describe the solution you'd like
I believe there are 3 different levels of fixing this, listed in order of difficulty:
- [ ] Update the documentation around plugins to be very explicit about allowed names.
- [ ] Update the error message to:
1. mention the file it's failing on
2. mention that the problem is the plugin name.
3. Optionally add validation _elsewhere_ that fails quickly on disallowed plugin names (e.g. plugin init)
- [ ] The ultimate fix would be to simply remove this constraint.
Note: this was already considered in https://github.com/tauri-apps/tauri/issues/9951: `writing our own deserializer without serde_untagged`. I do not know the specific limitations of serde/serde_untagged regarding this. Sharing context on the reasons it's so painful to change might be useful.
### Alternatives considered
_No response_
### Additional context
I personally wouldn't consider the issue fully solved until the restriction is removed; however the interim steps would help mitigate some of the harm. | type: feature request | low | Critical |
2,571,681,365 | PowerToys | Enabling Ahead of time for PowerToys | ### Description of the new feature / enhancement
This is the tracking issue for AoT work.
### List of known issues
- EventSource workaround
- `public class TelemetryBase` inherits from `EventSource`
- https://github.com/dotnet/runtime/pull/83751 may have it fixed?
### Done libraries
-
| Idea-Enhancement,Area-Quality,Area-Build | low | Minor |
2,571,686,501 | deno | `{deno:{namespace:false}}` worker option doesn't remove `Deno` namespace | **Version**: Deno `1.46.x` and `2.x`
This logs `Received from worker: Hello from Worker! 2.0.0-rc.10+053894b` instead of throwing a reference error:
```js
const workerBlob = new Blob([`
self.onmessage = async function(e) {
self.postMessage('Hello from Worker! ' + Deno.version.deno);
}
`], { type: 'application/javascript' });
const workerURL = URL.createObjectURL(workerBlob);
const worker = new Worker(workerURL, {type:"module", deno:{permissions:"none", namespace:false}});
worker.onmessage = function(e) {
console.log('Received from worker:', e.data);
};
worker.postMessage('Hello from main thread!');
```
This issue may be more of a feature request, since it's not explicitly mentioned in the docs that `namespace:false` disables Deno stuff. It only says:
> Starting in v1.22 the Deno namespace is available in worker scope by default. To enable the namespace in earlier versions pass deno: { namespace: true } when creating a new worker.
Note: even if `delete globalThis.Deno` is equivalent, it may not be an ideal solution for me, because part of the reason I want to remove the Deno stuff is to reduce the memory overhead of the worker. | suggestion | low | Critical |
2,571,708,441 | godot | Runtime use of the inspector to modify the size and elements of arrays in multiple threads has a probability of causing an internal editor error | ### Tested versions
Can be reproduced in the following versions:
`v4.3.stable.official` [77dcf97d8]
`v4.4.dev3.official` [f4af8201b]
### System information
Godot v4.4.dev3 - Windows 10.0.22631 - Multi-window, 2 monitors - Vulkan (Forward+) - integrated Intel(R) UHD Graphics (Intel Corporation; 31.0.101.4502) - 12th Gen Intel(R) Core(TM) i5-12450H (12 threads)
### Issue description
Using the remote inspector to modify the data and length of an array that is concurrently being modified by a thread **other** than the `main thread` can trigger an error inside the editor.
Sample video:
https://github.com/user-attachments/assets/df9f7156-41b8-4edc-a8a5-1e05cfb8b0e3
Personally, I suspect it is caused by the fact that
> traversing containers in multiple threads is thread-unsafe behavior
### Steps to reproduce
* Open and run the project
* Open the `Remote` Tab at the top of the `Node list` on the left
* Click on the `MainScene` node and make sure that the `Inspector` on the right shows the current information for this node correctly
* Attempt to modify the data and size of an array with attribute name `Number Arr` and size 100
If you reproduce the error, you may see the following error message in the debugger:
```
MainScene.gd:34 @ mainForThread(): Condition "!success" is true.
MainScene.gd:34 @ mainForThread(): Parameter "_fp" is null.
MainScene.gd:34 @ mainForThread(): Parameter "_fp" is null.
```
### Minimal reproduction project (MRP)
[MultithreadDebugError.zip](https://github.com/user-attachments/files/17285325/MultithreadDebugError.zip)
| discussion,topic:editor,needs testing | low | Critical |
2,571,736,528 | deno | Web Workers use 7x more memory than in Chrome, and terminating them doesn't remove all the memory | Version: Tested in Deno `1.46.x` and `2.x`
The code below creates 100 workers, then waits 5 seconds, then terminates them, and repeats.
* In Chrome the memory usage spikes to +150mb, and then back down to zero each cycle.
* In Deno, it spikes to +1000mb, but doesn't go back down to zero. It accumulates memory per cycle
* **EDIT**: up to a maximum of +1500mb, so this extra 500mb is potentially just due to some sort of resource pool, or something - maybe expected behavior
* **EDIT 2**: It does continue creeping up slowly - after 100k workers created+destroyed, it's now at 700mb
* **EDIT 3**: After 500k workers created+destroyed, it's now at about 2000mb leaked.
```js
if(self.Deno) {
console.log(Deno.memoryUsage());
setInterval(() => {
console.log(Deno.memoryUsage());
}, 1000);
}
while(1) {
let workers = [];
for(let i = 0; i < 100; i++) {
const workerBlob = new Blob([`
delete globalThis.Deno;
self.onmessage = async function(e) {
self.postMessage('Hello from Worker!');
}
`], {type:"application/javascript"});
const workerURL = URL.createObjectURL(workerBlob);
const worker = new Worker(workerURL, {type:"module"});
worker.onmessage = function(e) {
URL.revokeObjectURL(workerURL);
console.log('Received from worker:', e.data);
};
worker.postMessage('Hello from main thread!');
workers.push(worker);
}
await new Promise(r => setTimeout(r, 5*1000));
for(let w of workers) w.terminate();
workers = [];
}
``` | bug,needs investigation | low | Major |
2,571,740,017 | godot | Transparent windows become permanently opaque when opened on an HDR-enabled monitor in Windows | ### Tested versions
- Reproducible in: v4.3.stable
### System information
Godot v4.3.stable - Windows 10.0.22631 - GLES3 (Compatibility) - AMD Radeon RX 6600 (Advanced Micro Devices, Inc.; 32.0.12011.1036)
### Issue description
Transparent windows become permanently opaque if they open on an HDR-enabled monitor in Windows. Disabling "Auto HDR" in Windows has no effect.
### Steps to reproduce
Requires an HDR monitor. Not tested on Windows 10 or Nvidia GPUs.
To create MRP, create project with the following settings:
```
[display]
window/size/transparent=true
window/per_pixel_transparency/allowed=true
[rendering]
renderer/rendering_method="gl_compatibility"
renderer/rendering_method.mobile="gl_compatibility"
viewport/transparent_background=true
```
#### With MRP:
Run project on HDR-enabled monitor in Windows -> **black background** ❌
Run project on HDR-disabled monitor -> transparent background ✅
Run project on HDR-disabled monitor and move window to HDR-enabled monitor -> transparent background ✅
Run project on HDR-disabled monitor and enable HDR on that monitor -> transparent background ✅
### Minimal reproduction project (MRP)
[mrp-godot-hdr-transparency.zip](https://github.com/user-attachments/files/17285445/mrp-godot-hdr-transparency.zip) (Godot v4.3.stable, 2 KiB)
| bug,platform:windows,topic:gui | low | Minor |
2,571,741,460 | react-native | Transform (scale, rotate and skew) not working correctly on Android 8.1 and below! | ### Description
I've encountered inconsistent behavior with scaling, rotation, and skewing transformations on older Android versions. Specifically, none of these transformations work on Android 7 and below, while scaling works on Android 8 and 8.1 but rotation and skewing do not. The issue seems to be resolved on Android 10 and above. Is this a known limitation or bug, and are there any documented workarounds?

### Steps to reproduce
1. `npx create-expo-app@latest --template blank`
2. copy the code from [transform doc](https://reactnative.dev/docs/transforms#example)
3. start the app on Android 8.1 and below
4. observe the weird scaling, rotation, and skewing behavior
### React Native Version
0.75.4
### Affected Platforms
Runtime - Android
### Output of `npx react-native info`
```text
System:
OS: Windows 10 10.0.19045
CPU: (2) x64 Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz
Memory: 975.01 MB / 7.75 GB
Binaries:
Node:
version: 20.11.1
path: C:\Program Files\nodejs\node.EXE
Yarn:
version: 1.22.21
path: ~\AppData\Roaming\npm\yarn.CMD
npm:
version: 10.2.5
path: C:\Program Files\nodejs\npm.CMD
Watchman: Not Found
SDKs:
Android SDK: Not Found
Windows SDK:
AllowDevelopmentWithoutDevLicense: Enabled
IDEs:
Android Studio: AI-241.18034.62.2411.12071903
Visual Studio: Not Found
Languages:
Java:
version: 21.0.1
path: /c/Program Files/Common Files/Oracle/Java/javapath/javac
Ruby: Not Found
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.2.0
wanted: 18.2.0
react-native:
installed: 0.75.4
wanted: 0.75.4
react-native-windows: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: Not found
newArchEnabled: Not found
iOS:
hermesEnabled: Not found
newArchEnabled: Not found
```
### Stacktrace or Logs
```text
none
```
### Reproducer
https://github.com/Ammar1999y/react-native-transforms
### Screenshots and Videos
_No response_ | Platform: Android,Newer Patch Available | low | Critical |
2,571,745,017 | TypeScript | Add Temporal (Stage 3) types | ### ⚙ Compilation target
ESNext
### ⚙ Library
n/a
### Missing / Incorrect Definition
- `Temporal`
- `Temporal.Now`
- `Temporal.Instant`
- `Temporal.ZonedDateTime`
- `Temporal.PlainDate`, `Temporal.PlainTime`, `Temporal.PlainDateTime`
- `Temporal.PlainYearMonth`, `Temporal.PlainMonthDay`
- `Temporal.Duration`
### Sample Code
```TypeScript
const now = Temporal.Now.instant();
const past = Temporal.Instant.from('1969-07-20T20:17Z');
```
### Documentation Link
https://tc39.es/proposal-temporal/docs/index.html
The API itself is quite stable; calendar support is what is blocking implementations. | Domain: lib.d.ts | low | Minor |
2,571,761,890 | godot | No warning when attempting to access native enum types from instance. | ### Tested versions
v4.2.2.stable.official [15073afe3]
### System information
Godot v4.2.2.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 6GB (NVIDIA; 32.0.15.5612) - Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz (4 Threads)
### Issue description
The GDScript analyzer does not warn about accessing Enum types from an instance.
For example, in a script that extends `Node`, `print(self.ConnectFlags)` does not raise a warning from the analyzer, yet produces the runtime error "Invalid get index 'ConnectFlags'". This behavior feels inconsistent with how accessing other global properties works. For example, `print(self.Object)` raises the analyzer warning "The property "Object" is not present on the inferred type". On the other hand, accessing an Enum value from an instance, e.g. `print(self.CONNECT_DEFERRED)`, runs without error and prints the correct value.
### Steps to reproduce
```gdscript
extends Node
func _ready():
print(self.ConnectFlags)
```
### Minimal reproduction project (MRP)
N/A | bug,discussion,topic:gdscript | low | Critical |
2,571,780,168 | kubernetes | Regression in Scheduler Performance in Large Scale Clusters | ### What happened?
Scheduler throughput and performance have regressed in 1.31 compared to 1.30.
### What did you expect to happen?
Scheduler throughput and performance on 1.31 should at least stay the same as on 1.30, or improve.
### How can we reproduce it (as minimally and precisely as possible)?
I'm leveraging the [test](https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/testing/scheduler-throughput/config.yaml) that I have written to measure scheduler throughput and performance by directly creating pods to APIServer without KCM controllers in the picture.
Settings:
- You can run this test with [this](https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/testing/scheduler-throughput/config.yaml#L19) QPS set to `1000` and total pods [here](https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/testing/scheduler-throughput/config.yaml#L1C4-L1C32 ) set to 50k
##### Test results: You would get roughly following latency and throughput numbers for `1.30v`
Latency:
```
{
"data": {
"Perc50": 4853.606009,
"Perc90": 6501.635529,
"Perc99": 7152.44798
},
"unit": "ms",
"labels": {
"Metric": "pod_startup"
}
},
{
"data": {
"Perc50": 3750.943,
"Perc90": 5345.859,
"Perc99": 5861.101
},
"unit": "ms",
"labels": {
"Metric": "create_to_schedule"
}
}
```
Throughput:
```
{
"perc50": 803,
"perc90": 856,
"perc99": 936,
"max": 936
}
```
##### Test results: You would get roughly following latency and throughput numbers for `1.31v`
Latency:
```
{
"data": {
"Perc50": 10675.556409,
"Perc90": 17805.60988,
"Perc99": 19445.20954
},
"unit": "ms",
"labels": {
"Metric": "pod_startup"
}
}
{
"data": {
"Perc50": 9579.25,
"Perc90": 16696.174,
"Perc99": 18275.346
},
"unit": "ms",
"labels": {
"Metric": "create_to_schedule"
}
},
```
Throughput:
```
{
"perc50": 638,
"perc90": 683,
"perc99": 713,
"max": 713
}
```
### Anything else we need to know?
You can see that on 1.31v, the latency for the `create_to_schedule` phase increased 3X or more (I have posted the run with the lowest latency and highest throughput among the tests I ran), and throughput has dropped significantly, from ~936 to ~704 at peak/p99.
When I looked at the pprof of the runs on 1.30v and 1.31v, major differences showed up as following:
- Prometheus.(*gauge).Add [k8s.io/client-go/util/workqueue.ParallelizeUntil.func1] (main contributing factor)
- k8s.io/kubernetes/pkg/scheduler/framework/parallelize.Parallelizer.Until.func1 (in k8s.io/kubernetes/pkg/scheduler/framework/parallelize/parallelism.go); overall this is slightly higher on 1.31v, i.e. ~57% vs ~46% on 1.30v
##### 1.31v pprof
<img width="1575" alt="Screenshot 2024-10-07 at 5 19 48 PM" src="https://github.com/user-attachments/assets/e6bfd4c8-4e6b-4dc0-8868-1c63d8435913">
<img width="1562" alt="Screenshot 2024-10-07 at 5 20 19 PM" src="https://github.com/user-attachments/assets/f82c4ca7-c881-435e-91b1-818856968323">
##### 1.30v pprof
<img width="1576" alt="Screenshot 2024-10-07 at 5 07 22 PM" src="https://github.com/user-attachments/assets/9d287529-fcfc-433a-84f5-89172c42d483">
<img width="1590" alt="Screenshot 2024-10-07 at 5 21 18 PM" src="https://github.com/user-attachments/assets/a73647cf-cf45-4fe0-a5e9-e549eab053c8">
You can see that the % of CPU cycles/time spent on Prometheus operations has more than doubled for the same amount of work, i.e. 50k pods with 1K QPS ^^^
I can post flame graphs as well
#### Food for thought:
Generally we should `batch prometheus gauge operations`, given the CPU cycles consumed in the Scheduler_one goroutine (pods are scheduled serially in one goroutine by the nature of the scheduler). Also, I don't think consumers need the level of precision we provide today: users generally scrape Prometheus metrics at 10sec, 30sec, 1min or 5min intervals at the very least. I would like to know what the community thinks about this. At the least, we should have a gating feature to configure the precision of emitting Prometheus metrics.
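As an illustration of the batching idea, here is a small pure-Python sketch (names are hypothetical; the real change would target the Go Prometheus client used by the scheduler). Gauge deltas are accumulated locally and flushed in one call, trading a little metric freshness for far fewer updates on the underlying gauge.

```python
class BatchedGauge:
    """Accumulate gauge deltas locally and flush them in one call.

    `flush_every` controls how many local add() calls are coalesced
    into a single update on the backing gauge.
    """

    def __init__(self, gauge_add, flush_every=100):
        self._gauge_add = gauge_add   # stands in for a real Gauge.Add
        self._flush_every = flush_every
        self._pending = 0.0
        self._ops = 0

    def add(self, delta):
        self._pending += delta
        self._ops += 1
        if self._ops >= self._flush_every:
            self.flush()

    def flush(self):
        if self._pending:
            self._gauge_add(self._pending)
        self._pending = 0.0
        self._ops = 0


# Toy backend standing in for a real Prometheus gauge.
calls = []
g = BatchedGauge(calls.append, flush_every=3)
for _ in range(7):
    g.add(1)
g.flush()
print(calls)  # [3.0, 3.0, 1.0]
```

With `flush_every=3`, seven increments become three gauge updates instead of seven; a timer-based flush would bound staleness for slow periods.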
### Kubernetes version
<details>
```console
$ kubectl version
sh-4.2$ kubectl version
Client Version: v1.30.4-eks-a737599
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.5-eks-ce1d5eb
```
```
sh-4.2$ kubectl version
Client Version: v1.31.0-eks-a737599
Kustomize Version: v5.4.2
Server Version: v1.31.0-eks-a737599
````
</details>
### Cloud provider
<details>
AWS
</details>
### OS version
<details>
```console
$ cat /etc/os-release
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"
SUPPORT_END="2025-06-30"

$ uname -a
Linux ip-172-16-60-69.us-west-2.compute.internal 5.10.224-212.876.amzn2.x86_64 #1 SMP Thu Aug 22 16:55:24 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/scheduling,kind/regression,needs-triage | medium | Major |
2,571,797,651 | flutter | FlutterCallbackCache.lookupCallbackInformation returns nil | My goal is to start the flutter engine and run some dart code from a Notification Service Extension on iOS to prepare my app before the user receives a push notification.
Issue is `FlutterCallbackCache.lookupCallbackInformation(callbackHandle)` returns `nil`
The callback handle integer is generated (fetched in dart), stored in UserDefaults and retrieved correctly, I have printed out the value all the way through and it is identical.
But it appears that when running in the service extension the FlutterCallbackCache does not have any value for that handle.
Storing into the FlutterCallbackCache is handled automatically by Flutter when you retrieve the handle originally (calling `PluginUtilities.getCallbackFromHandle(...)`), so there cannot be any user error there.
Documentation on this topic is very limited. But I can only assume this is a bug to do with the additional complexity of running in a service extension. Some scope issue maybe?
### Steps to reproduce
Call FlutterCallbackCache.lookupCallbackInformation from a iOS Notification Service Extension
### Expected results
FlutterCallbackCache.lookupCallbackInformation should return FlutterCallbackInformation
### Actual results
FlutterCallbackCache.lookupCallbackInformation returns nil
### Code sample
<details open><summary>Code sample</summary>
```swift
override func didReceive(_ request: UNNotificationRequest, withContentHandler contentHandler: @escaping (UNNotificationContent) -> Void) {
self.logger.log("NotificationPreSync Service Extension called")
self.contentHandler = contentHandler
bestAttemptContent = (request.content.mutableCopy() as? UNMutableNotificationContent)
var dict = NotificationPreSyncUserDefaultsHelper.userDefaults.dictionaryRepresentation().values
print(dict)
guard let callbackHandle = NotificationPreSyncUserDefaultsHelper.getStoredCallbackHandle()
else {
self.logger.log("[\(String(describing: self))] no callback handle stored")
serviceExtensionTimeWillExpire()
return
}
guard let flutterCallbackInformation = FlutterCallbackCache.lookupCallbackInformation(callbackHandle)
else {
//flutterCallbackInformation is nil so function exits through here.
self.logger.log("[\(String(describing: self))] cannot look up callback information")
serviceExtensionTimeWillExpire()
return
}
...
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
<img width="1751" alt="Screenshot 2024-10-08 at 1 39 23 PM" src="https://github.com/user-attachments/assets/8a8eed74-5bf5-4082-a578-c672ef2585a7">
</details>
### Logs
```
request UNNotificationRequest 0x000000010808ca20
contentHandler () -> () 0x0000000104b568b8 NotificationPreSync.debug.dylib`partial apply forwarder for reabstraction thunk helper from @escaping @callee_unowned @convention(block) (@unowned __C.UNNotificationContent) -> () to @escaping @callee_guaranteed (@guaranteed __C.UNNotificationContent) -> () at <compiler-generated>
self NotificationPreSync.NotificationService 0x000000010813da40
UserNotifications.UNNotificationServiceExtension UNNotificationServiceExtension
contentHandler ((UNNotificationContent) -> ())? 0x0000000104b568b8 NotificationPreSync.debug.dylib`partial apply forwarder for reabstraction thunk helper from @escaping @callee_unowned @convention(block) (@unowned __C.UNNotificationContent) -> () to @escaping @callee_guaranteed (@guaranteed __C.UNNotificationContent) -> () at <compiler-generated>
bestAttemptContent UNMutableNotificationContent? 0x000000010818c8c0
backgroundEngine FlutterEngine? nil none
TAG String "DartFcmBackgroundExecutor"
logger os.Logger
dict [String : Any].Values
callbackHandle Int64 -5894632265597711899
exampleShowingNil FlutterCallbackInformation? nil none
```
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.3, on macOS 14.6.1 23G93 darwin-arm64, locale en-NZ)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2020.3)
[✓] Android Studio (version 2023.3)
[✓] IntelliJ IDEA Community Edition (version 2024.2.1)
[✓] VS Code (version 1.90.0)
[✓] VS Code (version 1.94.0)
[✓] Connected device (4 available)
[✓] Network resources
• No issues found!
```
</details>
| platform-ios,engine,P2,team-ios,triaged-ios | low | Critical |
2,571,809,151 | next.js | SWC minify bug | ### Link to the code that reproduces this issue
https://github.com/luixo/receipt-app/commits/swc-minify-bug
### To Reproduce
Unfortunately, this bug is quite hard to reproduce in a minimal version due to the number of mechanisms involved in the build process.
You can reproduce the bug in my project repo or via injecting code in SWC minification.
With my project:
1. Checkout project on a given branch
1. `corepack enable && yarn install`
1. Copy `.env.example` as `.env.local`
1. `NODE_ENV=test npx dotenv -c -- yarn web:build`
1. `NODE_ENV=test npx dotenv -c -- yarn web:start`
With a SWC minification:
1. Create a next project (v14, it seems to be fixed in v15)
1. Get to SWC minify function (`node_modules/next/dist/build/swc/index.js`)
1. Add `const failingSrc = '...'` from additional context to the file
1. Add `minify(failingSrc, { compress: true, mangle: true, output: { comments: false } }).then(({code}) => console.log(code))` somewhere in the end of the file
1. Run `rm -rf .next && npm run build`
### Current vs. Expected behavior
Expected: Compiled version doesn't have an undefined variable
Current: Compiled version does have an undefined variable
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.0.0: Tue Sep 24 23:37:25 PDT 2024; root:xnu-11215.1.12~1/RELEASE_ARM64_T6030
Available memory (MB): 36864
Available CPU cores: 12
Binaries:
Node: 20.15.1
npm: 10.7.0
Yarn: 3.6.2
pnpm: N/A
Relevant Packages:
next: 14.2.14 // Latest available version is detected (14.2.14).
eslint-config-next: N/A
react: 18.2.0
react-dom: 18.2.0
typescript: 5.5.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
SWC
### Which stage(s) are affected? (Select all that apply)
next build (local)
### Additional context
The failing src code:
```js
const foo = function(fn) {
const options = {
param: {
value: "",
},
};
const store = (obj) => obj[options.param.value];
fn(() => store);
}
```
It's compiled to:
```js
let foo=function(e){let l=e=>e[options_param.value];e(()=>l)};
```
Or beautified:
```js
let foo = function(e) {
let l = e => e[options_param.value];
e(() => l)
};
```
You can see that `options_param` is an undefined variable that was lost from the context.
The real example is way bigger, so this is the smallest I could make still fail. | bug,SWC | low | Critical |
2,571,821,031 | flutter | [Web]: Tab key skips second textfield and jumps to third field directly in Korean language. | ### Steps to reproduce
In Flutter, pressing the Tab key basically moves focus vertically. I have been using multiple TextFields stacked vertically, and at some point in my Chrome browser, typing Korean into a TextField and pressing the Tab key jumps directly to the 3rd TextField instead of the next one.
It works fine in Safari, and the problem only occurs in Chrome on macOS. (Chrome on Windows works fine.)
This problem does not occur when you enter English or Chinese.
The problem occurs only when Korean is entered, but I don't know how to solve it.
### Expected results


### Actual results
I keep having this issue even after updating my Chrome browser multiple times, and I've checked the Flutter Web on Macos
### Code sample
<details open><summary>Code sample</summary>
```dart
TextFormField(
autovalidateMode: AutovalidateMode.always,
style: TextStyle(fontSize: 18),
decoration: InputDecoration(
icon: const Icon(Icons.storefront),
hintText: '상호',
labelText: widget.customer['store_name']==""?"상호":widget.customer['store_name'],
labelStyle: const TextStyle(color: Colors.black, fontWeight: FontWeight.w200)
),
onSaved: (value) {
setState(() {
if (value==""){
value = widget.customer['store_name'];
} else{
widget.customer['store_name'] = value.toString();
}
});
},
onChanged: (value){
setState(() {
widget.customer['store_name'] = value.toString();
});
},
validator: (value) {
if (value == null) {
return '상호를 입력해 주세요.';
}
return null;
},
),
TextFormField(
autovalidateMode: AutovalidateMode.disabled,
style: TextStyle(fontSize: 18),
decoration: InputDecoration(
icon: const Icon(Icons.person_3),
hintText: '대표자명',
labelText: widget.customer['customer_name']==""?"대표자명":widget.customer['customer_name'],
labelStyle: const TextStyle(color: Colors.black, fontWeight: FontWeight.w200)
),
onSaved: (value) {
setState(() {
if (value==""){
value=widget.customer['customer_name'];
} else{
widget.customer['customer_name'] = value!;
}
});
},
),
TextFormField(
controller: bizNumController,
style: TextStyle(fontSize: 18),
decoration: InputDecoration(
icon: const Icon(Icons
.drive_file_rename_outline_rounded),
hintText: '사업자 번호는 - 없이 숫자만 입력 해주세요.',
labelText: widget.customer['biz_number']==""?'사업자 번호는 - 없이 숫자만 입력 해주세요.':widget.customer['biz_number'],
labelStyle: const TextStyle(color: Colors.black, fontWeight: FontWeight.w200),
helperText: bizNumLabelText,
helperStyle: const TextStyle(color: Colors.red, fontSize: 14, fontWeight: FontWeight.w200)
),
onSaved: (value) {
setState(() {
if (bizNumController.text==""){
widget.customer['biz_number']="";
} else{
if (bizNumRegex.hasMatch(value!)) {
widget.customer['biz_number'] = value!;
} else{
value=null;
}
}
});
},
validator: (value) {
if (value == null) {
return "사업자 번호는 - 없이 숫자만 입력 해주세요.";
}
return null;
},
onChanged: (value) {
setState(() {
if (bizNumRegex.hasMatch(value)) {
bizNumLabelText =
"올바른 형식의 사업자 등록 번호 입니다.";
} else {
bizNumLabelText =
"사업자 번호는 - 없이 숫자만 입력 해주세요.";
}
});
},
),
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.3, on macOS 15.0.1 24A348 darwin-arm64, locale ko-KR)
• Flutter version 3.24.3 on channel stable at /Users/seo/development/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (4 weeks ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/seo/Library/Android/sdk
• Platform android-34-ext8, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16A242d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.79.2)
• VS Code at /Users/seo/Downloads/Visual Studio Code.app/Contents
• Flutter extension can be installed from:
🔨 https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
```
</details>
| a: text input,platform-mac,a: internationalization,platform-web,has reproducible steps,P2,browser: chrome-desktop,team-web,triaged-web,fyi-text-input,found in release: 3.24,found in release: 3.26 | low | Minor |
2,571,827,594 | ollama | openai: support max_completion_tokens due to deprecation of max_tokens | max_tokens is now deprecated for max_completion_tokens. I suspect we should support both. One way is to define another field in our request object and then default if one or the other isn't set https://github.com/ollama/ollama/blob/defbf9425af8228f3420d567e9eeaa29d8ac87e3/openai/openai.go#L77
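A minimal Python sketch of the suggested defaulting logic (field names are illustrative; the real change would live in the Go request struct linked above):

```python
def effective_max_tokens(max_tokens=None, max_completion_tokens=None):
    """Pick the effective token limit, preferring the newer field.

    `max_completion_tokens` supersedes the deprecated `max_tokens`;
    fall back to `max_tokens` when only it is set.
    """
    if max_completion_tokens is not None:
        return max_completion_tokens
    return max_tokens


print(effective_max_tokens(max_tokens=256))                             # 256
print(effective_max_tokens(max_tokens=256, max_completion_tokens=512))  # 512
```

If both fields are set, the newer `max_completion_tokens` wins, matching the direction of OpenAI's deprecation.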
See https://platform.openai.com/docs/api-reference/chat/create#chat-create-max_tokens
See https://github.com/openai/openai-openapi/blob/10053bef25cd50a7424f5265ba51a7a63ba95b48/openapi.yaml#L9854-L9866 | feature request,api | low | Minor |
2,571,832,781 | flutter | [go_router_builder]: Since `go_router: 14.2.9`, generated locations are different for route paths with or without slash | ### Steps to reproduce
Define a route with child route . all child route contains '/' in their path
ex: "/abc", "/edg"
### Expected results
The generated .g.dart file contains the correct full path for each child route.
### Actual results
The generated .g.dart file produces an incorrect path for child routes.
### Code sample
<details open><summary>Code sample</summary>
```dart
TypedGoRoute<ProfileRoute>(
path: "/profile",
routes: [
TypedGoRoute<SettingRoute>(
path: "/setting",
....
```
</details>
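For illustration, the joining behavior the generator needs can be sketched in Python pseudocode (this is not go_router_builder's actual code; `join_route_paths` is a hypothetical helper): a child path's leading slash must not be treated as making the child absolute.

```python
def join_route_paths(parent, child):
    """Join a parent route path and a child route path so that a
    leading '/' on the child does not discard the parent prefix."""
    return parent.rstrip('/') + '/' + child.lstrip('/')

print(join_route_paths('/profile', '/setting'))  # /profile/setting
```

With the buggy behavior described above, '/profile' + '/setting' instead resolves as if '/setting' were a top-level location.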
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.3, on macOS 14.5 23F79 darwin-arm64, locale vi-VN)
• Flutter version 3.24.3 on channel stable at /Users/hieucg/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (4 weeks ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/hieucg/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• ANDROID_HOME = /Users/hieucg/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.0)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16A242d
• CocoaPods version 1.15.2
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.94.0)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.98.0
```
</details>
| package,has reproducible steps,P2,p: go_router,p: go_router_builder,team-go_router,triaged-go_router,found in release: 3.24 | low | Major |
2,571,868,977 | flutter | Need a health check that warns if a test is not covered by a GN rule | For C++ this is at least more obvious (it can't run locally either), but for other languages (e.g. Dart) it will be possible to run these tests locally but a GN rule might not exist, meaning (eventually) it won't run on CI. We should try to (including for C++) lint the repo for test entrypoints that do not have a test GN rule.
Needs a discussion before it can be started. | P3,c: tech-debt,team-engine,triaged-engine,e: engine-tool | low | Minor |
2,571,914,380 | ant-design | Cascader.Panel component should support a defaultExpand option | ### What problem does this feature solve?
When the Cascader is used as a panel, we'd like to be able to expand a node by default, instead of having to click through each level manually before the leaf nodes are shown.
### What does the proposed API look like?
Add a `defaultExpand` option so that the Cascader.Panel component can expand nodes by default.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 💡 Feature Request,Inactive | low | Major |
2,571,938,857 | pytorch | Add support for using NestedTensors with linalg operations | ### 🚀 The feature, motivation and pitch
I have a scenario where I'm currently calling `torch.linalg.ldl_factor` on a bunch of tensors and then repeatedly calling `torch.linalg.ldl_solve` as part of an iterative linear systems solver. Currently I have to do this in a loop for each tensor, as all my tensors may not have exactly the same dimensions. For example, one tensor may be 1024x1024 and another may be 1019x1019, so I can't create a batch and use that.
It would be really handy to be able to use NestedTensors with these operations to batch the call (to take advantage of multiprocessing of batched operations, and to optionally ship it to the GPU), as I could then store a single NestedTensor with the factorisations (ideally having a slick, inbuilt method to store only the elements from the lower triangle, to save memory when there's no real need to store the upper triangle of zeros) and then be able to repeatedly call the solver during the iterations with the NestedTensor factorisations (and have it parse the lower triangle back to the solver as required).
### Alternatives
Currently has to be performed individually for each tensor in a loop.
### Additional context
_No response_
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ | triaged,module: linear algebra,module: nestedtensor | low | Minor |
2,571,954,057 | PowerToys | With Mouse Without Borders, the mouse can become one-way | ### Microsoft PowerToys version
0.85.1 (Both PC)
### Installation method
Other (please specify in "Steps to Reproduce")
### Running as admin
Yes
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
Always
### ✔️ Expected Behavior
Control a PC B using the mouse connected to PC A.
### ❌ Actual Behavior
The mouse connected to PC A cannot be moved to PC B, while the mouse connected to PC B can be moved to PC A. If PC B is functioning normally but PC A is not, could this be a bug? This issue occurs constantly, not just at certain times.
The PowerToys version has been updated to the latest, firewall exclusions have been configured, and the options are identical on both machines. The network IP addresses also match on the first three octets, which has been confirmed. The operating system on both machines is Windows 11 Home Edition.

### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,571,961,507 | pytorch | overflow in `torch._C._linalg.linalg_cond` | ### 🐛 Describe the bug
When processing complex data type, `torch._C._linalg.linalg_cond` raises an overflow error.
Following [45259](https://github.com/pytorch/pytorch/pull/45259), maybe we should consider adding a check for complex numbers.
Code:
```python
import torch
matrix = torch.tensor([[1 + 1j, 2 + 2j], [3 + 3j, 4 + 4j]])
opt_ord_complex = 2 + 2j
cond = torch.linalg.cond(matrix, opt_ord_complex)
```
Error info:
```
RuntimeError: value cannot be converted to type double without overflow
```
### Versions
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.8.19 (default, Mar 20 2024, 19:58:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A6000
Nvidia driver version: 545.23.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6444Y
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 8
Frequency boost: enabled
CPU max MHz: 3601.0000
CPU min MHz: 800.0000
BogoMIPS: 7200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 64 MiB (32 instances)
L3 cache: 90 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.4.1
[pip3] triton==3.0.0
[conda] numpy 1.24.4 pypi_0 pypi
[conda] torch 2.4.1 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @malfet @ezyang @anjali411 @dylanbespalko @mruberry @Lezcano @nikitaved @amjames | low priority,module: error checking,triaged,actionable,module: edge cases | low | Critical |
2,571,974,828 | pytorch | overflow in `torch.quantize_per_tensor` | ### 🐛 Describe the bug
When processing complex data type, `torch.quantize_per_tensor` raises an overflow error.
Following [45259](https://github.com/pytorch/pytorch/pull/45259), maybe we should consider adding a check for complex numbers.
Code:
```python
import torch
input_tensor = torch.randn(2, 2, dtype=torch.float32)
scale = torch.tensor([1+2j])
zero_point = torch.tensor([0], dtype=torch.int)
dtype = torch.qint8
quantized_tensor = torch.quantize_per_tensor(input_tensor, scale, zero_point, dtype)
```
Error Info:
```
RuntimeError: value cannot be converted to type double without overflow
```
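A plain-Python sketch of the kind of up-front validation proposed here (illustrative only, not PyTorch's actual code): a quantization scale must be a positive real number, so complex values can be rejected with a clear error before any conversion is attempted.

```python
import numbers


def check_quant_scale(scale):
    """Proposed validation sketch: reject complex scales with a
    TypeError, and non-positive scales with a ValueError."""
    if isinstance(scale, complex):
        raise TypeError(f"quantize_per_tensor: scale must be real, got {scale!r}")
    if not isinstance(scale, numbers.Real) or scale <= 0:
        raise ValueError(f"quantize_per_tensor: scale must be a positive real, got {scale!r}")
    return float(scale)


print(check_quant_scale(0.5))  # 0.5
```

This mirrors the check suggested for `linalg.cond` in the issue above: fail fast with a typed error instead of surfacing "value cannot be converted to type double without overflow".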
### Versions
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.8.19 (default, Mar 20 2024, 19:58:24) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A6000
Nvidia driver version: 545.23.08
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6444Y
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
Stepping: 8
Frequency boost: enabled
CPU max MHz: 3601.0000
CPU min MHz: 800.0000
BogoMIPS: 7200.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (32 instances)
L1i cache: 1 MiB (32 instances)
L2 cache: 64 MiB (32 instances)
L3 cache: 90 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.4.1
[pip3] triton==3.0.0
[conda] numpy 1.24.4 pypi_0 pypi
[conda] torch 2.4.1 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @malfet | oncall: quantization,low priority,module: error checking,triaged,actionable | low | Critical |