| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,574,430,577 | godot | Godot Editor keeps closing by itself. | ### Tested versions
- Reproducible on 4.3, 4.2 and 4.1
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3050 6GB Laptop GPU (NVIDIA; 32.0.15.6109) - 12th Gen Intel(R) Core(TM) i5-12450H (12 Threads)
### Issue description
I have been developing a game in Godot for some time now, and while I work on it, the editor tends to close by itself a lot: it runs normally, then freezes for a second and closes. I tried running with the console and verbose output to see what could be happening; all I got was:
```Vulkan 1.3.280 - Forward+ - Using Device #0: NVIDIA - NVIDIA GeForce RTX 3050 6GB Laptop GPU```
on the console every time it crashes, and with verbose output:
```
glTF: accessor offset 0 view offset: 5356 total buffer len: 24684 view len 1104
glTF: Total meshes: 1
glTF: Creating mesh for: bote
Using present mode: Enabled
```
I tried verbose output only once. This appears similar to [#81670](https://github.com/godotengine/godot/issues/81670), but I am not sure, and I don't have enough technical knowledge about Vulkan to retrace the steps easily.
### Steps to reproduce
Using the MRP linked below, just double-click plate.glb to open the 3D viewer window; messing around with the 3D model (clicking and dragging to make it spin and roll) will make the editor freeze and close within a couple of seconds.
### Minimal reproduction project (MRP)
[MRP.zip](https://github.com/user-attachments/files/17300212/MRP.zip)
| bug,topic:editor,needs testing,crash | low | Critical |
2,574,465,564 | PowerToys | [Peek] Improve efficiency when displaying large image files | ### Description of the new feature / enhancement
The image previewer handles displaying a file by creating a `BitmapImage` and setting its source to the image file stream. This decodes the image at its original resolution and then scales it to fit within the control's bounds.
We can improve efficiency by supplying the desired dimensions to the decoder, taking advantage of internal optimisations to improve performance and memory usage. This could help with large and/or uncompressed files, e.g. high-resolution TIFFs.
### Scenario when this would be used?
This would be useful for general image viewing, as it ensures the most efficient decoding path, but especially for larger files and those much larger than the intended display size.
### Supporting information
More information can be found on the Image control page here: https://learn.microsoft.com/en-us/windows/windows-app-sdk/api/winrt/microsoft.ui.xaml.controls.image?view=windows-app-sdk-1.6#image-files-and-performance
We may be able to tap into the code that sizes the main window to discover the intended display size for the file, which would then be passed into the ImagePreviewer or otherwise bound to the BitmapImage's `DecodePixelWidth`. | Needs-Triage | low | Major |
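The computation being proposed is simple enough to sketch outside of C#: derive the decode width from the control bounds while preserving aspect ratio, then hand that to the decoder. A Python illustration of just that arithmetic follows; the function name and parameters are hypothetical, not Peek's actual code:

```python
def decode_pixel_width(image_w: int, image_h: int,
                       bounds_w: int, bounds_h: int) -> int:
    """Width to hand the decoder so the image is decoded no larger
    than needed to fill the control bounds (aspect ratio preserved)."""
    if image_w <= bounds_w and image_h <= bounds_h:
        return image_w  # already fits; decode at native size
    scale = min(bounds_w / image_w, bounds_h / image_h)
    return max(1, int(image_w * scale))

# An 8000x6000 TIFF shown in a 1600x900 window decodes at 1200px wide
# (limited by height: 900/6000 = 0.15, 8000 * 0.15 = 1200), not 8000px.
```

Whatever value this produces would be the candidate to bind to the `BitmapImage`'s `DecodePixelWidth`.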
2,574,479,990 | pytorch | `torch.linspace` doesn't agree with decomposition. | ### 🐛 Describe the bug
Code:
```python
import torch
import torch._refs
print('torch: ', torch.linspace(4.9, 3, 5, dtype=torch.int64))
print('decomp: ', torch._refs.linspace(4.9, 3, 5, dtype=torch.int64))
import numpy
print('numpy: ', numpy.linspace(4.9, 3, 5, dtype=numpy.int64))
```
Output:
```
(xla2) hanq-macbookpro:torch_xla hanq$ python decomp.py
torch: tensor([4, 3, 3, 3, 3])
decomp: tensor([4, 4, 3, 3, 3])
numpy: [4 4 3 3 3]
```
Expected:
torch and its own decomposition should produce the same output. Ideally, both should also match numpy's behavior.
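The disagreement is easy to see with a plain-Python version of the "start + i*step in float, then cast" formula (a sketch of the general approach, not the actual `torch._refs` code):

```python
def linspace_int64(start: float, end: float, steps: int) -> list[int]:
    # Evenly spaced values computed in float, then truncated toward
    # zero, which is what casting to int64 does.
    step = (end - start) / (steps - 1)
    return [int(start + i * step) for i in range(steps)]

# linspace_int64(4.9, 3, 5): the floats are 4.9, 4.425, 3.95, 3.475,
# 3.0, which truncate to [4, 4, 3, 3, 3] -- matching numpy and the
# decomposition, but not eager torch's [4, 3, 3, 3, 3].
```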
### Versions
(xla2) hanq-macbookpro:torch_xla hanq$ python collect_env.py
Collecting environment information...
PyTorch version: 2.4.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.7 (arm64)
GCC version: Could not collect
Clang version: 16.0.4
CMake version: version 3.30.2
Libc version: N/A
Python version: 3.10.14 (main, Mar 21 2024, 11:21:31) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-14.7-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Pro
Versions of relevant libraries:
[pip3] bert-pytorch==0.0.1a4
[pip3] functorch==1.14.0a0+b71aa0b
[pip3] mypy==1.10.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.0.1
[pip3] onnx==1.16.1
[pip3] optree==0.11.0
[pip3] pytorch-labs-segment-anything-fast==0.2
[pip3] pytorch-lightning==2.2.4
[pip3] torch==2.4.1
[pip3] torch_geometric==2.4.0
[pip3] torch_xla2==0.0.1
[pip3] torchao==0.1
[pip3] torchaudio==2.4.0
[pip3] torchmetrics==1.4.0
[pip3] torchmultimodal==0.1.0b0
[pip3] torchvision==0.19.0
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] functorch 1.14.0a0+b71aa0b pypi_0 pypi
[conda] numpy 2.0.1 pypi_0 pypi
[conda] optree 0.11.0 pypi_0 pypi
[conda] pytorch-labs-segment-anything-fast 0.2 pypi_0 pypi
[conda] pytorch-lightning 2.2.4 pypi_0 pypi
[conda] torch 2.4.1 pypi_0 pypi
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torch-xla2 0.0.1 pypi_0 pypi
[conda] torchao 0.1 pypi_0 pypi
[conda] torchaudio 2.4.0 pypi_0 pypi
[conda] torchbench 0.1 dev_0 <develop>
[conda] torchmetrics 1.4.0 pypi_0 pypi
[conda] torchmultimodal 0.1.0b0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
cc @jianyuh @nikitaved @pearu @mruberry @walterddr @xwang233 @Lezcano @SherlockNoMad | triaged,module: linear algebra,module: decompositions | low | Critical |
2,574,481,718 | godot | Shadows not properly rendered from OmniLight3D when using vertex shading | ### Tested versions
- Reproducible in v4.4.dev3.official [f4af8201b]
- Not reproducible in v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated GeForce GTX 1060 with Max-Q Design - Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz (12 Threads)
### Issue description
Upgrading from 4.3 to 4.4.dev seems to break omni light shadows

Building the scene from scratch in a new project has the desired effect:

### Steps to reproduce
I created a scene with 2 boxes (1 with faces flipped and 1 with shadows only) inside an existing project, and that new scene had OmniLight3D shadows that behaved as expected in 4.3. Upgrading the project to 4.4 causes the lights to break. It's worth noting that creating a fresh project with just the scene shown above and upgrading that to 4.4 does not seem to reproduce the issue.
### Minimal reproduction project (MRP)
N/A
I tried to recreate the scene described above in a new project, but it rendered correctly there, so I could not produce an MRP. | topic:rendering,documentation,topic:3d | low | Major |
2,574,513,247 | vscode | "Stop Debugging" variant for terminating entire process tree |
<img width="213" alt="image" src="https://github.com/user-attachments/assets/6f70c58b-d53b-49c5-beab-452b3dc247b5">
I'd really like for "Stop Debugging" (or a variant thereof) to terminate the entire process tree.
Debugging multi-process applications is common in Python / pytorch, but to "Stop" my application I need to press Stop _per subprocess_.
I have to press Stop over 10 times to kill a realistic program, because each time I press Stop it takes me to the next subprocess, and we have subprocesses for dataloader workers, per-GPU model workers, model compilation workers… It moves my focus every time: I get further and further from the line of code or problem that made me want to stop the program, and by the end of the carousel my focus is left inside the torchrun wrapper script, which is never where I'm doing development.
The consequence right now is that I avoid running my application in a realistic way as far as possible. I reduce the process count until it's down to just 3, so that I only have to press Stop 3 times to terminate the program.
Pressing "Restart" basically doesn't work, because a new process cannot be started until all processes from the current run have been killed. So for my most common task ("run until I hit a problem, hit Restart"), I have to hit Stop 3 times and then Run, when what I really want is to hit Restart once.
Possible solutions:
- additional stop button (super stop, etc)
- little chevron next to stop button, which expands with a dropdown of other types of stop (which can be otherwise accessed via commands / keyboard shortcuts)
- no GUI, just a command that I could look for / keybind
- the same applies to the "restart" button
I think I'd want gentle termination of the subprocesses, but to disconnect the debugger so I don't have to watch the shutdown procedure and all the exceptions that get raised along the way.
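What "stop the whole tree" amounts to under the hood can be sketched in a few lines of Python, assuming the root process was launched in its own session; this is an illustration of the requested behavior on POSIX, not how VS Code's debug adapter actually works:

```python
import os
import signal
import subprocess

def run_in_own_group(cmd: list[str]) -> subprocess.Popen:
    # start_new_session puts the child (and its descendants) in a
    # fresh session/process group, so the whole tree shares one pgid.
    return subprocess.Popen(cmd, start_new_session=True)

def stop_tree(proc: subprocess.Popen, sig: int = signal.SIGTERM) -> None:
    # Signal the group, not just the root: every worker the root
    # spawned (dataloaders, per-GPU workers, ...) receives it too.
    os.killpg(os.getpgid(proc.pid), sig)
```

One Stop press that called something like `stop_tree` would replace the per-subprocess carousel described above.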
thanks for any consideration you can give this! | feature-request,debug | low | Critical |
2,574,522,413 | rust | Tracking Issue for compiletest directive handling bugs and papercuts | This is a tracking issue to collect bugs and papercuts related to compiletest directive handling (incl. parsing/syntax, error reporting, validation and actual wiring up to how tests get run). This is explicitly not for feature requests like negative ui error annotations. | T-compiler,T-bootstrap,C-tracking-issue,A-compiletest,E-needs-design | low | Critical |
2,574,534,024 | neovim | directional text motions (like vim-ninja-feet) | ### Problem
Vim and Neovim have concise mappings for acting on text objects. Example: `vab` selects-and-includes a pair of `()`s, `di[` deletes the contents of `[]`s.
However there aren't many text motions available that incorporate the user's current cursor position. There are horizontal options like `dfB` to mean "delete from the cursor to the first letter B" but a user cannot as easily express "delete all text starting from this cursor position up to the end of this paragraph" as a mapping in a generic way.
### Expected behavior
As it happens, there's a plugin that does this, https://github.com/tommcdo/vim-ninja-feet
In short: It adds "operator-pending" mappings that consider the cursor's current position.
A typical text motion + object pair like `vap` now has 2 more options: `v[ap` and `v]ap`. `[` means "from the start of the text object to the current cursor" and `]` means "from the cursor to the end of the text object".
With built-in text objects or other plugins, examples like the following become easily possible:
`v[ic` means "select all lines that are comments from the current cursor position to the last comment"
`d]ip` means "delete from the current cursor line to the end of the paragraph"
`gc[ii` means "comment all of the lines from the cursor down to the last line of same indent"
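The semantics above are just interval arithmetic over the text object's span; a rough model in Python (hypothetical names, nothing to do with the plugin's implementation):

```python
def directional_span(obj_start: int, obj_end: int, cursor: int,
                     direction: str) -> tuple[int, int]:
    """Span an operator acts on, as (start, end) line offsets.

    '[' acts from the object's start up to the cursor; ']' acts from
    the cursor to the object's end; '' is the plain whole-object case.
    """
    if direction == "[":
        return (obj_start, cursor)
    if direction == "]":
        return (cursor, obj_end)
    return (obj_start, obj_end)

# A paragraph on lines 10-20 with the cursor on line 14:
# `dap` removes 10-20, `d]ap` removes 14-20, `d[ap` removes 10-14.
```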
Neovim has recently added more meaningful text operations such as [built-in commenting](https://github.com/neovim/neovim/pull/28176), [unimpaired-style mappings](https://github.com/neovim/neovim/commit/bb7604eddafb31cd38261a220243762ee013273a), and there's a GitHub issue for [adding `gs` to sort lines](https://github.com/neovim/neovim/issues/19354#issue-1303638463). Please consider adding directional text motions to improve manipulating text. What do you think? | enhancement,mappings,normal-mode | low | Minor |
2,574,577,760 | flutter | When called from a Button child inside Tooltip, `ensureTooltipVisible()` does not work with long press | ### Steps to reproduce
1. Create a `Tooltip` widget with the following properties:
- A `triggerMode` of `TooltipTriggerMode.tap`
- A `child` that contains a `Button` whose `onPressed` calls `TooltipState.ensureTooltipVisible()`
2. Tap on the button
### Expected results
The tooltip should appear when the button is tapped, since `TooltipState.ensureTooltipVisible()` was called
Incidentally, other children of the tooltip aside from the button should also display the tooltip since `triggerMode` is `TooltipTriggerMode.tap`
### Actual results
The tooltip does not appear when the button is tapped. Or sometimes it does appear but this is very sporadic.
Based on some rudimentary debugging, it appears the `onTapCancel` gesture is triggered when the button is tapped as well. The tooltip is effectively dismissed immediately after `ensureTooltipVisible()` is called
Note that tapping on other children of the tooltip still works and the tooltip is displayed correctly.
Also, note that this issue appeared somewhat recently: we were previously on Flutter 3.13 and did not see it, but it appeared after upgrading to Flutter 3.24.
Ideally there would be some logic so that `onTapCancel` does not trigger if the tooltip is showing due to `ensureTooltipVisible()`
I did not test other tooltip trigger modes such as `TooltipTriggerMode.longPress`, but it could also have this issue.
In the meantime, I found that adding a little delay before calling `ensureTooltipVisible()` works as a workaround
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Tooltip Issue',
theme: ThemeData(
colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
useMaterial3: true,
),
home: const HomePage(),
);
}
}
class HomePage extends StatefulWidget {
const HomePage({super.key});
@override
State<HomePage> createState() => _HomePageState();
}
class _HomePageState extends State<HomePage> {
final tooltipKey = GlobalKey<TooltipState>();
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(title: const Text('Tooltip Issue')),
body: Align(
alignment: Alignment.topCenter,
child: Tooltip(
key: tooltipKey,
textAlign: TextAlign.left,
richMessage: const TextSpan(
children: [
TextSpan(
text: 'Warning 1: ',
style: TextStyle(fontWeight: FontWeight.bold),
),
TextSpan(
text: 'Lorem ipsum dolor sit amet, \n'
'consectetur adipiscing elit.'),
TextSpan(text: '\n\n'),
TextSpan(
text: 'Warning 2: ',
style: TextStyle(fontWeight: FontWeight.bold),
),
TextSpan(text: 'Duis lacus mauris,\nefficitur non blandit a'),
],
),
triggerMode: TooltipTriggerMode.tap,
child: Row(
mainAxisSize: MainAxisSize.min,
children: [
const Text('Your device has 2 warnings.'),
IconButton(
onPressed: () =>
tooltipKey.currentState?.ensureTooltipVisible(),
icon: const Icon(Icons.warning),
),
],
),
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/504a8677-d6f7-4cfb-b272-6acc9ac5fb7e
</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.3, on macOS 15.0.1 24A348 darwin-arm64, locale en-US)
• Flutter version 3.24.3 on channel stable at /Users/KO16A46/Development/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (4 weeks ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.1)
• Android SDK at /Users/KO16a46/Library/Android/sdk
• Platform android-34, build-tools 33.0.1
• ANDROID_HOME = /Users/KO16a46/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.0.1)
• Xcode at /Applications/Xcode-15.0.1.app/Contents/Developer
• Build 15A507
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] IntelliJ IDEA Community Edition (version 2022.3.1)
• IntelliJ at /Applications/IntelliJ IDEA CE.app
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
[✓] VS Code (version 1.93.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.98.0
[✓] Connected device (6 available)
• sdk gphone64 arm64 (mobile) • emulator-5554 • android-arm64 • Android 12 (API 32) (emulator)
• Brian’s iPhone (mobile) • 00008110-001824D90C33801E • ios • iOS 18.0.1 22A3370
• iPhone 15 Pro Max (mobile) • 7E5F4EB4-12C7-4F0E-A56F-250B5682FF40 • ios •
com.apple.CoreSimulator.SimRuntime.iOS-17-0 (simulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.0.1 24A348 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.0.1 24A348 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 129.0.6668.90
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| c: new feature,framework,f: material design,f: gestures,has reproducible steps,P3,workaround available,team-design,triaged-design,found in release: 3.24,found in release: 3.27 | low | Critical |
2,574,581,844 | next.js | Turbopack does not import from exports correctly when *.ts and *.tsx are specified | ### Link to the code that reproduces this issue
https://github.com/RhysSullivan/bun-transpile-packages-turbopack
### To Reproduce
To validate it is broken with Turbo:
- bun dev:turbo
- navigate to http://localhost:3000/single , see that it compiles correctly
- navigate to http://localhost:3000/multiple , see that it fails to compile
The only difference between these packages is the exports section of package.json
To validate it works with Turbo disabled
- bun dev:no-turbo
- navigate to http://localhost:3000/single , see that it compiles correctly
- navigate to http://localhost:3000/multiple , see that it compiles correctly
### Current vs. Expected behavior
Turbopack should work with multiple file extensions in the exports field
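The repro's actual manifests aren't reproduced here, but the "multiple extensions" case presumably looks something like the following hedged illustration (the package name and paths are hypothetical). Node's `exports` field allows subpath patterns, and an array target provides fallbacks; the report is that only the single-target form resolves under Turbopack:

```json
{
  "name": "multiple-extensions-pkg",
  "exports": {
    ".": "./src/index.ts",
    "./*": ["./src/*.ts", "./src/*.tsx"]
  }
}
```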
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:19:22 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T8112
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 20.13.1
npm: 10.5.2
Yarn: N/A
pnpm: 9.1.2
Relevant Packages:
next: 15.0.0-canary.181 // Latest available version is detected (15.0.0-canary.181).
eslint-config-next: N/A
react: 19.0.0-rc-2d16326d-20240930
react-dom: 19.0.0-rc-2d16326d-20240930
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), next start (local)
### Additional context
_No response_ | bug,Turbopack,linear: turbopack | low | Critical |
2,574,590,959 | rust | Cannot build bare metal mach-o binaries |
I tried to compile a bare metal aarch64 Rust Mach-O binary with the following target json spec:
```json
{
"abi": "softfloat",
"arch": "aarch64",
"crt-objects-fallback": "false",
"data-layout": "e-m:o-i64:64-i128:128-n32:64-S128-Fn32",
"disable-redzone": true,
"features": "+v8a,+strict-align,-neon,-fp-armv8",
"is-builtin": false,
"linker": "ld64.lld",
"linker-flavor": "gnu-lld",
"llvm-target": "aarch64-unknown-none-macho",
"max-atomic-width": 128,
"metadata": {
"description": "Bare ARM64, softfloat",
"host_tools": false,
"std": false,
"tier": 2
},
"panic-strategy": "abort",
"relocation-model": "static",
"stack-probes": {
"kind": "inline"
},
"target-pointer-width": "64"
}
```
I expected to see this happen: No errors
Instead, this happened:
```
rustc-LLVM ERROR: Global variable '__rustc_debug_gdb_scripts_section__' has an invalid section specifier '.debug_gdb_scripts': mach-o section specifier requires a segment and section separated by a comma.
```
### Meta
`rustc --version --verbose`:
```
rustc 1.83.0-nightly (6f4ae0f34 2024-10-08)
binary: rustc
commit-hash: 6f4ae0f34503601e54680a137c1db0b81b56cc3d
commit-date: 2024-10-08
host: x86_64-unknown-linux-gnu
release: 1.83.0-nightly
LLVM version: 19.1.1
```
Unfortunately, RUST_BACKTRACE=1 doesn't change anything, so I cannot give a backtrace at the moment. I will look into building rustc if needed.
| requires-custom-config,S-needs-info | low | Critical |
2,574,635,339 | kubernetes | K8s Compatibility Versions (`--emulated-version`) feature should have integration test to validate feature gate compatibility | ### What would you like to be added?
Currently the K8s Compatibility Versions feature has some tests associated with API compatibility such as `TestEnableEmulationVersion` (https://github.com/kubernetes/kubernetes/blob/master/test/integration/apiserver/apiserver_test.go#L3149) but no tests associated with feature gate compatibility. This issue tracks adding feature gate compatibility integration test(s) to fill this testing gap.
### Why is this needed?
This is needed because the Compatibility Versions feature (`--emulated-version`) aims to allow k8s to emulate a prior version (up to n-3), but the feature gate compatibility promised by the feature is currently not tested.
/sig api-machinery
/assign
/triage accepted | sig/api-machinery,kind/feature,triage/accepted | low | Minor |
2,574,644,474 | kubernetes | PodSandbox cannot be created if the time in the server is changed by incident | ### What happened?
Recently the CMOS battery became faulty; we replaced it and rebooted the node. After rebooting, we found that almost all pods could not be created by kubelet + containerd.
The timeline is like this:
1. **_time: Sep 26 19:55:45 -07 2024_**. We have a pod created at **_2024-09-05T14:20:50.537022918-07:00_**
```
aff8aea34f502 4 weeks ago Ready x-generic22-kdw2k sdprod 0 (default)
```
2. Shutdown and change the CMOS battery, the time is set to **_Tue Nov 21 13:57 - 05:46_**.
3. After rebooting the node, kubelet started to work before the time was synced by chrony. It created a new sandbox 0b4c9da99f482 with timestamp _**2023-11-21T13:57:27.095899765-07:00**_
```
aff8aea34f502 4 weeks ago NotReady x-generic22-kdw2k sdprod 0 (default)
0b4c9da99f482 10 months ago Ready x-generic22-kdw2k sdprod 1 (default)
```
4. After the time was synced by chrony, kubelet re-created the sandbox because the ready sandbox was not the latest one. The latest one was aff8aea34f502 with attempt 0, so kubelet used attempt + 1 = 1 as the new attempt, but creating the sandbox failed.
```
aff8aea34f502 4 weeks ago NotReady x-generic22-kdw2k sdprod 0 (default)
0b4c9da99f482 10 months ago NotReady x-generic22-kdw2k sdprod 1 (default)
```
The error is like this.
```
kubelet[64266]: E1008 19:17:18.914095 64266 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to "CreatePodSandbox" for "x-generic22-kdw2k_sdprod(e7a70473-71a5-40de-a60e-5da5e756f7ff)\" with CreatePodSandboxError: "Failed to create sandbox for pod "x-generic22-kdw2k_sdprode7a70473-71a5-40de-a60e-5da5e756f7ff)": rpc error: code = Unknown desc = failed to reserve sandbox name "x-generic22-kdw2k_sdprod_e7a70473-71a5-40de-a60e-5da5e756f7ff_1": name "x-generic22-kdw2k_sdprod_e7a70473-71a5-40de-a60e-5da5e756f7ff_1" is reserved for\"0b4c9da99f48214325378cc779b799d8f2becae18e9ea9d0f3fb72c42d8fbd98"" pod="sdprod/ x-generic22-kdw2k" podUID="e7a70473-71a5-40de-a60e-5da5e756f7ff"
```
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/kuberuntime/util/util.go#L42
```
func PodSandboxChanged(pod *v1.Pod, podStatus *kubecontainer.PodStatus) (bool, uint32, string) {
...
// Needs to create a new sandbox when readySandboxCount > 1 or the ready sandbox is not the latest one.
sandboxStatus := podStatus.SandboxStatuses[0]
if readySandboxCount > 1 {
klog.V(2).InfoS("Multiple sandboxes are ready for Pod. Need to reconcile them", "pod", klog.KObj(pod))
return true, sandboxStatus.Metadata.Attempt + 1, sandboxStatus.Id
}
if sandboxStatus.State != runtimeapi.PodSandboxState_SANDBOX_READY {
klog.V(2).InfoS("No ready sandbox for pod can be found. Need to start a new one", "pod", klog.KObj(pod))
return true, sandboxStatus.Metadata.Attempt + 1, sandboxStatus.Id
}
```
When we calculate the attempt, would it be better to take the maximum value of all the sandboxStatus.Metadata.Attempt values?
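The suggested fix is easy to state: derive the next attempt from the maximum across all recorded sandboxes instead of from `SandboxStatuses[0]` alone. A sketch of that selection logic in Python (the kubelet code itself is Go; the names here are illustrative only):

```python
def next_attempt(attempts: list[int]) -> int:
    # Take the max over every sandbox ever recorded for the pod, so a
    # sandbox created under a wrong clock can't make the name collide.
    return max(attempts) + 1

# With sandboxes at attempts 0 and 1 already present (as in the
# timeline above), the next sandbox gets attempt 2, so the reserved
# name ending in "_1" is never reused.
```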
### What did you expect to happen?
Use attempt =2 to create the new sandbox and the pod can be up.
### How can we reproduce it (as minimally and precisely as possible)?
1. disable chrony
2. change the date to a few days ago. It must be before the sandbox creation timestamp.
3. reboot
4. kubelet will be up
5. new sandbox will be created
6. enable chrony
### Anything else we need to know?
_No response_
### Kubernetes version
```
1.18.12
```
### Cloud provider
<details>
none
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/node,lifecycle/stale,triage/needs-information,needs-triage | low | Critical |
2,574,685,880 | PowerToys | NewPlus supports using variables such as yyyyMMdd in template folder names or file names | ### Description of the new feature / enhancement
NewPlus supports using template strings such as yyyyMMdd (representing the current date) in template folder names or file names.
For example, in “%LocalAppData%\Microsoft\PowerToys\NewPlus\Templates” there are the following folders:
> "${yyyyMMdd}: Title"
> "${ParentFolderName}-PartA"
When I used the first template on 2024-10-10, it generated:
>"20241010: Title"
And when I used the second template inside the "Work" folder, it generated:
>"Work-PartA"
In addition, it should be possible to use specific markers to indicate the part of the file name that needs to be modified after creating a new file. Usually, after creating a new file, Explorer switches to the rename state and selects the entire file name in the rename text box, as shown below.
<img width="107" alt="Snipaste_2024-10-09_10-33-08" src="https://github.com/user-attachments/assets/537eaba2-7f9a-4b75-a864-aedb726da6bd">
But most of the time, only specific parts need to be modified. For example, after creating a new folder from the template variable above to get "20241010: Title", the user only needs to change the last word "Title" to the name of a specific item.
<img width="110" alt="Snipaste_2024-10-09_10-33-41" src="https://github.com/user-attachments/assets/0eded4bc-678f-4fa8-8756-67f4b16b4537">
So I want to be able to use a marker like # to designate the part of the file name that actually needs to be modified, changing the cursor/selection behavior during the automatic rename after the file is created.
The resulting template folder name may look like this:
> "${yyyyMMdd}: #Title#"
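A rough model of the proposed expansion and `#...#` selection markers, in Python; the variable names mirror the request and are not an existing PowerToys API:

```python
import datetime

def expand_template(name: str, parent: str,
                    today: datetime.date) -> tuple[str, tuple[int, int]]:
    """Expand template variables and locate the part to preselect.

    Returns the final name and the half-open (start, end) character
    range the rename box should select; with no #...# markers the
    whole name is selected, matching today's behavior.
    """
    name = name.replace("${yyyyMMdd}", today.strftime("%Y%m%d"))
    name = name.replace("${ParentFolderName}", parent)
    if name.count("#") == 2:
        start = name.index("#")
        end = name.index("#", start + 1) - 1  # account for removed '#'
        name = name.replace("#", "")
        return name, (start, end)
    return name, (0, len(name))

# "${yyyyMMdd}: #Title#" used on 2024-10-10 in folder "Work" becomes
# "20241010: Title" with only name[10:15] == "Title" selected.
```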
### Scenario when this would be used?
This approach would expand the flexibility of the NewPlus module and eliminate a large number of renaming operations. It does not introduce overly complex behavior; it only requires changing the file name of the template file.
### Supporting information
no | Needs-Triage | low | Major |
2,574,687,180 | PowerToys | Preset of change typing keyboard language | ### Description of the new feature / enhancement
I have studied multiple languages and have had to install keyboard layouts such as Japanese, Chinese, and French. The problem is that when I want to switch back to a language like English, I need to cycle through all the installed languages, which is very frustrating. So I think we could have language presets: choose the languages you would like in, say, preset 1, and likewise choose which language is the default.
### Scenario when this would be used?
1. Frequent language switching: If you often switch between languages while typing, having a preset can save you time and reduce errors.
2. Specific tasks: For certain tasks, you might need to use a particular language. For example, if you're writing a document in French, you can set French as your default keyboard language.
3. Multiple languages in one document: If you're writing a document that includes text in multiple languages, a preset can help you quickly switch between them.
4. Learning a new language: A preset can help you practice typing in a new language.
5. Accessibility: For users with disabilities, a preset keyboard language can make typing easier by eliminating the need to manually switch languages.to switch languages manually
### Supporting information
_No response_ | Needs-Triage | low | Critical |
2,574,713,303 | electron | [Feature Request]: Extend the functionality of the `getAllWindows` function. | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success.
### Problem Description
When developing multi-window applications, it is not easy to obtain a specific window. Although windows have an `id` property, this `id` is not fixed: it is generated based on the order in which windows are opened, and the `id` is not released when a window is closed. In a modular development scenario, exporting and importing windows leads to messy relationships (the next time you need to import a specific window, you have to search for the module file, and you may not remember where it is!). Without exporting windows, the only methods available are `getFocusedWindow` and `getAllWindows`, both of which have limitations.
### Proposed Solution
Extend the `getAllWindows` method:
Add a `name` property to the `option` parameter of `new BrowserWindow(option)`. This `name` acts similarly to the `class` attribute in HTML tags, allowing it to be retrieved by the `getAllWindows` method. For example:
```javascript
const windowA = new BrowserWindow({ width: 800, height: 600, name: 'a' });
const windowB = new BrowserWindow({ width: 800, height: 600, name: 'b' });
const windowC = new BrowserWindow({ width: 200, height: 200, name: 'b' });
BrowserWindow.getAllWindows('a'); // => [ windowA ]
BrowserWindow.getAllWindows({ width: 800 }); // => [ windowA, windowB ]
BrowserWindow.getAllWindows({ name: 'b', width: 200 }); // => [ windowC ]
BrowerWindow.getAllWIndows({
has: { name: 'b' },
exclude: { width: 200 }
}) // => [ windowB ]
```
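The matching rules being proposed here (plain name string, property object, `has`/`exclude` combination) can be modeled generically; a Python sketch of that filtering, unrelated to Electron's real API:

```python
def matches(props: dict, query) -> bool:
    # A string matches on name; a dict matches when every listed
    # property is equal; has/exclude combine the two.
    if isinstance(query, str):
        return props.get("name") == query
    if "has" in query or "exclude" in query:
        has, exclude = query.get("has", {}), query.get("exclude")
        return matches(props, has) and not (exclude and matches(props, exclude))
    return all(props.get(k) == v for k, v in query.items())

def get_all_windows(windows, query):
    # Filter a list of window-property dicts with the rules above.
    return [w for w in windows if matches(w, query)]
```

Applied to the three windows in the example above, this reproduces the expected results for all four queries.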
Add an `only` property to the `option`. When the `name` property is set and `only` is `true`, a new window will not be created; instead, an existing window with the same `name` value will be returned and shown.
### Alternatives Considered
Add a `createWindow` method to `BrowserWindow`:
```javascript
BrowserWindow.createWindow(option, callback)
```
- `option` is the same as the `option` used in `new BrowserWindow(option)`.
- `callback` receives the `window` object, which can be used to add initialization behaviors to the window.
This method is used to create window instances without immediately generating the window. The window needs to be opened by calling the `open` method on the instance. This allows for centralized management of all windows within a module.
Example:
```javascript
// windows.js
const LoginWindow = BrowserWindow.createWindow({ width: 300, height: 200, show: false }, window => {
  window.loadFile('index.html');
  window.once('ready-to-show', () => { window.show(); });
});
const MainWindow = BrowserWindow.createWindow(foo, bar);
const SettingsWindow = BrowserWindow.createWindow(foo, bar);
export { LoginWindow, MainWindow, SettingsWindow };

// ---
// main.js
import { LoginWindow } from './windows.js';
app.whenReady().then(() => {
  LoginWindow.open(); // Equivalent to the following code ↓
});

// Equivalent code
const loginWindow = new BrowserWindow({ width: 300, height: 200, show: false });
loginWindow.loadFile('index.html');
loginWindow.once('ready-to-show', () => { loginWindow.show(); });
```
If there is a `createWindow` method, a `setDefaultOption` method should also be added. The `option` parameter of `createWindow` would override the default options, resulting in the final options.
### Additional Information
This is a method I wrote to implement the "Alternatives Considered" approach, but through `CreateWindow.option` only the options passed at construction can be obtained, not all of the window's properties. It would be great if `option` supported retrieving all window options.
```javascript
const { BrowserWindow } = require('electron')
const { join } = require('path');
const { format } = require('url')

class CreateWindow {
  #callback

  constructor(option, callback = () => {}) {
    this.option = option
    this.window = null
    this.#callback = callback
  }

  open() {
    if (this.option?.title) {
      for (let window of BrowserWindow.getAllWindows()) {
        if (window.getTitle() === this.option.title) {
          window.show()
          return window
        }
      }
    }
    const optionDefault = {
      width: 800,
      height: 600,
      show: false,
      webPreferences: {
        preload: join(__dirname, 'preload.js'),
      },
      parent: this.option?.modal ? BrowserWindow.getAllWindows()[0] : null,
      autoHideMenuBar: MAIN_WINDOW_VITE_DEV_SERVER_URL ? !this.option?.menu : false,
    }
    // Create the window
    const window = new BrowserWindow({
      ...optionDefault,
      ...this.option,
    })
    if (this.option?.menu) window.setMenu(this.option.menu)
    window.once('ready-to-show', () => {
      if (this.option?.title) window.setTitle(this.option.title)
      this.show()
    })
    // and load the index.html of the app.
    if (MAIN_WINDOW_VITE_DEV_SERVER_URL) {
      const _url = new URL(MAIN_WINDOW_VITE_DEV_SERVER_URL)
      _url.hash = this.option?.path ? this.option.path : ''
      console.log(_url.toString())
      window.loadURL(_url.toString())
      // Open the DevTools.
      window.webContents.openDevTools();
    } else {
      window.loadFile(
        join(__dirname, `../renderer/${MAIN_WINDOW_VITE_NAME}/index.html`),
        { hash: this.option?.path ? format(this.option.path) : '' }
      )
    }
    this.#callback(window)
    this.window = window
    return window
  }

  close() { this.window?.close() }
  destroy() { this.window?.destroy() }
  hide() { this.window?.hide() }
  show() { this.window?.show() }
}

export { CreateWindow }
``` | stale | low | Minor |
2,574,715,039 | go | x/tools/gopls: add analyzer to simplify slice to array conversions | ### Go version
go1.23.1
### Output of `go env` in your module/workspace:
```shell
n/a
```
### What did you do?
Run `gofmt -s`
### What did you see happen?
Nothing.
### What did you expect to see?
Rewrite of `*(*logid.PublicID)(buf)` to `logid.PublicID(buf)`, where `logid.PublicID` is a `[...]byte` array type and `buf` is a `[]byte`.
Go 1.17 introduced conversion of slices to array pointers, which resulted in code of the `*(*A)(s)` pattern. Go 1.20 introduced direct conversion of a slice to an array, thus making the `*(*A)(s)` pattern redundant. | NeedsDecision,gopls,Tools | low | Major |
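The requested simplification can be sketched in a few lines of Go; the `PublicID` type and its 4-byte length below are stand-ins for the reporter's `logid.PublicID`, not the actual code:

```go
package main

import "fmt"

// PublicID stands in for a fixed-size array type such as logid.PublicID;
// the 4-byte length is an assumption for the sketch.
type PublicID [4]byte

func main() {
	buf := []byte{0xde, 0xad, 0xbe, 0xef}

	// Go 1.17 pattern: convert the slice to an array pointer, then dereference.
	viaPointer := *(*PublicID)(buf)

	// Go 1.20 pattern: convert the slice to the array type directly.
	direct := PublicID(buf)

	// Both forms copy the same four bytes; the rewrite is purely syntactic
	// (and both panic at runtime if len(buf) is shorter than the array).
	fmt.Println(viaPointer == direct) // true
}
```

Since the two forms are equivalent on Go ≥ 1.20, an analyzer could safely suggest the shorter one.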
2,574,730,709 | vscode | problems with Undo/Redo in jsx/tsx file after pasting tag name |
Type: <b>Bug</b>
in any jsx/tsx file, make a component function (that returns `React.JSX.Element`)
VS Code version: Code 1.94.1 (e10f2369d0d9614a452462f2e01cdc4aa9486296, 2024-10-05T05:44:32.189Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|13th Gen Intel(R) Core(TM) i9-13900HX (32 x 2419)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.75GB (16.61GB free)|
|Process Argv|--crash-reporter-id 7e614ff1-e8e4-4ae8-8a5a-1e013377f568|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (54)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-css-formatter|aes|1.0.2
atlascode|atl|3.0.10
vscode-django|bat|1.15.0
vscode-tailwindcss|bra|0.12.11
webvalidator|Cel|1.3.1
githistory|don|0.6.20
pug-formatter|duc|0.6.0
gitlens|eam|15.6.0
vscode-html-css|ecm|2.0.10
EditorConfig|Edi|0.16.4
prettier-vscode|esb|11.0.0
figma-vscode-extension|fig|0.3.5
auto-rename-tag|for|0.1.10
code-python-isort|fre|0.0.3
copilot|Git|1.236.0
copilot-chat|Git|0.21.1
live-sass|gle|6.1.2
vscode-test-explorer|hbe|2.22.1
vscode-htmlhint|HTM|1.0.5
rest-client|hum|0.25.1
svg|joc|1.5.4
vscode-python-test-adapter|lit|0.8.2
code-beautifier|mic|2.3.3
vscode-scss|mrm|0.10.0
vscode-docker|ms-|1.29.3
black-formatter|ms-|2024.2.0
debugpy|ms-|2024.10.0
isort|ms-|2023.10.1
python|ms-|2024.16.0
vscode-pylance|ms-|2024.10.1
jupyter|ms-|2024.9.1
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.19
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-containers|ms-|0.388.0
remote-wsl|ms-|0.88.4
test-adapter-converter|ms-|0.2.0
html-validator|Nar|2.0.2
style-sorter|Nat|0.1.3
gulptasks|nic|1.3.1
docker-compose|p1c|0.5.1
vscode-yaml|red|1.15.0
LiveServer|rit|5.7.9
jinjahtml|sam|0.20.0
vscode-scss-formatter|sib|3.0.0
vscode-stylelint|sty|1.4.0
vscode-w3cvalidation|Umo|2.9.1
vscode-css-variables|vun|2.7.1
gitblame|wad|11.1.0
jinja|who|0.0.8
eno|Wsc|2.3.53
JavaScriptSnippets|xab|1.8.0
zeplin|Zep|2.2.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
accentitlementsc:30995553
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
f3je6385:31013174
a69g1124:31058053
dvdeprecation:31068756
dwnewjupyter:31046869
newcmakeconfigv2:31071590
impr_priority:31102340
nativerepl1:31139838
refactort:31108082
pythonrstrctxt:31112756
flightc:31134773
wkspc-onlycs-t:31132770
wkspc-ranged-t:31151552
cf971741:31144450
autoexpandse:31146404
iacca2:31150323
showchatpanel:31153267
cc771715:31146322
```
</details>
<!-- generated by issue reporter --> | bug,help wanted,typescript,javascript | low | Critical |
2,574,740,959 | godot | Android crash in Production using OpenGL | ### Tested versions
4.3.stable
### System information
Android - Godot 4.3.stable - GLES3
### Issue description
Since I published my game, I have had several crashes in production that I am not able to replicate. The issue happens on multiple devices; I could not find a pattern in memory, chip, or GPU.
The issue seems to be the same as this closed issue: https://github.com/godotengine/godot/issues/90884
The error detail is below:
Crash 1:
```
*** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
pid: 0, tid: 30466 >>> com.carameldogstudios.bbsurvivor <<<
backtrace:
#00 pc 0x0000000003335e3c /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (ZSTD_CCtxParams_registerSequenceProducer) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#01 pc 0x000000000335897c /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (ZSTD_CCtxParams_registerSequenceProducer) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#02 pc 0x00000000010523f0 /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (Java_org_godotengine_godot_plugin_GodotPlugin_nativeEmitSignal) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#03 pc 0x00000000035d4248 /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (ZSTD_CCtxParams_registerSequenceProducer) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#04 pc 0x0000000003335e58 /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (ZSTD_CCtxParams_registerSequenceProducer) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#05 pc 0x0000000003339958 /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (ZSTD_CCtxParams_registerSequenceProducer) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#06 pc 0x0000000003335e8c /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (ZSTD_CCtxParams_registerSequenceProducer) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#07 pc 0x000000000335897c /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (ZSTD_CCtxParams_registerSequenceProducer) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#08 pc 0x00000000010523f0 /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (Java_org_godotengine_godot_plugin_GodotPlugin_nativeEmitSignal) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#09 pc 0x00000000035d4248 /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (ZSTD_CCtxParams_registerSequenceProducer) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#10 pc 0x0000000003335e58 /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (ZSTD_CCtxParams_registerSequenceProducer) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#11 pc 0x0000000003339958 /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (ZSTD_CCtxParams_registerSequenceProducer) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#12 pc 0x0000000003335e8c /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (ZSTD_CCtxParams_registerSequenceProducer) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#13 pc 0x00000000035d6fbc /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (ZSTD_CCtxParams_registerSequenceProducer) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#14 pc 0x0000000001a6e500 /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (WebPBlendAlpha) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#15 pc 0x0000000001a6e184 /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (WebPBlendAlpha) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#16 pc 0x0000000001a72bd4 /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (WebPBlendAlpha) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#17 pc 0x0000000001a729cc /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (WebPBlendAlpha) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#18 pc 0x0000000003335e8c /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (ZSTD_CCtxParams_registerSequenceProducer) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#19 pc 0x00000000035cda94 /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (ZSTD_CCtxParams_registerSequenceProducer) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#20 pc 0x00000000035cddb8 /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (ZSTD_CCtxParams_registerSequenceProducer) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#21 pc 0x0000000001ac0610 /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (WebPBlendAlpha) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#22 pc 0x0000000000e4655c /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (Java_org_godotengine_godot_plugin_GodotPlugin_nativeEmitSignal) (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#23 pc 0x0000000000dfc6a0 /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (BuildId: a45484a1a8923d4ef90f7b94de1cce2e)
#24 pc 0x0000000000e11440 /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/split_config.arm64_v8a.apk!libgodot_android.so (Java_org_godotengine_godot_GodotLib_step+344)
#25 pc 0x0000000000306bf8 /data/misc/apexdata/com.android.art/dalvik-cache/arm64/boot.oat (art_jni_trampoline+104)
#26 pc 0x000000000077e708 /apex/com.android.art/lib64/libart.so (nterp_helper+152)
#27 pc 0x000000000033cfc4 /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/base.apk (org.godotengine.godot.gl.GodotRenderer.onDrawFrame+20)
#28 pc 0x00000000007803e4 /apex/com.android.art/lib64/libart.so (nterp_helper+7540)
#29 pc 0x000000000033c29a /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/base.apk (org.godotengine.godot.gl.GLSurfaceView$GLThread.guardedRun+946)
#30 pc 0x000000000077f5c4 /apex/com.android.art/lib64/libart.so (nterp_helper+3924)
#31 pc 0x000000000033c76c /data/app/~~7yp9IGVtNq17ZAbyhI2xtQ==/com.carameldogstudios.bbsurvivor-jWaMsSte6UV9rCNVB65bkw==/base.apk (org.godotengine.godot.gl.GLSurfaceView$GLThread.run+44)
#32 pc 0x0000000000362774 /apex/com.android.art/lib64/libart.so (art_quick_invoke_stub+612)
#33 pc 0x000000000034def0 /apex/com.android.art/lib64/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+132)
#34 pc 0x0000000000943c28 /apex/com.android.art/lib64/libart.so (art::detail::ShortyTraits<(char)86>::Type art::ArtMethod::InvokeInstance<(char)86>(art::Thread*, art::ObjPtr<art::mirror::Object>, art::detail::ShortyTraits<>::Type...)+60)
#35 pc 0x000000000063ea1c /apex/com.android.art/lib64/libart.so (art::Thread::CreateCallback(void*)+1344)
#36 pc 0x000000000063e4cc /apex/com.android.art/lib64/libart.so (art::Thread::CreateCallbackWithUffdGc(void*)+8)
#37 pc 0x00000000000c163c /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+204)
#38 pc 0x0000000000054930 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+64)
```
Crash 2
```
*** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
pid: 0, tid: 7017 >>> com.carameldogstudios.bbsurvivor <<<
backtrace:
#00 pc 0x0000000003335e3c /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#01 pc 0x000000000335897c /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#02 pc 0x00000000010523f0 /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#03 pc 0x00000000035d4248 /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#04 pc 0x0000000003335e58 /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#05 pc 0x0000000003339958 /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#06 pc 0x0000000003335e8c /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#07 pc 0x000000000335897c /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#08 pc 0x00000000010523f0 /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#09 pc 0x00000000035d4248 /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#10 pc 0x0000000003335e58 /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#11 pc 0x0000000003339958 /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#12 pc 0x0000000003335e8c /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#13 pc 0x00000000035d6fbc /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#14 pc 0x0000000001a6e500 /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#15 pc 0x0000000001a6e184 /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#16 pc 0x0000000001a72bd4 /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#17 pc 0x0000000001a729cc /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#18 pc 0x0000000003335e8c /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#19 pc 0x00000000035cda94 /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#20 pc 0x00000000035cddb8 /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#21 pc 0x0000000001ac0610 /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#22 pc 0x0000000000e4655c /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#23 pc 0x0000000000dfc6a0 /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so
#24 pc 0x0000000000e11440 /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/split_config.arm64_v8a.apk!libgodot_android.so (Java_org_godotengine_godot_GodotLib_step+344)
#25 pc 0x000000000038abf8 /data/misc/apexdata/com.android.art/dalvik-cache/arm64/boot.oat (art_jni_trampoline+104)
#26 pc 0x000000000077e708 /apex/com.android.art/lib64/libart.so (nterp_helper+152)
#27 pc 0x000000000033cfc4 /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/base.apk (org.godotengine.godot.gl.GodotRenderer.onDrawFrame+20)
#28 pc 0x00000000007803e4 /apex/com.android.art/lib64/libart.so (nterp_helper+7540)
#29 pc 0x000000000033c29a /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/base.apk (org.godotengine.godot.gl.GLSurfaceView$GLThread.guardedRun+946)
#30 pc 0x000000000077f5c4 /apex/com.android.art/lib64/libart.so (nterp_helper+3924)
#31 pc 0x000000000033c76c /data/app/~~SstVqPt66pd5YQuX02mIpQ==/com.carameldogstudios.bbsurvivor-3Aow31fyE23DmAffJX7fZQ==/base.apk (org.godotengine.godot.gl.GLSurfaceView$GLThread.run+44)
#32 pc 0x0000000000362774 /apex/com.android.art/lib64/libart.so (art_quick_invoke_stub+612)
#33 pc 0x000000000034def0 /apex/com.android.art/lib64/libart.so (art::ArtMethod::Invoke(art::Thread*, unsigned int*, unsigned int, art::JValue*, char const*)+132)
#34 pc 0x0000000000943c28 /apex/com.android.art/lib64/libart.so (art::detail::ShortyTraits<(char)86>::Type art::ArtMethod::InvokeInstance<(char)86>(art::Thread*, art::ObjPtr<art::mirror::Object>, art::detail::ShortyTraits<>::Type...)+60)
#35 pc 0x000000000063ea1c /apex/com.android.art/lib64/libart.so (art::Thread::CreateCallback(void*)+1344)
#36 pc 0x000000000063e4cc /apex/com.android.art/lib64/libart.so (art::Thread::CreateCallbackWithUffdGc(void*)+8)
#37 pc 0x0000000000104fc4 /apex/com.android.runtime/lib64/bionic/libc.so (__pthread_start(void*)+208)
#38 pc 0x000000000009e764 /apex/com.android.runtime/lib64/bionic/libc.so (__start_thread+68)
```
### Steps to reproduce
I do not know how to reproduce it, but it is affecting over 6% of my user base at this point. If possible, I would like to ask for any clues about what features could cause it, to give me some idea of what I can try in order to replicate the issue.
### Minimal reproduction project (MRP)
Unfortunately I cannot provide an MRP, but any clue on how I can try to find the root cause in my project would be really helpful. | platform:android,needs testing,crash | low | Critical |
2,574,742,128 | godot | ScrollBar no highlight in 3D | ### Tested versions
Reproducible in:
- v4.4.dev.custom_build [4c4e67334]
- v4.3.stable.official [77dcf97d8]
Works in:
- v4.2.stable.official [46dc27791]
### System information
Godot v4.3.stable - Ubuntu 24.04.1 LTS 24.04 - Wayland - Vulkan (Forward+) - dedicated AMD Radeon RX 6600 (RADV NAVI23) - 12th Gen Intel(R) Core(TM) i5-12400F (12 Threads)
### Issue description
In the gui_in_3d demo, the mouse-over highlight works on buttons and sliders, but if you add a scroll bar, it does not show the mouse-over highlight for the scroll bar.
https://github.com/godotengine/godot-demo-projects/blob/master/viewport/gui_in_3d/gui_3d.gd
- It works when the UI is extracted to a 2D scene and run on its own: the scroll bar shows the mouse-over highlight. But in 3D it does not work.
- It works fine for buttons and sliders.
- It sometimes shows the highlight if you click around on the bar to move the grabber a bit, and then it correctly un-highlights when the mouse moves off the bar… but it should show on mouse-over like it does for the other controls.
Thanks.
### Steps to reproduce
- make a control scene with a scroll bar, note highlight works
- put it in 3d, note it not
### Minimal reproduction project (MRP)
https://github.com/rakkarage/test-grabber | bug,confirmed,topic:input,topic:gui | low | Minor |
2,574,762,579 | ant-design | [BUG] Timeline component custom Dot issue | ### Reproduction link
[https://stackblitz.com/edit/react-zcpcqe?file=demo.tsx](https://stackblitz.com/edit/react-zcpcqe?file=demo.tsx)
### Steps to reproduce
As in the code example above: if the outer element does not have a white background, the white background of the custom dot looks very jarring.

### What is expected?
Expected: no white background, and the custom node connects with the timeline line.
### What is actually happening?
Actual result: a white background appears.
| Environment | Info |
| --- | --- |
| antd | 5.21.2 |
| React | 18.2.0 |
| System | Mac OS |
| Browser | Chrome 129.0.6668.58 |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive,improvement | low | Critical |
2,574,773,322 | next.js | [PPR] Dynamic route segment params are double encoded | ### Link to the code that reproduces this issue
https://github.com/marmalade-labs/ppr-double-encode
### To Reproduce
1. Navigate to https://ppr-double-encode.vercel.app/hello%20world
(note: this bug _only_ shows when deployed to Vercel, not when running locally).
### Current vs. Expected behavior
I expected `hello%20world` to be displayed but instead we see `hello%2520world`. When running locally or with PPR disabled, it works as expected.
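The observed value is exactly what a second round of percent-encoding produces. The snippet below only demonstrates the encoding arithmetic in plain JavaScript; it is not Next.js internals:

```javascript
const param = 'hello world';

const encodedOnce = encodeURIComponent(param);
console.log(encodedOnce); // 'hello%20world' — what the page should display

// Encoding the already-encoded value escapes the '%' itself as '%25',
// producing exactly the observed output.
const encodedTwice = encodeURIComponent(encodedOnce);
console.log(encodedTwice); // 'hello%2520world'

// A single decode recovers the expected value.
console.log(decodeURIComponent(encodedTwice) === encodedOnce); // true
```

This is consistent with the segment being percent-encoded once more somewhere in the PPR prerender path when deployed.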
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Fri, 04 Oct 2024 21:51:11 +0000
Available memory (MB): 62093
Available CPU cores: 16
Binaries:
Node: 22.9.0
npm: 10.8.3
Yarn: 1.22.22
pnpm: 9.12.1
Relevant Packages:
next: 15.0.0-canary.181 // Latest available version is detected (15.0.0-canary.181).
eslint-config-next: N/A
react: 19.0.0-rc-2d16326d-20240930
react-dom: 19.0.0-rc-2d16326d-20240930
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Partial Prerendering (PPR)
### Which stage(s) are affected? (Select all that apply)
Vercel (Deployed)
### Additional context
@wyattjoh, @ztanner - thought you might find this one interesting! | bug,Partial Prerendering (PPR) | low | Critical |
2,574,805,477 | rust | Tracking Issue for gracefully handling broken pipes in the compiler | ### Context
std's `print!` and `println!` by default panic on a broken pipe if `-Zon-broken-pipe=kill` is not set when building rustc itself. If such a panic occurs and is not otherwise caught, it will manifest as an ICE. In bootstrap we build rustc with `-Zon-broken-pipe=kill`, which terminates rustc to paper over issues like `rustc --print=sysroot | false` ICEing from the I/O panic on a broken pipe, but this is not always the desirable behavior. As Nora said:
> `rustc --print=target-list | head -n5` should definitely work as expected and print only 5 targets and exit successfully, ICEing or erroring are not acceptable imo
> so kill should still be passed
> and `rustc --print=target-list >/dev/full` emitting an error instead of crashing would be neat too
See https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Internal.20lint.20for.20raw.20.60print!.60.20and.20.60println!.60.3F for a timeline of how we ended up with the `-Zon-broken-pipe=kill` paper.
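For illustration, one possible shape of a `safe_println` helper — the names and the exact exit-status policy here are assumptions for the sketch, not decided rustc behavior:

```rust
use std::io::{self, ErrorKind, Write};

/// Exit status to use after a failed write to stdout. A broken pipe
/// (e.g. `rustc --print=target-list | head -n5`) is treated as success;
/// any other I/O error (e.g. `>/dev/full`) is a real failure.
/// Policy is an assumption sketched from the issue.
fn status_for_write_error(kind: ErrorKind) -> i32 {
    if kind == ErrorKind::BrokenPipe { 0 } else { 1 }
}

/// `println!` replacement that handles I/O errors instead of panicking.
fn safe_println(args: std::fmt::Arguments<'_>) {
    let mut out = io::stdout().lock();
    if let Err(e) = out.write_fmt(args).and_then(|()| out.write_all(b"\n")) {
        if e.kind() != ErrorKind::BrokenPipe {
            eprintln!("error: failed to write to stdout: {e}");
        }
        std::process::exit(status_for_write_error(e.kind()));
    }
}

fn main() {
    safe_println(format_args!("aarch64-apple-darwin"));
    safe_println(format_args!("x86_64-unknown-linux-gnu"));
}
```

With something along these lines, `rustc --print=target-list | head -n5` would exit successfully after five lines instead of ICEing, while `rustc --print=target-list >/dev/full` would emit an error.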
### Prior Art
Cargo denies `print{,ln}!` usages via `clippy::print_std{err,out}`:
See:
- https://github.com/rust-lang/cargo/blob/15fbd2f607d4defc87053b8b76bf5038f2483cf4/Cargo.toml#L125-L126
- https://github.com/rust-lang/cargo/blob/15fbd2f607d4defc87053b8b76bf5038f2483cf4/src/doc/contrib/src/implementation/console.md?plain=1#L3-L5
### Steps
- [ ] Survey current usages of `print{,ln}!` macro usages in rustc.
- [ ] Classify desired behavior if we do handle I/O errors if we use some `safe_print{,ln}` alternative instead of panicking like `print{,ln}` (some might want to exit with success, some might want to error, but we probably never want to ICE).
- [ ] Open an MCP to propose migrating `print{,ln}!` macro usages to properly handle errors and adding an internal lint to deny (in CI, but allow locally to still allow printf debugging) raw usages of `print{,ln}!`.
- [ ] Fix existing `print{,ln}!` macro usages to properly handle errors.
- [ ] Drop `-Zon-broken-pipe=kill` when building rustc.
- [ ] Update `tests/run-make/broken-pipe-no-ice/rmake.rs` regression test.
- [ ] Implement the internal lint.
- [ ] Add documentation about `print{,ln}!` macro usages in the dev-guide. | E-hard,T-compiler,C-bug,C-tracking-issue | low | Critical |
2,574,819,989 | godot | OSX - Godot crashing when quitting or reloading project | ### Tested versions
4.3.stable
### System information
Apple M3 Pro - macOS Sequoia 15.0 (24A335)
### Issue description
Hello everyone
I'm having this quite annoying but not critical bug on my dev platform for a while
When quitting or reloading Godot, it crashes with the following errors:
<img width="1112" alt="Capture d’écran 2024-10-09 à 07 14 53" src="https://github.com/user-attachments/assets/e5bcf717-6dc3-4f39-ba92-2997422684bd">
<img width="1112" alt="Capture d’écran 2024-10-09 à 07 15 00" src="https://github.com/user-attachments/assets/63404c5c-d24d-420c-bd11-7c4fd31ffe22">
<img width="1112" alt="Capture d’écran 2024-10-09 à 07 36 31" src="https://github.com/user-attachments/assets/72c9b3e7-63ba-4f12-b393-d6fbf6e926e9">
<img width="1112" alt="Capture d’écran 2024-10-09 à 07 36 36" src="https://github.com/user-attachments/assets/882a70bb-847e-46e6-a0e6-2163e36a1012">
### Steps to reproduce
It's quite difficult to give you steps to reproduce as it may be linked to my project.
### Minimal reproduction project (MRP)
I have a two-year project going on :) I'm not able to give you an MRP, but I can do some tests or give you system logs if needed. | bug,platform:macos,topic:editor,crash | low | Critical |
2,574,830,131 | PowerToys | Activate layout switch on the monitor with the focused window instead of the one containing the mouse cursor in multi-monitor setups | ### Description of the new feature / enhancement
Please add an option to activate the layout switch on the monitor with the active (focused) window instead of the monitor the mouse cursor is in, in multi-monitor setups.
### Scenario when this would be used?
Users wouldn't have to move the mouse to the monitor on which they want to activate the fast layout-switch shortcut.
### Supporting information
Similar to the launch option, which is already integrated. | Needs-Triage | low | Minor |
2,574,844,236 | opencv | Support for OpenCL-OpenGL interoperability on Apple | ### Describe the feature and motivation
I would like to have support for the `cl_APPLE_gl_sharing` extension, which enables sharing data between OpenCL and OpenGL on Apple platforms as well. At the moment only Android/EGL, Windows/WGL, and Linux/GLX are supported (Linux/EGL support → #22703).
### Additional context
_No response_ | feature | low | Minor |
2,574,851,531 | vscode | Resolver error: Error: Failed to download VS Code Server (Failed to fetch) | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version: 1.93.0
- OS Version: windows

```
090e3.tar.gz.done and vscode-server.tar.gz to exist
[13:50:20.174] Got request to download on client for {"artifact":"cli-alpine-x64","destPath":"/root/.vscode-server/vscode-cli-XXX.tar.gz"}
[13:50:20.175] server download URL: https://update.code.visualstudio.com/commit:XXX/cli-alpine-x64/stable
[13:50:20.175] Downloading VS Code server locally...
[13:50:20.189] >
>
[13:50:20.207] Resolver error: Error: Failed to download VS Code Server (Failed to fetch)
```
Steps to Reproduce:
1. download remote explorer on vscode
2. open vscode on windows
3. using remote explorer to ssh linux
4. It show the error
I don't know what is happening.
If it is a network issue, how can I download and install the VS Code Server on the remote host myself, instead of relying on VS Code's automatic download?
| bug,vscode-website,confirmation-pending,network | low | Critical |
2,574,908,726 | PowerToys | [Mouse Without Borders] Mouse pointer draw by PowerToys not follow pointer size in system setting | ### Microsoft PowerToys version
0.85.1
### Installation method
GitHub, PowerToys auto-update
### Running as admin
None
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
1. Change the system mouse pointer size to something larger than the default (`1`)
2. Restart system without a physical mouse connected
### ✔️ Expected Behavior
The mouse pointer (cursor) drawn by PowerToys has the size configured in system settings
### ❌ Actual Behavior
The mouse pointer drawn by PowerToys has the default size (`1`), regardless of the system setting.
### Other Software
Windows 11 23H2 + PowerToys 0.85.1 on both connected machines | Issue-Bug,Needs-Triage | low | Minor |
2,574,914,529 | pytorch | vmap specializing on dynamic shapes with index_put_ | ### 🐛 Describe the bug
```python
import torch
from torch import Tensor

torch.set_default_device('cuda')

# @torch.compile(dynamic=True)
def _ordered_to_dense(num_blocks_in_row: Tensor, col_indices: Tensor):
    num_rows = col_indices.shape[-2]
    num_cols = col_indices.shape[-1]
    batch_dims = num_blocks_in_row.shape[:-1]
    device = num_blocks_in_row.device

    def create_dense_one(kv_num_blocks, kv_indices):
        dense_mask = kv_indices.new_zeros(num_rows, num_cols + 1, dtype=torch.int32)
        row_indices = torch.arange(num_rows, dtype=torch.int, device=device).unsqueeze(-1)
        col_range = torch.arange(num_cols, dtype=torch.int, device=device)
        index_mask = col_range < kv_num_blocks.unsqueeze(-1)
        # We write to one spot "out of bounds"
        valid_indices = torch.where(index_mask, kv_indices, num_cols)
        # set the values in 'a' to 1 where the indices are valid
        # torch.ops.aten.index_put_(dense_mask, [row_indices, col_range], dense_mask.new_ones(()), accumulate=False)
        dense_mask[row_indices, valid_indices] = 1
        return dense_mask[:, :num_cols].contiguous()

    create_dense_batched = create_dense_one
    for _ in range(len(batch_dims)):
        create_dense_batched = torch.vmap(create_dense_batched, in_dims=(0, 0))

    out = create_dense_batched(num_blocks_in_row, col_indices)
    return out

_ordered_to_dense = torch.compile(torch.vmap(torch.vmap(_ordered_to_dense)), dynamic=True)

num_blocks = torch.ones(4, 5, 8)
indices = torch.arange(9).expand(4, 5, 8, 9)
out = _ordered_to_dense(num_blocks, indices)
```
Running with `TORCH_LOGS="aot_graphs"` we see that the number 5 shows up (indicating it's specialized)
### Versions
N/A
cc @ezyang @chauhang @penguinwu @zou3519 @samdow @kshitij12345 @bobrenjc93 | triaged,oncall: pt2,module: functorch,module: dynamic shapes | low | Critical |
2,574,922,105 | pytorch | Segmentation fault would be triggered when using torch.jit.trace and torch.jit._fork | ### 🐛 Describe the bug
I encountered a `Segmentation fault` in PyTorch when attempting to use `torch.jit._fork` and `torch.jit.trace`. Below is a simplified version of the code that reproduces the issue:
```python
import torch
def foo(x1, x2):
    return 2 * x1 + x2

x1 = torch.tensor([[0.0290, 0.4019, 0.2598, 0.3666],
                   [0.0583, 0.7006, 0.0518, 0.4681],
                   [0.6738, 0.3315, 0.7837, 0.5631]])
x2 = torch.tensor([[0.0290, 0.4019, 0.2598, 0.3666],
                   [0.0583, 0.7006, 0.0518, 0.4681],
                   [0.6738, 0.3315, 0.7837, 0.5631]])

func = lambda x1, x2: torch.jit._fork(foo, x1, x2)
torch.jit.trace(func, (torch.rand(4, 3), x2))
```
The error messages are as follows:
```
Segmentation fault (core dumped)
```
I confirm that the error is reproducible with the nightly build version `2.6.0.dev20241008`.Please find the [gist](https://colab.research.google.com/drive/10QR_nv7pCd1Ptw3IXwPmuWvSoNPhFkfA?usp=sharing) to reproduce the issue.
### Versions
PyTorch version: 2.5.0.dev20240815+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: 11.7.99
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 525.116.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz
Stepping: 6
Frequency boost: enabled
CPU MHz: 900.000
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1 MiB
L2 cache: 40 MiB
L3 cache: 48 MiB
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.5.0.dev20240815+cpu
[pip3] torchaudio==2.4.0.dev20240815+cpu
[pip3] torchvision==0.20.0.dev20240815+cpu
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.5.0.dev20240815+cpu pypi_0 pypi
[conda] torchaudio 2.4.0.dev20240815+cpu pypi_0 pypi
[conda] torchvision 0.20.0.dev20240815+cpu pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit | low | Critical |
2,574,960,198 | excalidraw | Adding a Line in Mobile/Tablet View Generates Two Control Points even with a single tap. | When adding a line in mobile view, two control points are generated even with a single tap.
https://github.com/user-attachments/assets/6e714d0a-e7cf-4ff8-b12c-ffccf12b668b
| bug | low | Major |
2,574,973,662 | pytorch | Implement Incremental PCA with GPU support | ### 🚀 The feature, motivation and pitch
Feature: Implement Incremental PCA with native GPU support in PyTorch.
Motivation: I'm working on a large-scale machine learning project that requires dimensionality reduction on datasets too large to fit in memory. Currently, PyTorch lacks a native implementation of Incremental PCA, especially one that leverages GPU acceleration. I think this feature would benefit many users as I also found this existing issue about it: #40770
Pitch:
1. Memory efficiency: Allows processing of datasets larger than available RAM, addressing a common limitation in data analysis.
2. GPU acceleration: Native GPU support would significantly speed up Incremental PCA computations (compared to existing CPU implementations such as `sklearn.decomposition.IncrementalPCA`), enabling faster dimensionality reduction and feature extraction on large-scale datasets.
3. PyTorch integration: Seamless integration with PyTorch's ecosystem, allowing easy use with existing PyTorch models and workflows.
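For reference, the core incremental update (the Ross et al. scheme that scikit-learn's `IncrementalPCA.partial_fit` also uses) is compact. Below is a minimal NumPy sketch — the class and its names are illustrative, and a GPU version would essentially swap `np.linalg.svd` for `torch.linalg.svd` on device tensors:

```python
import numpy as np

class IncrementalPCASketch:
    """Illustrative incremental PCA: SVD of [scaled components; centered batch; mean correction]."""

    def __init__(self, n_components):
        self.n_components = n_components
        self.components_ = None       # (k, d)
        self.singular_values_ = None  # (k,)
        self.mean_ = None             # (d,)
        self.n_seen_ = 0

    def partial_fit(self, X):
        X = np.asarray(X, dtype=np.float64)
        n = X.shape[0]
        batch_mean = X.mean(axis=0)
        if self.mean_ is None:
            stack = X - batch_mean
            self.mean_, self.n_seen_ = batch_mean, n
        else:
            total = self.n_seen_ + n
            # Extra row accounting for the shift between old and new means (Ross et al. 2008)
            mean_corr = np.sqrt(self.n_seen_ * n / total) * (self.mean_ - batch_mean)
            stack = np.vstack([
                self.singular_values_[:, None] * self.components_,
                X - batch_mean,
                mean_corr[None, :],
            ])
            self.mean_ = (self.n_seen_ * self.mean_ + n * batch_mean) / total
            self.n_seen_ = total
        _, S, Vt = np.linalg.svd(stack, full_matrices=False)
        self.components_ = Vt[: self.n_components]
        self.singular_values_ = S[: self.n_components]
        return self

    def transform(self, X):
        return (np.asarray(X) - self.mean_) @ self.components_.T
```

Processing one mini-batch at a time keeps peak memory at O(batch × features) instead of O(samples × features), which is the whole point of the incremental variant.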
### Alternatives
Using scikit-learn's Incremental PCA by moving data to CPU. Very slow.
### Additional context
I submitted an RFC here: [https://github.com/pytorch/rfcs/pull/70](https://github.com/pytorch/rfcs/pull/70)
cc @ptrblck @msaroufim | feature,module: cuda,triaged | low | Major |
2,575,021,732 | godot | Controller names no longer contain brand / type | ### Tested versions
Tested in 4.3.stable
### System information
Godot v4.3.stable - macOS 15.0.1 - Vulkan (Mobile) - integrated Apple M3 Max - Apple M3 Max (14 Threads)
### Issue description
I don't know when this started happening – perhaps with a new macOS version – but somehow `print(Input.get_joy_name(device_id))` only returns names like `Controller` and `Wireless Controller` making it impossible for me to determine the type of controller and subsequently show the right icons in my game.
- Xbox One controller returns: `Controller`
- PS4 controller returns: `Wireless Controller`
- Haven't tested PS5 and Nintendo Switch yet.
**Showing the right icons is an important feature for compliance with stores and platforms.** There should be a solid way to at the very least differentiate between controllers from Microsoft, Sony, Nintendo, and generic controllers.
If there is a workaround, please let me know.
### Steps to reproduce
On macOS, connect any controller and print its name using `Input.get_joy_name(device_id)`.
### Minimal reproduction project (MRP)
n/a | bug,platform:macos,needs testing,topic:input | low | Minor |
2,575,024,651 | ui | [feat]: Snackbar | ### Feature description
Snackbars are designed to have interactions (like an "Undo" button).
Toasts are usually kept simple and meant to be passive (without interaction).
### Affected component/components
Toast
### Additional Context
According to design best practices, Toast should not include interactions, while Snackbars are intended to provide actionable buttons (e.g., "Undo", "Retry"). Please correct me if I am wrong.
PFA : Attachment of a Toast with an action.
<img width="321" alt="toast-with-action" src="https://github.com/user-attachments/assets/a7a34cc4-7daa-419e-acdf-07016cb77643">
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,575,047,503 | opencv | cv::VideoWriter fails writing colorless images | ### System Information
**Ubuntu 22.04**
OpenCV version: 4.5.4
Compiler & compiler version: Clang 15.0.7
**Ubuntu 24.04 LTS**
OpenCV version: 4.6.0
Compiler & compiler version: GCC 13.2
### Detailed description
If an instance of `cv::VideoWriter` is used to write matrices to a file using the colorless format, the resulting video is empty and cannot be opened with any media player. This is not the case if the instance is set to use colored matrices.\
This problem has been tested and confirmed with the following input material:
1. Using artificially created matrices as shown in the example code below.
2. Loading matrices as images from any video using `cv::VideoCapture`
3. Converted ROS2 `sensor_msgs::msg::Image` messages.
### Steps to reproduce
CPP File:
```cpp
#include <opencv2/videoio.hpp>
#include <iostream>

int main()
{
    cv::Mat mat(480, 640, CV_8UC3, cv::Scalar::all(0));
    mat.setTo(cv::Scalar(255, 0, 0));

    // Setting the last parameter to true will write a blue mat
    auto videoWriter = cv::VideoWriter("../result.mp4", cv::VideoWriter::fourcc('m', 'p', '4', 'v'), 30, mat.size(), false);

    auto frameCount = 1;
    // Write a 3-second-video
    while (frameCount <= 90) {
        std::cout << "Encoding frame " << frameCount << " of 90..." << std::endl;
        videoWriter.write(mat);
        frameCount++;
    }
    return 0;
}
```
CMakeLists.txt:
```cmake
cmake_minimum_required(VERSION 3.8)

set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

project(CVTest LANGUAGES CXX)

find_package(OpenCV REQUIRED)

add_executable(CVTest
    ${CMAKE_CURRENT_LIST_DIR}/main.cpp
)

target_link_libraries(CVTest
    PRIVATE ${OpenCV_LIBS}
)
```
To build:
1. Copy the code into a `main.cpp` and `CMakeLists.txt`, respectively.
2. `mkdir build && cd build`
3. `cmake ..`
4. `make`
5. Start the application with `./CVTest` and check the resulting video.
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [x] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: videoio | low | Minor |
2,575,080,393 | PowerToys | Frequent AMD GPU Driver Crashes Affecting PowerToys Functionality | ### Microsoft PowerToys version
0.85.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
PowerToys Run, FancyZones, FancyZones Editor
### Steps to reproduce
Dear Developers,
I've recently switched to an AMD GPU and have been experiencing frequent driver crashes. When the crash occurs, the screen goes black for 1-2 seconds, after which almost all software recovers except for certain PowerToys functionalities, specifically PowerToys Run and FancyZones. I haven't paid much attention to other components.
PowerToys Display Issue: Post-crash, FancyZones only shows a shaded area without any content, and PowerToys Run displays an empty candidate list.
Affected Versions: This issue affects both the latest version (0.85.1) and all previous versions that I have used.
Hypothesis: AMD GPU driver crashes typically generate a Windows event with ID 4101. One approach could be to monitor this event and re-render PowerToys when it is triggered.
Reproduction Steps: Due to the random nature of the driver crashes, I am unable to provide precise steps to reproduce the issue.
I hope this information helps in diagnosing and resolving the bug. Thank you for your assistance!
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response | low | Critical |
2,575,105,426 | tensorflow | tf.nn.conv2d terminates process with invalid input shape instead of raising an exception | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
2.17.0
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 22.04.3 LTS
### Mobile device
_No response_
### Python version
3.11.9
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
TensorFlow terminates the process when an invalid input shape is passed to `tf.nn.conv2d`, instead of raising a Python exception that can be caught with a try-except block.
I expected TensorFlow to raise a catchable Python exception indicating that the input tensor shape is invalid. This would allow the error to be handled in a try-except block, instead of terminating the process. The error message should clearly explain the shape mismatch issue.
### Standalone code to reproduce the issue
```python
import tensorflow as tf

# Define invalid input tensor and kernel
input_tensor = [[1.0, 2.0, 3.0]]
kernel = [[0.5, 0.5], [0.5, 0.5]]

try:
    # Create TensorFlow constants
    input_tf = tf.constant(input_tensor, dtype=tf.float32)
    kernel_tf = tf.constant(kernel, dtype=tf.float32)

    # Attempt to perform convolution, expecting an error
    output_tf = tf.nn.conv2d(
        tf.expand_dims(input_tf, axis=0),
        tf.expand_dims(kernel_tf, axis=0),
        strides=[1, 1, 1, 1],
        padding='VALID'
    )
    print("TensorFlow Output:", output_tf.numpy())
except Exception as e:
    print("TensorFlow Error:", e)
```
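For comparison, a shape-valid call needs a 4-D NHWC input and a 4-D HWIO filter. A naive NumPy sketch (illustrative only, not TensorFlow's implementation) of the stride-1, VALID-padding computation makes the expected shapes concrete:

```python
import numpy as np

def conv2d_valid(inp, filt):
    """Naive NHWC x HWIO conv2d with stride 1 and VALID padding."""
    n, h, w, cin = inp.shape
    fh, fw, fcin, cout = filt.shape
    assert cin == fcin, "input channels must match the filter's in-channels"
    oh, ow = h - fh + 1, w - fw + 1
    out = np.zeros((n, oh, ow, cout))
    for i in range(oh):
        for j in range(ow):
            patch = inp[:, i:i + fh, j:j + fw, :]  # (n, fh, fw, cin)
            # Contract over the spatial and channel axes against the filter
            out[:, i, j, :] = np.tensordot(patch, filt, axes=([1, 2, 3], [0, 1, 2]))
    return out

x = np.ones((1, 4, 4, 1))            # NHWC input
kernel = np.full((2, 2, 1, 1), 0.5)  # HWIO filter
y = conv2d_valid(x, kernel)
print(y.shape)  # (1, 3, 3, 1); each element is 4 * 0.5 = 2.0
```

The repro above violates these rank/shape requirements, which is what should surface as a catchable `InvalidArgumentError` rather than a `Check failed` abort.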
### Relevant log output
```shell
2024-10-09 14:53:31.118589: F ./tensorflow/core/util/tensor_format.h:427] Check failed: index >= 0 && index < num_total_dims Invalid index from the dimension: 3, 0, C Aborted (core dumped)
```
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | low | Critical |
2,575,227,729 | tauri | [bug] maximized window config causes black flash on launch | ### Describe the bug
I have encountered an issue where the `maximized` window setting causes a black flash on application launch.
It only happens sometimes, and the black flash is always on the right-hand part of the screen:
Here is a screen recording:
https://github.com/user-attachments/assets/1cf93837-8791-482c-a719-f59068f91bd1
### Reproduction
Here is a repo with the default tauri-vue app which has just one setting changed:
in `tauri.conf.json` I added `"maximized": true`
https://github.com/mr-zwets/tauri-test
If reproduction is not possible on the first try, re-open the app multiple times again, as the bug only happens part of the time.
### Expected behavior
I expect the maximized window not to cause a black flash
### Full `tauri info` output
```text
[✔] Environment
- OS: Windows 10.0.22631 x86_64 (X64)
✔ WebView2: 129.0.2792.79
✔ MSVC: Visual Studio Community 2022
✔ rustc: 1.81.0 (eeb90cda1 2024-09-04)
✔ cargo: 1.81.0 (2dbb1af80 2024-08-20)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default)
- node: 20.18.0
- yarn: 1.22.22
- npm: 10.8.2
[-] Packages
- tauri 🦀: 2.0.2
- tauri-build 🦀: 2.0.1
- wry 🦀: 0.44.1
- tao 🦀: 0.30.3
- @tauri-apps/api : 2.0.2
- @tauri-apps/cli : 2.0.2
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.1
- @tauri-apps/plugin-shell : 2.0.0
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist/spa
- devUrl: http://localhost:9000/
- framework: Vue.js (Quasar)
- bundler: Vite
```
### Stack trace
```text
/
```
### Additional context
_No response_ | type: bug,platform: Windows | low | Critical |
2,575,242,025 | pytorch | [feature request] Provide FlexAttention as a new available/selectable backend for SDPA | ### 🚀 The feature, motivation and pitch
Originally discussed here with @drisspg :
- https://github.com/pytorch/pytorch/pull/137526#issuecomment-2401115408
This would be good for exercising FlexAttention in existing SDPA workloads for perf testing.
Also wondering if FlexAttention could then be extended to work natively/fused with some quantized dtypes from torchao @msaroufim, to do fused quant/dequant or even quant-native computation (e.g. in int8); supporting quantized dtypes would be a useful extension for the existing SDPA API.
Ideally, FlexAttention can also support `attn_bias=` argument of SDPA and hopefully outperform the existing math/mem_efficient backends (flash does not support attn_bias for now AFAIK)
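To make the `attn_bias` point concrete: FlexAttention's `score_mod` callback is strictly more general than an additive bias, since a bias tensor can always be expressed as a per-element score modification. A plain NumPy reference (illustrative only — this is not the actual `flex_attention` API) shows the equivalence:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sdpa_ref(q, k, v, attn_bias=None):
    """SDPA-style attention: the bias is added to the scaled QK^T scores."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    if attn_bias is not None:
        scores = scores + attn_bias
    return softmax(scores) @ v

def flex_ref(q, k, v, score_mod):
    """FlexAttention-style attention: a callback rewrites each score individually."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    for qi in range(scores.shape[0]):
        for ki in range(scores.shape[1]):
            scores[qi, ki] = score_mod(scores[qi, ki], qi, ki)
    return softmax(scores) @ v

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(4, 8)) for _ in range(3))
bias = rng.normal(size=(4, 4))

out_sdpa = sdpa_ref(q, k, v, attn_bias=bias)
out_flex = flex_ref(q, k, v, lambda s, qi, ki: s + bias[qi, ki])
assert np.allclose(out_sdpa, out_flex)
```

So an SDPA-backend shim could lower `attn_bias=` into a generated `score_mod` without changing user-facing semantics.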
### Alternatives
_No response_
### Additional context
_No response_
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki | triaged,enhancement,module: sdpa | low | Minor |
2,575,249,722 | flutter | When multiple blur effects are present on the screen, Impeller performs worse than Skia. | Actually, this issue is the same as https://github.com/flutter/flutter/issues/149989. Since this issue was closed, I switched to the latest Flutter master channel to verify, but it seems that the performance of Impeller has not improved.
It is possible that I did not provide sufficient information when I initially created the issue. If there is any additional information you need, please feel free to let me know.
### Steps to reproduce
1.Execute the test code in profile mode with both Impeller and Skia.
2.This issue occurs when swiping up and down on the screen to scroll through the list. The frame rate of the video decreases.
Here is the URL of the GitHub repository containing the test code.
https://github.com/HarukaOki/impeller_test
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'dart:math';
import 'dart:ui';

import 'package:flutter/material.dart';

final _random = Random();

void main() => runApp(const BackdropFilterDemo());

class BackdropFilterDemo extends StatelessWidget {
  const BackdropFilterDemo({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        backgroundColor: Colors.white,
        body: Stack(
          children: [
            ListView.builder(
              itemCount: 120, // 60 pairs of red and blue containers
              itemBuilder: (context, index) {
                return Container(
                  height: 100,
                  color: index % 2 == 0 ? Colors.red : Colors.blue,
                );
              },
            ),
            Center(
              child: Container(
                width: 400,
                height: 400,
                decoration: BoxDecoration(
                  border: Border.all(color: Colors.black),
                ),
                child: Image.network('https://picsum.photos/400'),
              ),
            ),
            ListView.separated(
              separatorBuilder: (_, __) => const SizedBox(height: 8),
              itemBuilder: (context, index) => BlurEffect(
                child: SizedBox(
                  height: 50,
                  child: Center(
                    child: Text(index.toString(),
                        style: const TextStyle(color: Colors.white)),
                  ),
                ),
              ),
              itemCount: 200,
            ),
            Positioned.fill(
              bottom: null,
              child: BlurEffect(
                child: Padding(
                  padding: EdgeInsets.only(
                    top: MediaQuery.of(context).viewPadding.top,
                  ),
                  child: const SizedBox(height: 45),
                ),
              ),
            ),
            Positioned.fill(
              top: null,
              child: BlurEffect(
                child: Padding(
                  padding: EdgeInsets.only(
                    top: MediaQuery.of(context).viewPadding.bottom,
                  ),
                  child: const SizedBox(height: 50),
                ),
              ),
            ),
          ],
        ),
      ),
    );
  }
}

class BlurEffect extends StatelessWidget {
  final Widget child;

  const BlurEffect({
    required this.child,
    super.key,
  });

  @override
  Widget build(BuildContext context) {
    return ClipRect(
      child: BackdropFilter(
        filter: ImageFilter.blur(
          sigmaX: 40,
          sigmaY: 40,
          // tileMode: TileMode.mirror,
        ),
        child: DecoratedBox(
          decoration: BoxDecoration(color: Colors.black.withOpacity(.65)),
          child: child,
        ),
      ),
    );
  }
}
```
</details>
### Performance profiling on master channel
- [X] The issue still persists on the master channel
### Timeline Traces
<details open><summary>Timeline Traces JSON</summary>
[dart_devtools_2024-10-09_17_44_11.423.json.zip](https://github.com/user-attachments/files/17305258/dart_devtools_2024-10-09_17_44_11.423.json.zip)
</details>
### Video demonstration
<details open>
<summary>Video demonstration</summary>
- Impeller
https://github.com/user-attachments/assets/c20c344f-d954-4549-9541-1ecbd3f6321d
(devtool)
<img width="1512" alt="impellerTest" src="https://github.com/user-attachments/assets/8f1da77b-5a57-4127-aece-37fd85f04b23">
- Skia
https://github.com/user-attachments/assets/36831d37-6b2c-4ed0-8a9b-e127f1d52d4e
(devtool)
<img width="1512" alt="skiaTest" src="https://github.com/user-attachments/assets/c1977377-edc0-4f93-9fbf-29a3d5a74d98">
</details>
### What target platforms are you seeing this bug on?
iOS
### OS/Browser name and version | Device information
Target OS version/browser: 16.1.2
Devices: iPhone X
### Does the problem occur on emulator/simulator as well as on physical devices?
Unknown
### Is the problem only reproducible with Impeller?
Yes
### Logs
<details open><summary>Logs</summary>
[log.txt.zip](https://github.com/user-attachments/files/17306358/log.txt.zip)
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel master, 3.26.0-1.0.pre.414, on macOS 14.3 23D56 darwin-arm64, locale ja-JP)
• Flutter version 3.26.0-1.0.pre.414 on channel master at /Users/harukaoki/development/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision c077b8c61a (9 hours ago), 2024-10-08 19:51:19 -0400
• Engine revision 0e7344ae24
• Dart version 3.6.0 (build 3.6.0-332.0.dev)
• DevTools version 2.40.0
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.0)
• Android SDK at /Users/harukaoki/Library/Android/sdk
• Platform android-35, build-tools 33.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11609105)
[✓] VS Code (version 1.94.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.98.0
[✓] Connected device (5 available)
• iPhoneX (mobile) • 4ea2bc0bd4b563471b21892e371a3b307a3bb363 • ios • iOS 16.1.2 20B110
• iPhone (8) (mobile) • 00008020-001124CE3CE9002E • ios • iOS 17.6.1 21G93
• macOS (desktop) • macos • darwin-arm64 • macOS 14.3 23D56 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.3 23D56 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 129.0.6668.91
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| engine,c: performance,has reproducible steps,P2,e: impeller,team-engine,triaged-engine,found in release: 3.26 | low | Critical |
2,575,330,088 | vscode | Security Issue: Please allow the very simple delay of the installation of updates by n days |
Type: <b>Bug because of a Security Issue</b>
**VS code and plugin updates** are in a security-critical context:
1. on the one hand, you always want / need to install updates as fast as possible in order to close security issues.
2. on the other hand, there are situations in every project where you must not install updates of the tools used at any price. Because: it could have fatal consequences if a function you use suddenly stops working or works differently, or if engineers suddenly have to invest hours to get the IDE running again with the pipeline.
VS Code currently only offers to install updates “later”. EVERY TIME you restart the IDE, you are asked again to install the updates, because apparently “later” has already been reached.
If you have answered the question about the installation 10 times with “later”, this has the consequence that developers deactivate the automatic updates.
This makes the update logic of VS Code a security risk.
Please allow us to define after how many days we should be asked again about an update.
This solves both problems mentioned at the beginning and provides a professional solution.
Thanks a lot, kind regards,
Thomas
VS Code version: Code 1.93.1 (38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40, 2024-09-11T17:20:05.685Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 9 5900X 12-Core Processor (24 x 3693)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|127.91GB (35.72GB free)|
|Process Argv|--crash-reporter-id 856f7e37-1fcf-4a79-b795-cf947af05bd0|
|Screen Reader|no|
|VM|0%|
</details>
<!-- generated by issue reporter --> | install-update,under-discussion | low | Critical |
2,575,361,231 | flutter | [image] Proposal to allow `errorBuilder` to be of type `Widget?` instead of `Widget` | ### Use case
Let's consider the following case:
```dart
OverflowBar(
spacing: 20,
overflowSpacing: 20,
overflowAlignment: OverflowBarAlignment.center,
children: [
Image.network(
image,
width: 50,
height: 50,
errorBuilder: (_, __, ___) {
return const SizedBox.shrink()
},
),
...widgets,
],
)
```
When the parsing of an image fails, the error builder will return a shrink box, and the spacing of the OverflowBar will still add an empty space at the front.
### Proposal
Allow `errorBuilder` to return `Widget?` instead of `Widget` so this problem can be solved.
There is currently no other way to implement this if you want to use an OverflowBar. | c: new feature,framework,a: images,c: proposal,P3,team-framework,triaged-framework | low | Critical |
2,575,376,595 | pytorch | DISABLED test_constant_abi_compatible_cuda (__main__.AOTInductorTestABICompatibleCuda) | Platforms: rocm, inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_constant_abi_compatible_cuda&suite=AOTInductorTestABICompatibleCuda&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/31274716255).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_constant_abi_compatible_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `inductor/test_aot_inductor.py`
cc @clee2000 @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor | low | Critical |
2,575,396,328 | PowerToys | PowerRename displays erroneous results of enumerated renamed files | ### Microsoft PowerToys version
0.85.1
### Installation method
Other (please specify in "Steps to Reproduce")
### Running as admin
No
### Area(s) with issue?
PowerRename
### Steps to reproduce
Consider renaming the following files with the following settings (noting especially the enumeration feature) before clicking on `Apply`:

After clicking on `Apply`, the following results are displayed:

Observe the erroneous results displayed in the `Renamed (3)` column. It states that only 3 files were renamed when in reality 4 were renamed, and the renamed result for the first file/row is missing (which is seemingly the cause of the aforementioned mis-numbering).
Note importantly though that this issue only impacts the display results in PowerRename; in Windows Explorer, for example, it can be observed that the files were renamed correctly:

This issue seems to occur for any type and number of files.
Installation method: (can't remember)
### ✔️ Expected Behavior
I expected the 'Renamed' column to display the correctly renamed files (number renamed & new filenames).
### ❌ Actual Behavior
As mentioned above, PowerRename displays erroneous results in the `Renamed` column, although the actual files are renamed correctly.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,575,400,163 | create-react-app | school | ### Is your proposal related to a problem?
<!--
Provide a clear and concise description of what the problem is.
For example, "I'm always frustrated when..."
-->
(Write your answer here.)
### Describe the solution you'd like
<!--
Provide a clear and concise description of what you want to happen.
-->
(Describe your proposed solution here.)
### Describe alternatives you've considered
<!--
Let us know about other solutions you've tried or researched.
-->
(Write your answer here.)
### Additional context
<!--
Is there anything else you can add about the proposal?
You might want to link to related issues here, if you haven't already.
-->
(Write your answer here.)
| issue: proposal,needs triage | low | Minor |
2,575,406,343 | vscode | The table format isn’t captured in the accessible view: A11y_Visual Studio Code Client_Usability | ### GitHub Tags:
#A11yTCS; #A11ySev4; #Visual Studio Code Client; #BM_Visual Studio Code Client_Win32_JULY2024; #DesktopApp; #FTP; #A11yUsablehigh; #Win32; #Benchmark;
### Environment and OS details:
Application Name: Visual Studio Code
OS: Windows 11 version 23H2 OS built: 22631.4169.
Version: 1.95.0-insider (user setup)
OS Build: 26100.1742
Version: 24H2
### Reproduction Steps:
1. Open Visual studio code insider editor.
2. Tab to Copilot chat and ask Copilot to generate an ARG query for Indian-region accounts.
3. Press Alt + F2 to view the accessible view.
4. Observe that the table format isn't captured in the accessible view.
### Actual Result:
For some screen reader users, GitHub Copilot's accessible view is an integral tool, and table format is not communicated in that experience.
### Expected Result:
This requires technical exploration to introduce further formatting options in the accessible view. Also consider allowing users to open content in a web view when they've requested HTML components.
### User Impact:
Users were unaware that a table could be generated and instead ended up asking for HTML code to put into a web view experience.
Attachment:
https://microsoft.sharepoint.com/:p:/t/FableEngagement/EcCC6L78IhNDmx5ABbNLJnMBCa0Yomng6xSCIYtpaqOF2Q?e=gM4Vpy&nav=eyJzSWQiOjEzOTcsImNJZCI6Mjg0ODc5MTQxOX0

| bug,panel-chat | low | Minor |
2,575,413,188 | PowerToys | Mouse Without Borders: file drag-and-drop and shared clipboard stop working after enabling service mode | ### Microsoft PowerToys version
0.85.1
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
After enabling Mouse Without Borders' service mode, or running PowerToys as administrator, the file drag-and-drop and shared clipboard features stop working, and Ctrl+Alt+Del Esc has no effect either.
After repeated testing, enabling service mode or running PowerToys as administrator on either of the two connected devices causes the shared clipboard to become unusable.

### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,575,431,932 | PowerToys | The new PowerToys Run looks somewhat blurry at 1080p; I don't know how to solve this, please advise, thanks | ### Description of the new feature / enhancement
I'd like this to look sharper.
### Scenario when this would be used?
When searching for software.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,575,485,030 | flutter | New Web Embedded mode [Multiview] doesn't support pinch to zoom or pull to refresh on mobile browsers. | ### Steps to reproduce
Using the [old](https://docs.flutter.dev/platform-integration/web/embedding-flutter-web#custom-element-hostelement) web embedding technique ("hostElement"), which worked fine only up until Flutter 3.22.2, the webpages produced worked like a charm on mobile phone browsers. I could pinch to zoom and pull to refresh on mobile phone browsers. It provided the native web browser experience.
Now, with the [new](https://docs.flutter.dev/platform-integration/web/embedding-flutter-web#enable-multi-view-mode) recommended embedding technique ("Multiview Mode"), as long as the Flutter app takes the whole web screen, you can neither pinch to zoom nor swipe to refresh. The web browser experience is fake and terrible.
### Expected results
Flutter web apps should have proper UX on mobile web browsers, allowing pinching to zoom and swipe to refresh.
### Actual results
The UX on mobile web browsers does not allow pinching to zoom or swiping to refresh if the Flutter app takes up the whole screen space.
### Code sample
Try running Flutter's official **OLD** **Web embedding** code using Flutter 3.22.2 and open the website from a mobile browser; everything will work perfectly with the native web browser experience.
https://github.com/flutter/samples/tree/main/web_embedding/element_embedding_demo
Now, try running it using newer versions of Flutter, it will no longer work properly on a mobile web browser.
Try running Flutter's official **NEW** **Web Embedding** sample on a mobile browser, it will not provide the previously available experience like pinching to zoom or swiping to refresh if the Flutter app takes the whole web page space.
https://github.com/goderbauer/mvp/tree/main/lib
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
[√] Flutter (Channel stable, 3.24.1, on Microsoft Windows [Version 10.0.22631.3737], locale en-US)
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
[√] Chrome - develop for the web
[√] Visual Studio - develop Windows apps (Visual Studio Build Tools 2022 17.7.6)
[√] Android Studio (version 2024.1)
[√] IntelliJ IDEA Community Edition (version 2024.2)
[√] VS Code (version 1.93.0)
[√] Connected device (3 available)
[√] Network resources
| platform-web,P2,browser: safari-ios,browser: chrome-android,team-web,triaged-web,browser: chrome-ios | low | Major |
2,575,526,346 | langchain | aindex + Chroma: aindex function fails due to missing delete method implementation | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import asyncio

from langchain.indexes import SQLRecordManager, aindex
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers.language.language_parser import LanguageParser
from langchain_ollama import OllamaEmbeddings
from langchain_chroma import Chroma
loader = GenericLoader.from_filesystem(
dir_path,
suffixes=[".py"],
parser=LanguageParser(None, 10),
show_progress=True,
)
vector_store = Chroma(
collection_name=collection_name,
embedding_function=OllamaEmbeddings(model="nomic-embed-text"),
persist_directory="./chroma_cache",
)
namespace = f"chromadb/{collection_name}"
record_manager = SQLRecordManager(
namespace,
db_url="sqlite:///record_manager_cache.sqlite",
)
async def main():
await record_manager.create_schema()
results = await aindex(
docs_source=loader,
record_manager=record_manager,
vector_store=vector_store,
cleanup="incremental",
source_id_key="source",
)
print("done indexing")
print(results)
print("-----------------")
asyncio.run(main())
```
### Error Message and Stack Trace (if applicable)
File "/home/artem/.local/lib/python3.12/site-packages/langchain_core/indexing/api.py", line 521, in aindex
raise ValueError("Vectorstore has not implemented the delete method")
ValueError: Vectorstore has not implemented the delete method
From langchain_core/indexing/api.py
```python
# If it's a vectorstore, let's check if it has the required methods.
if isinstance(destination, VectorStore):
# Check that the Vectorstore has required methods implemented
methods = ["adelete", "aadd_documents"]
for method in methods:
if not hasattr(destination, method):
raise ValueError(
f"Vectorstore {destination} does not have required method {method}"
)
if type(destination).adelete == VectorStore.adelete:
# Checking if the vectorstore has overridden the default delete method
# implementation which just raises a NotImplementedError
raise ValueError("Vectorstore has not implemented the delete method")
```
### Description
I am trying to follow the documentation examples, but it seems that the `aindex` function is broken. Can someone comment on the correctness of the following check:
```python
if type(destination).adelete == VectorStore.adelete:
# Checking if the vectorstore has overridden the default delete method
# implementation which just raises a NotImplementedError
raise ValueError("Vectorstore has not implemented the delete method")
```
How is that supposed to work with langchain_chroma?
It works just fine with `index` and doesn't want to work with `aindex`.
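For reference, here is a minimal, self-contained sketch (dummy classes only — not the real langchain or Chroma code) of what that identity comparison tests: the guard only accepts vectorstores whose class overrides `adelete` itself, not ones that merely override the synchronous `delete` and inherit the base `adelete`, which would explain why `index` works while `aindex` raises.

```python
class VectorStore:
    """Stand-in for the real base class; only the override check matters here."""

    def delete(self, ids=None):
        raise NotImplementedError

    async def adelete(self, ids=None):
        # stand-in default: the base async path just delegates to delete()
        return self.delete(ids)


class SyncOnlyStore(VectorStore):
    # Overrides the sync delete but *inherits* adelete from the base class.
    def delete(self, ids=None):
        return True


class AsyncStore(VectorStore):
    # Overrides adelete itself, so the guard accepts it.
    async def adelete(self, ids=None):
        return True


# The guard compares class attributes, so mere inheritance fails it:
assert type(SyncOnlyStore()).adelete == VectorStore.adelete  # guard would raise ValueError
assert type(AsyncStore()).adelete != VectorStore.adelete     # guard passes
```

If the installed `langchain_chroma.Chroma` only provides a synchronous `delete` and inherits the async variant, `aindex` would hit exactly this branch.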
### System Info
$ python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #45-Ubuntu SMP PREEMPT_DYNAMIC Fri Aug 30 12:02:04 UTC 2024
> Python Version: 3.12.3 | packaged by conda-forge | (main, Apr 15 2024, 18:38:13) [GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.3.6
> langchain: 0.3.1
> langchain_community: 0.3.1
> langsmith: 0.1.128
> langchain_chroma: 0.1.4
> langchain_ollama: 0.2.0
> langchain_text_splitters: 0.3.0
> langgraph: 0.2.34
> langserve: 0.3.0
Other Dependencies
------------------
> aiohttp: 3.10.5
> async-timeout: 4.0.3
> chromadb: 0.5.11
> dataclasses-json: 0.6.7
> fastapi: 0.115.0
> httpx: 0.27.2
> jsonpatch: 1.33
> langgraph-checkpoint: 2.0.1
> numpy: 1.26.4
> ollama: 0.3.3
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.5.2
> PyYAML: 6.0.2
> requests: 2.32.3
> SQLAlchemy: 2.0.35
> sse-starlette: 1.8.2
> tenacity: 8.5.0
> typing-extensions: 4.12.2 | Ɑ: vector store,🔌: chroma | low | Critical |
2,575,549,236 | tensorflow | Backward compatibility issue: failure to load models saved in TensorFlow format (Keras 2) in TensorFlow 2.17 | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
source
### TensorFlow version
2.9.1 (model saved), 2.17.0 (model loaded)
### Custom code
Yes
### OS platform and distribution
(Official Docker Image) Ubuntu 22.04.4 LTS
### Mobile device
_No response_
### Python version
3.8.10 (model saved), 3.11.0rc1 (model loaded)
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
## Description
I have encountered a backward compatibility issue when loading models saved with Keras 2 in TensorFlow 2.9 into TensorFlow 2.17, which now uses Keras 3 API. This issue impacts various loading methods, and there does not appear to be a straightforward solution to resolve the errors.
## Steps to reproduce
1. **Train and export a model in TensorFlow 2.9 with Keras 2 API**:
- A simple Keras sequential model is created and trained on random data.
- The model is saved using both `tf.saved_model.save` and `tf.keras.models.save_model` with `tf` save format (which is unsupported in Keras 3).
2. **Attempt to load the models in TensorFlow 2.17 with Keras 3 API**:
- The models are loaded using TensorFlow’s `tf.saved_model.load`, `keras.layers.TFSMLayer`, and `tf.keras.models.load_model`.
3. **Observe the errors**:
- When loading using `tf.saved_model.load`, the error `'_UserObject' object has no attribute 'add_slot'` occurs.
- When loading using `keras.layers.TFSMLayer`, the same `'_UserObject' object has no attribute 'add_slot'` error is triggered.
- When loading using `tf.keras.models.load_model`, a different error appears: `File format not supported.` This is because Keras 3 has dropped support for the `tf` save format that was the default in Keras 2!
## Expected behavior
While I understand that issues related to loading legacy Keras models saved with the `tf` save format using Keras are out of scope for TensorFlow and should be addressed by the Keras team, the functionality surrounding TensorFlow's `tf.saved_model`, which uses the `SavedModel` bundle, is part of TensorFlow Core. Since this format is shared across different runtimes, it should remain backward compatible. Therefore, models saved in earlier versions of TensorFlow using the `SavedModel` format should load seamlessly in newer TensorFlow versions, without requiring users to rebuild their models or encountering compatibility errors.
### Standalone code to reproduce the issue
## Minimal example to reproduce the issue
A minimal code example to reproduce the issue is available in this repository: [Reproduce TF Model Compatibility Issue.](https://github.com/arianmaghsoudnia/reproduce-tf-model-compat-issue)
Please follow the steps in the README file.
### Relevant log output
_No response_ | type:bug,comp:keras,2.17 | low | Critical |
2,575,575,103 | ollama | Support for GrabbeAI | GrabbeAI is an AI model trained by 10th grade students of the german grammar school Grabbe-Gymnasium Detmold and helps users - especially students and teachers - with their (home)work! From what I know, it is the first LLM made entirely by students!
Since I personally trained the model with some other students with the (financial) assistance from some teachers, you can find the model both on Hugging Face and Ollama Model Hub.
The Hugging Face URL is: https://huggingface.co/grabbe-gymnasium-detmold/grabbe-ai
The Ollama hub URL is: https://ollama.com/grabbe-gymnasium/grabbe-ai
On Hugging Face, the model has about 9 million downloads, and on the Ollama hub it has more than 300, so I think it could be relevant for a huge group of people!
We trained the model based on real class test tasks and good homework, and it works quite fine. We are actively updating our LLM on Hugging Face and plan to publish a new build by December. This new release will double the trained dataset!
2,575,579,868 | deno | `Temporal.ZonedDateTime#toLocaleString` with options includes unwanted fields | Version: Deno 2.0.0
With `--unstable-temporal` enabled, `Temporal.ZonedDateTime#toLocaleString` with options includes unwanted fields.
```ts
Temporal.ZonedDateTime.from('2024-09-04T23:10:20+08:00[Asia/Shanghai]')
.toLocaleString('en-US', { weekday: 'short' })
```
Expected: `"Wed"`
Actual: `"Wed, 9/4/2024, 11:10:20 PM GMT+8"`
Both of the polyfills `npm:@js-temporal/polyfill` and `npm:temporal-polyfill`, as well as the [reference implementation](https://tc39.es/proposal-temporal/docs/index.html), give the expected result. | bug,upstream,temporal | low | Minor |
2,575,605,165 | PowerToys | Fancy Zones: make it possible to make a FancyZones layout that's got both Grid and Canvas combined | ### Description of the new feature / enhancement
I would like to have the ability to make a Grid Layout that also contains one or more Canvas Zones
### Scenario when this would be used?
My ideal layout is a 2-zone Grid with a Canvas zone that contains one app
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,575,690,349 | ui | [bug]: Issue with Drawer Height in iOS Safari for ShadCN Component | ### Describe the bug
I'm working on implementing a form page within a drawer using the ShadCN component. The functionality involves clicking a button to open the drawer and automatically focusing on an input field.
This works perfectly on Android devices, but on iOS devices (specifically Safari), I'm encountering an issue with the drawer height when the keyboard opens. The height seems to shrink and doesn’t behave as expected.
### Affected component/components
Drawer
### How to reproduce
1. Implement a form page inside a drawer using ShadCN.
2. Set the drawer to open and focus on an input field when a button is clicked.
3. Test the behavior on Android and iOS devices (Safari).
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
ios devices(mobile)
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,575,711,665 | material-ui | [material-ui][StepButton] Focus indicator lacks contrast in dark mode | ### Steps to reproduce
Link to live example: https://mui.com/material-ui/react-stepper/#non-linear
Steps:
1. Nav to https://mui.com/material-ui/react-stepper/#non-linear
2. Set the theme to dark mode from the settings drawer located at the top right of the page
3. Use the keyboard to focus one of the step buttons in the "Non-linear" demo
### Current behavior
The focus indicator is barely visible to the user. The indicator is dark and blends in with the dark mode background.
### Expected behavior
The user should be able to clearly see what has focus when they tab to a StepButton. Button focus indicators are visible in dark mode; I envision StepButton focus working similarly to that.
### Context
_No response_
### Your environment
_No response_
**Search keywords**: Stepper StepButton dark focus | bug 🐛,component: stepper,package: material-ui,design: material you | low | Major |
2,575,712,680 | pytorch | Expose underlying cudastream_t for torch.cuda.Stream | ### 🚀 The feature, motivation and pitch
I want to add PyTorch support to GStreamer project to support video analytics. GStreamer is written in C and Rust with Python bindings. The only way to pass a Cuda stream between GStreamer Python elements is to serialize the underlying handle to integer in one element and then recreate the stream in the next element.
If multiple elements can share the same cuda stream, we can get a big performance boost and increase PyTorch adoption in GStreamer community.
Being able to pass the underlying stream handle into PyTorch, and also to extract the underlying stream handle from `torch.cuda.Stream` would improve the interop capabilities when integrating with C/C++/Rust.
I am interested in creating the patch if it would be something acceptable to upstream.
Thanks!!
### Alternatives
I haven't found any other way of doing this.
### Additional context
_No response_
cc @svekars @brycebortree @sekyondaMeta @ptrblck @msaroufim @albanD | module: docs,module: cuda,triaged,enhancement,actionable,module: python frontend | medium | Major |
2,575,716,890 | vscode | The read aloud feature for responses had limitations on its implementation: A11y_VS Code: Copilot_Usability | ### GitHub Tags:
#A11yTCS; #A11ySev4; #DesktopApp; #A11yUsable; #DesktopApp; #Win32; #SH-VS Code Copilot-Win32-July23; #Visual Studio Code Client;
### Environment Details:
Application Name: VS Code: Copilot
Visual studio code Insiders Version: 1.95.0(User Setup)
Microsoft Windows 11 Enterprise 23H2 Build 22631.4169
### Repro Steps:
1. Launch "Visual Studio Code" copilot.
2. Verify the issue
### Actual:
The read aloud feature for responses had limitations on its implementation and in how it handles content:
- It did not read out the numbers for numbered lists
- There is no pauses or breaks when reading out content from a table
- Syntax is skipped despite its importance in a coding context
- There is no pause button for the read aloud feature and pressing more than once causes it to read twice at the same time
### Expected result:
The read aloud feature for responses should implement below things.
- It should read out the numbers for numbered lists
- There should be pauses or breaks when reading out content from a table
- Syntax should not be skipped despite its importance in a coding context
- There should be a pause button for the read aloud feature, and pressing it more than once should not cause it to read twice at the same time.
### User Impact:
Users who use read aloud will face difficulty and will not get correct information.
### Recommendation:
Consider supporting this functionality.
Consider options to use punctuation to allow pauses or how tables should verbally be handled.
Consider if this is feedback you wish to address.
Allow users to select the read aloud button to stop the function.
### Attachment:

| feature-request,accessibility,workbench-voice | low | Minor |
2,575,748,577 | material-ui | [core] Hydration error using nextjs turbopack and icons-material | ### Steps to reproduce
Link to live example: https://codesandbox.io/p/devbox/elegant-alex-xrjs5l
Steps:
1. Check out the example from https://github.com/mui/material-ui/tree/master/examples/material-ui-nextjs-pages-router-ts
2. Add an icon from @mui/icons-material to index.ts
3. Start with turbopack with `next dev --turbo`
4. See error in the console

Related to https://github.com/mui/material-ui/issues/34905
### Current behavior
Hydration Error:
```
Warning: Prop `className` did not match. Server: "MuiSvgIcon-root MuiSvgIcon-fontSizeMedium css-1umw9bq-MuiSvgIcon-root" Client: "MuiSvgIcon-root MuiSvgIcon-fontSizeMedium mui-4uxqju-MuiSvgIcon-root"
```
<details>
```
index.js:8 Warning: Prop `className` did not match. Server: "MuiSvgIcon-root MuiSvgIcon-fontSizeMedium css-20bmp1-MuiSvgIcon-root" Client: "MuiSvgIcon-root MuiSvgIcon-fontSizeMedium mui-1slccg-MuiSvgIcon-root"
at svg
at https://xrjs5l-3000.csb.app/_next/static/chunks/node_modules_975935._.js:1020:141
at SvgIcon (https://xrjs5l-3000.csb.app/_next/static/chunks/node_modules_975935._.js:6461:180)
at RefreshIcon
at div
at https://xrjs5l-3000.csb.app/_next/static/chunks/node_modules_975935._.js:1020:141
at Box (https://xrjs5l-3000.csb.app/_next/static/chunks/node_modules_975935._.js:12397:162)
at div
at https://xrjs5l-3000.csb.app/_next/static/chunks/node_modules_975935._.js:1020:141
at Container (https://xrjs5l-3000.csb.app/_next/static/chunks/node_modules_975935._.js:10685:23)
at Home
at DefaultPropsProvider (https://xrjs5l-3000.csb.app/_next/static/chunks/node_modules_975935._.js:11525:11)
at RtlProvider (https://xrjs5l-3000.csb.app/_next/static/chunks/node_modules_975935._.js:10829:11)
at ThemeProvider (https://xrjs5l-3000.csb.app/_next/static/chunks/node_modules_975935._.js:15567:13)
at ThemeProvider (https://xrjs5l-3000.csb.app/_next/static/chunks/node_modules_975935._.js:11631:13)
at CssVarsProvider (https://xrjs5l-3000.csb.app/_next/static/chunks/node_modules_975935._.js:11729:17)
at ThemeProvider (https://xrjs5l-3000.csb.app/_next/static/chunks/node_modules_975935._.js:4262:11)
at AppCacheProvider (https://xrjs5l-3000.csb.app/_next/static/chunks/node_modules_975935._.js:13350:11)
at MyApp (https://xrjs5l-3000.csb.app/_next/static/chunks/pages__app_tsx_035eb2._.js:22:13)
at PathnameContextProviderAdapter (https://xrjs5l-3000.csb.app/_next/static/chunks/node_modules_975935._.js:25194:11)
at ErrorBoundary (https://xrjs5l-3000.csb.app/_next/static/chunks/%5Bnext%5D_overlay_client_ts_eac06c._.js:6040:1)
at ReactDevOverlay (https://xrjs5l-3000.csb.app/_next/static/chunks/%5Bnext%5D_overlay_client_ts_eac06c._.js:6266:11)
at Container (https://xrjs5l-3000.csb.app/_next/static/chunks/node_modules_975935._.js:26292:1)
at AppContainer (https://xrjs5l-3000.csb.app/_next/static/chunks/node_modules_975935._.js:26375:11)
at Root (https://xrjs5l-3000.csb.app/_next/static/chunks/node_modules_975935._.js:26524:11)
```
</details>
### Expected behavior
No hydration error.
### Context
Trying to run NextJS with MUI and turbopack.
### Your environment
See codesandbox
**Search keywords**: nextjs hydration error mui icons turbopack | bug 🐛,package: icons,nextjs | low | Critical |
2,575,753,834 | storybook | [Bug]: type error while importing `svelte 5` component due to recent change in `svelte-check` | ### Describe the bug
Due to recent [change](https://github.com/sveltejs/language-tools/pull/2517) in `svelte-check` v4.0.3, following type error occurs while importing Svelte 5 component in svelte storybook file. This also ends up causing multiple type errors in the story args definition.
```typescript
Error: Type 'Component<Props, {}, "">' is not assignable to type 'ComponentType<Component<Props, {}, "">, any>'.
Type 'Component<Props, {}, "">' provides no match for the signature 'new (options: ComponentConstructorOptions<Component<Props, {}, "">>): { [x: string]: any; $destroy: () => void; $on: <K extends string>(type: K, callback: (e: any) => void) => () => void; $set: (props: Partial<...>) => void; }'.
```
As per [PR](https://github.com/sveltejs/language-tools/pull/2517), In svelte 5, components are functions. Hence with this PR only function type is exported in runes mode.
I tried to supply the Svelte 5 component to `Meta` as `Meta<ReturnType<typeof Button>>`. However, it throws the following error after that change.
```typescript
Error: Type '{ $on?(type: string, callback: (e: any) => void): () => void; $set?(props: Partial<Props>): void; }' is not assignable to type 'ComponentType<{ $on?(type: string, callback: (e: any) => void): () => void; $set?(props: Partial<Props>): void; }, any>'.
Type '{ $on?(type: string, callback: (e: any) => void): () => void; $set?(props: Partial<Props>): void; }' provides no match for the signature 'new (options: ComponentConstructorOptions<{ $on?(type: string, callback: (e: any) => void): () => void; $set?(props: Partial<Props>): void; }>): { ...; }'.
title: "Example/Button",
component: Button as ReturnType<typeof Button>,
tags: ["autodocs"],
```
### Reproduction link
https://codesandbox.io/p/devbox/8p9rrz
### Reproduction steps
1. Run pnpm install
2. Run pnpm check
3. Type error will be thrown
4. If `svelte-check` is reverted to `4.0.2`, svelte-check passes with no errors.
### System
```
System:
OS: Linux 6.1 Debian GNU/Linux 12 (bookworm) 12 (bookworm)
CPU: (2) x64 AMD EPYC
Shell: Unknown
Binaries:
Node: 20.9.0 - /usr/local/bin/node
Yarn: 1.22.19 - /usr/local/bin/yarn <----- active
npm: 9.8.1 - /usr/local/bin/npm
pnpm: 8.10.2 - /usr/local/share/npm-global/bin/pnpm
npmPackages:
@storybook/addon-essentials: ^8.3.5 => 8.3.5
@storybook/addon-interactions: ^8.3.5 => 8.3.5
@storybook/addon-links: ^8.3.5 => 8.3.5
@storybook/addon-svelte-csf: ^4.1.7 => 4.1.7
@storybook/blocks: ^8.3.5 => 8.3.5
@storybook/svelte: ^8.3.5 => 8.3.5
@storybook/sveltekit: ^8.3.5 => 8.3.5
@storybook/test: ^8.3.5 => 8.3.5
storybook: ^8.3.5 => 8.3.5
```
### Additional context
_No response_ | bug,help wanted,typescript,svelte,sev:S3 | low | Critical |
2,575,757,762 | godot | Import options do not change if all options get reset to default. | ### Tested versions
4c4e67334412f73c9deba5e5d29afa8651418af2
### System information
Windows 11
### Issue description
In the Advanced Importer, if physics/collision is added to a mesh, then reimported, then you go back into the importer and remove it and attempt to reimport, the physics will still be assigned. This appears to be due to utilization of the _subresources parameter in the .import file which only gets overwritten if it contains data, meaning that if you revert an imported scene to a state where it has no custom parameters, it will be perpetually stuck.
### Steps to reproduce
* Import a FBX/GLTF file.
* Open the advanced import settings.
* Add physics to one mesh in the scene.
* Reimport the scene.
* Go into the advanced import settings again and remove the physics.
* Reimport the scene.
* Go into the advanced import settings yet again and observe how the physics are still assigned when they should be removed.
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,topic:import | low | Minor |
2,575,760,422 | rust | static_mut_refs lint fires on `assert_eq` | ### Code
```rust
static mut S: i32 = 0;
fn main() {
unsafe {
assert!(S == 0);
assert_eq!(S, 0);
}
}
```
### Current output
```
warning: creating a shared reference to mutable static is discouraged
--> src/main.rs:6:20
|
6 | assert_eq!(S, 0);
| ^ shared reference to mutable static
|
= note: for more information, see <https://doc.rust-lang.org/nightly/edition-guide/rust-2024/static-mut-references.html>
= note: shared references to mutable statics are dangerous; it's undefined behavior if the static is mutated or if a mutable reference is created for it while the shared reference lives
= note: `#[warn(static_mut_refs)]` on by default
```
### Desired output
(none)
### Rationale and extra context
The two macro invocations do the same thing, but in the first case we recognize that the reference created for `==` is a "short-lived reference" and suppress the warning. In the second case, we do show a warning -- I suspect it has to do with the formatting? Indeed `println!("{}", S);` also triggers a warning. Maybe formatting macros where we know that the reference does not outlive the macro could be recognized by the lint?
An alternative would be to suggest using `{ S }` in these cases which will also avoid the warning.
### Other cases
_No response_
### Rust Version
current nightly
### Anything else?
_No response_ | A-diagnostics,T-compiler | low | Major |
2,575,786,142 | bitcoin | Distribute darknet node addresses via DNS seeds using AAAA records | ### Please describe the feature you'd like to see added.
Right now, Bitcoin Core can only receive IPv4 and IPv6 node addresses from DNS seeds. Adding support for darknet addresses would bring the advantages of using DNS seeds (more privacy, faster bootstrapping, etc.) for node discovery to darknet nodes.
### Is your feature related to a problem, if so please describe it.
_No response_
### Describe the solution you'd like
Before TorV2 addresses were deprecated, [Bitcoin Core](https://github.com/bitcoin/bitcoin/blob/5fe6878b5f7c1c97b5c8ae04be9592be73e840c1/src/netaddress.h#L64-L69) and seeds using [sipa's seeder](https://github.com/sipa/bitcoin-seeder/blob/ff482e465ff84ea6fa276d858ccb7ef32e3355d3/netbase.cpp#L535-L543) (and maybe others) supported distributing such addresses by encoding them in AAAA records: Since TorV2 addresses used only 80-bit hashes, they neatly fitted into the 128-bit data field of an AAAA record, leaving plenty of room for a 48bit prefix (from the not publicly routable fc00::/7 subnet) to signal the custom encoding.
TorV3 and I2p addresses use 256-bits of data, making them too large for single AAAA records. However, such addresses can be serialized using a trimmed BIP155 format (just the net id and address), concatenated, and broken into chunks that fit into AAAA record data fields.
In addition to a prefix to signal the custom encoding, correct deserialization requires a sequence number since public resolvers may reorder records.
To get a rough estimate for the numbers, consider the following: DNS messages should be at most 512B. The header has a fixed size of 12B. The question section has variable size because it contains the query domain. 30 chars should be sufficient for most seeds, which implies a size of 36B for the question section (32B for the 30 chars, plus 2B each for query type and class). This leaves 464B for the record section. Since AAAA records are 28B each (2B each for name pointer, type, class, length, 4B for TTL, and 128b/16B for data), a DNS reply for regular-length domains can at most contain 16 AAAA records. Hence four bits are required to encode sequence numbers.
The publicly unroutable fc00::/7 prefix used in the past uses 7 bits. Extending these by e.g. five more bits for versioning to accommodate for future updates results in a signaling-prefix size of 12 bits, leaving 112 bit (or 14B) for payload:
```
0 11 12 15 16 127
+------------+----------+------------------------------------------------------------+
| Prefix | Seq# | Payload |
| (12 bits) | (4 bits) | (112 bits) |
+------------+----------+------------------------------------------------------------+
```
At 16 AAAA records per DNS message, 14B of payload per record and 33B per TorV3/I2P address (net id plus 256-bit data), six TorV3/I2P addresses can be encoded per message.
CJDNS addresses, which are IPv6 addresses, could be encoded as well; at 17B per address (net id plus 128-bit data), 13 addresses could be encoded per message.
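The record and address budgets above can be checked mechanically. This sketch just redoes the arithmetic from the text (message, header, and question sizes are the estimates given there):

```python
MSG_MAX, HEADER, QUESTION = 512, 12, 36       # 36B question: 32B name + 2B type + 2B class
AAAA = 2 + 2 + 2 + 2 + 4 + 16                 # name ptr, type, class, rdlength, TTL, 16B data
records = (MSG_MAX - HEADER - QUESTION) // AAAA
payload = records * 14                        # 14B payload per record after the 12+4 bit header
print(records, payload // 33, payload // 17)  # -> 16 records, 6 TorV3/I2P, 13 CJDNS addresses
```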
To avoid interfering with legacy software, darknet addresses would be distributed only via special subdomains (similar to `x` used for version bits), e.g. `ni`, where `n` stands for network id and `i` for a particular network id specified in BIP155. Example: `x9.n4.seed.acme.com.`
Demo implementation: https://github.com/virtu/darkseed
Demo DNS seed: `dnsseed.21.ninja`
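For illustration, a minimal sketch of the chunked encoding and its reassembly. The concrete 12-bit prefix value below is a made-up placeholder, not something specified by the proposal:

```python
PREFIX = 0xFC9  # hypothetical 12-bit signaling prefix from within fc00::/7

def encode_chunks(payload: bytes) -> list[bytes]:
    """Split a trimmed-BIP155 payload into 16-byte AAAA record data fields:
    12-bit prefix, 4-bit sequence number, then 14 bytes of payload."""
    chunks = []
    for seq, off in enumerate(range(0, len(payload), 14)):
        assert seq < 16, "at most 16 AAAA records fit into a 512B reply"
        head = (PREFIX << 4) | seq  # first two bytes: prefix plus sequence number
        chunks.append(head.to_bytes(2, "big") + payload[off:off + 14].ljust(14, b"\x00"))
    return chunks

def decode_chunks(chunks: list[bytes]) -> bytes:
    """Reassemble the payload; public resolvers may reorder records,
    so order chunks by their sequence number first."""
    ordered = sorted(chunks, key=lambda c: c[1] & 0x0F)
    return b"".join(c[2:] for c in ordered)
```

A 33B TorV3/I2P address (net id plus 256-bit body) then spans three records, and the receiver can strip the zero padding because the address length is fixed per net id.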
### Describe any alternatives you've considered
Originally, I considered using DNS NULL records, because their data fields allow an arbitrary amount of binary data, resulting in much better encoding efficiency than the ~50% offered by the proposed AAAA encoding (14B of payload per 28B record, the rest being overhead).
This approach, however, has the following shortcomings (as pointed out by [sipa in the forum](https://delvingbitcoin.org/t/hardcoded-seeds-dns-seeds-and-darknet-nodes/1123/6)):
- Depends on an external DNS library or extra code to create DNS requests with the NULL query type
- Public recursive resolvers might not cache DNS NULL records
### Please leave any additional context
_No response_ | Feature | low | Critical |
2,575,805,723 | ollama | runner crashes with more than 15 GPUs | ### What is the issue?
I have deployed ollama using the docker image 0.3.10. Loading "big" models fails.
llama3.1 and other "small" models (e.g. codestral) fit into one GPU and work fine. llama3.1:70b is too big for a single GPU and fails to load.
This is the output of docker logs:
```
2024/10/09 12:16:49 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:1h0m0s OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:1h0m0s OLLAMA_MAX_LOADED_MODELS:3 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:5 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
time=2024-10-09T12:16:49.993Z level=INFO source=images.go:753 msg="total blobs: 21"
time=2024-10-09T12:16:49.999Z level=INFO source=images.go:760 msg="total unused blobs removed: 0"
time=2024-10-09T12:16:50.001Z level=INFO source=routes.go:1172 msg="Listening on [::]:11434 (version 0.3.10)"
time=2024-10-09T12:16:50.002Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama1657300007/runners
time=2024-10-09T12:17:00.079Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [rocm_v60102 cpu cpu_avx cpu_avx2 cuda_v11 cuda_v12]"
time=2024-10-09T12:17:00.080Z level=INFO source=gpu.go:200 msg="looking for compatible GPUs"
time=2024-10-09T12:17:03.697Z level=INFO source=types.go:107 msg="inference compute" id=GPU-6f4b9a45-fcde-cd1a-0781-7246dcb622a6 library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-f7d94192-d68a-1967-f062-3d7dfcb64aea library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-8bc1b166-ee57-ccbb-961a-967cb4dfe3ab library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-54192c94-2cf5-455b-7ebf-a137a92ad0e9 library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-52f37851-1140-6ae4-955c-b550b2434d17 library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-fabd1d82-7b92-d92a-56f0-af4fa8c23359 library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-44a53644-d89b-bd78-cd7a-0456eb2fa1de library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-79a9983c-1b89-d512-d5cd-e72c9368033c library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-ccb73f64-5800-6fca-cb79-8bf896f6a47c library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-c2897dff-5316-1816-cdd8-976ca178f89c library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-bef71b66-e6a1-af72-917d-2e399d7f4543 library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-5a4ba200-8e83-da15-b9c3-11cc5be911ea library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-241017b3-8811-dc04-8a36-b9cae1333b87 library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-516ac106-2f5f-9fb2-7ef1-76c1790e6477 library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-923baeaa-e06d-7ffe-7eab-82a02a87eef4 library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
time=2024-10-09T12:17:03.698Z level=INFO source=types.go:107 msg="inference compute" id=GPU-8b4d57a2-a60a-2da0-887a-581d3182e209 library=cuda variant=v12 compute=7.0 driver=12.3 name="Tesla V100-SXM3-32GB" total="31.7 GiB" available="31.4 GiB"
[GIN] 2024/10/09 - 12:17:03 | 200 | 80.006µs | 127.0.0.1 | HEAD "/"
[GIN] 2024/10/09 - 12:17:03 | 200 | 28.728333ms | 127.0.0.1 | POST "/api/show"
time=2024-10-09T12:17:06.297Z level=INFO source=sched.go:731 msg="new model will fit in available VRAM, loading" model=/root/.ollama/models/blobs/sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574 library=cuda parallel=5 required="76.7 GiB"
time=2024-10-09T12:17:06.297Z level=INFO source=server.go:101 msg="system memory" total="1510.1 GiB" free="1320.9 GiB" free_swap="0 B"
time=2024-10-09T12:17:06.299Z level=INFO source=memory.go:326 msg="offload to cuda" layers.requested=-1 layers.model=81 layers.offload=81 layers.split=6,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5 memory.available="[31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB 31.4 GiB]" memory.gpu_overhead="0 B" memory.required.full="76.7 GiB" memory.required.partial="76.7 GiB" memory.required.kv="3.1 GiB" memory.required.allocations="[5.5 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB 4.7 GiB]" memory.weights.total="39.0 GiB" memory.weights.repeating="38.2 GiB" memory.weights.nonrepeating="822.0 MiB" memory.graph.full="1.4 GiB" memory.graph.partial="1.4 GiB"
time=2024-10-09T12:17:06.306Z level=INFO source=server.go:391 msg="starting llama server" cmd="/tmp/ollama1657300007/runners/cuda_v12/ollama_llama_server --model /root/.ollama/models/blobs/sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574 --ctx-size 10240 --batch-size 512 --embedding --log-disable --n-gpu-layers 81 --parallel 5 --tensor-split 6,5,5,5,5,5,5,5,5,5,5,5,5,5,5,5 --port 35043"
time=2024-10-09T12:17:06.306Z level=INFO source=sched.go:450 msg="loaded runners" count=1
time=2024-10-09T12:17:06.306Z level=INFO source=server.go:590 msg="waiting for llama runner to start responding"
time=2024-10-09T12:17:06.307Z level=INFO source=server.go:624 msg="waiting for server to become available" status="llm server error"
INFO [main] build info | build=1 commit="8962422" tid="139858080260096" timestamp=1728476226
INFO [main] system info | n_threads=48 n_threads_batch=48 system_info="AVX = 1 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | " tid="139858080260096" timestamp=1728476226 total_threads=96
INFO [main] HTTP server listening | hostname="127.0.0.1" n_threads_http="95" port="35043" tid="139858080260096" timestamp=1728476226
llama_model_loader: loaded meta data with 29 key-value pairs and 724 tensors from /root/.ollama/models/blobs/sha256-a677b4a4b70c45e702b1d600f7905e367733c53898b8be60e3f29272cf334574 (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Meta Llama 3.1 70B Instruct
llama_model_loader: - kv 3: general.finetune str = Instruct
llama_model_loader: - kv 4: general.basename str = Meta-Llama-3.1
llama_model_loader: - kv 5: general.size_label str = 70B
llama_model_loader: - kv 6: general.license str = llama3.1
llama_model_loader: - kv 7: general.tags arr[str,6] = ["facebook", "meta", "pytorch", "llam...
llama_model_loader: - kv 8: general.languages arr[str,8] = ["en", "de", "fr", "it", "pt", "hi", ...
llama_model_loader: - kv 9: llama.block_count u32 = 80
llama_model_loader: - kv 10: llama.context_length u32 = 131072
llama_model_loader: - kv 11: llama.embedding_length u32 = 8192
llama_model_loader: - kv 12: llama.feed_forward_length u32 = 28672
llama_model_loader: - kv 13: llama.attention.head_count u32 = 64
llama_model_loader: - kv 14: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 15: llama.rope.freq_base f32 = 500000.000000
llama_model_loader: - kv 16: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 17: general.file_type u32 = 2
llama_model_loader: - kv 18: llama.vocab_size u32 = 128256
llama_model_loader: - kv 19: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 20: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 21: tokenizer.ggml.pre str = llama-bpe
llama_model_loader: - kv 22: tokenizer.ggml.tokens arr[str,128256] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 23: tokenizer.ggml.token_type arr[i32,128256] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 24: tokenizer.ggml.merges arr[str,280147] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 25: tokenizer.ggml.bos_token_id u32 = 128000
llama_model_loader: - kv 26: tokenizer.ggml.eos_token_id u32 = 128009
llama_model_loader: - kv 27: tokenizer.chat_template str = {{- bos_token }}\n{%- if custom_tools ...
llama_model_loader: - kv 28: general.quantization_version u32 = 2
llama_model_loader: - type f32: 162 tensors
llama_model_loader: - type q4_0: 561 tensors
llama_model_loader: - type q6_K: 1 tensors
time=2024-10-09T12:17:06.558Z level=INFO source=server.go:624 msg="waiting for server to become available" status="llm server loading model"
llm_load_vocab: special tokens cache size = 256
llm_load_vocab: token to piece cache size = 0.7999 MB
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 128256
llm_load_print_meta: n_merges = 280147
llm_load_print_meta: vocab_only = 0
llm_load_print_meta: n_ctx_train = 131072
llm_load_print_meta: n_embd = 8192
llm_load_print_meta: n_layer = 80
llm_load_print_meta: n_head = 64
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_swa = 0
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 28672
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn = 131072
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: ssm_dt_b_c_rms = 0
llm_load_print_meta: model type = 70B
llm_load_print_meta: model ftype = Q4_0
llm_load_print_meta: model params = 70.55 B
llm_load_print_meta: model size = 37.22 GiB (4.53 BPW)
llm_load_print_meta: general.name = Meta Llama 3.1 70B Instruct
llm_load_print_meta: BOS token = 128000 '<|begin_of_text|>'
llm_load_print_meta: EOS token = 128009 '<|eot_id|>'
llm_load_print_meta: LF token = 128 'Ä'
llm_load_print_meta: EOT token = 128009 '<|eot_id|>'
llm_load_print_meta: max token length = 256
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 16 CUDA devices:
Device 0: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
Device 1: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
Device 2: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
Device 3: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
Device 4: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
Device 5: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
Device 6: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
Device 7: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
Device 8: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
Device 9: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
Device 10: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
Device 11: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
Device 12: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
Device 13: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
Device 14: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
Device 15: Tesla V100-SXM3-32GB, compute capability 7.0, VMM: yes
llm_load_tensors: ggml ctx size = 5.76 MiB
time=2024-10-09T12:17:08.014Z level=INFO source=server.go:624 msg="waiting for server to become available" status="llm server not responding"
time=2024-10-09T12:17:10.973Z level=INFO source=server.go:624 msg="waiting for server to become available" status="llm server loading model"
llm_load_tensors: offloading 80 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 81/81 layers to GPU
llm_load_tensors: CPU buffer size = 563.62 MiB
llm_load_tensors: CUDA0 buffer size = 2754.38 MiB
llm_load_tensors: CUDA1 buffer size = 2295.31 MiB
llm_load_tensors: CUDA2 buffer size = 2295.31 MiB
llm_load_tensors: CUDA3 buffer size = 2295.31 MiB
llm_load_tensors: CUDA4 buffer size = 2295.31 MiB
llm_load_tensors: CUDA5 buffer size = 2295.31 MiB
llm_load_tensors: CUDA6 buffer size = 2295.31 MiB
llm_load_tensors: CUDA7 buffer size = 2295.31 MiB
llm_load_tensors: CUDA8 buffer size = 2295.31 MiB
llm_load_tensors: CUDA9 buffer size = 2295.31 MiB
llm_load_tensors: CUDA10 buffer size = 2295.31 MiB
llm_load_tensors: CUDA11 buffer size = 2295.31 MiB
llm_load_tensors: CUDA12 buffer size = 2295.31 MiB
llm_load_tensors: CUDA13 buffer size = 2295.31 MiB
llm_load_tensors: CUDA14 buffer size = 2295.31 MiB
llm_load_tensors: CUDA15 buffer size = 2658.24 MiB
llama_new_context_with_model: n_ctx = 10240
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base = 500000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA0 KV buffer size = 240.00 MiB
llama_kv_cache_init: CUDA1 KV buffer size = 200.00 MiB
llama_kv_cache_init: CUDA2 KV buffer size = 200.00 MiB
llama_kv_cache_init: CUDA3 KV buffer size = 200.00 MiB
llama_kv_cache_init: CUDA4 KV buffer size = 200.00 MiB
llama_kv_cache_init: CUDA5 KV buffer size = 200.00 MiB
llama_kv_cache_init: CUDA6 KV buffer size = 200.00 MiB
llama_kv_cache_init: CUDA7 KV buffer size = 200.00 MiB
llama_kv_cache_init: CUDA8 KV buffer size = 200.00 MiB
llama_kv_cache_init: CUDA9 KV buffer size = 200.00 MiB
llama_kv_cache_init: CUDA10 KV buffer size = 200.00 MiB
llama_kv_cache_init: CUDA11 KV buffer size = 200.00 MiB
llama_kv_cache_init: CUDA12 KV buffer size = 200.00 MiB
llama_kv_cache_init: CUDA13 KV buffer size = 200.00 MiB
llama_kv_cache_init: CUDA14 KV buffer size = 200.00 MiB
llama_kv_cache_init: CUDA15 KV buffer size = 160.00 MiB
llama_new_context_with_model: KV self size = 3200.00 MiB, K (f16): 1600.00 MiB, V (f16): 1600.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 2.60 MiB
/go/src/github.com/ollama/ollama/llm/llama.cpp/ggml/src/ggml-backend.c:1864: GGML_ASSERT(n_backends <= GGML_SCHED_MAX_BACKENDS) failed
```
### OS
Linux, Docker
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.10 | feature request | low | Critical |
2,575,833,403 | go | x/build: update OpenBSD builders to 7.6 | OpenBSD 7.6 has just been released (https://www.openbsd.org/76.html) - this means that the only two supported OpenBSD releases are now 7.5 and 7.6. The current openbsd-amd64 and openbsd-386 builders are running 7.2 which is unsupported. As such, these should be updated to run 7.6. | OS-OpenBSD,Builders,NeedsFix,new-builder | low | Major |
2,575,860,551 | PowerToys | Send multiline text | ### Description of the new feature / enhancement
Hello,
is it possibel to send multiline text with the keyboard manager?
e.g. press Win+Ctrl+S to send a signature text?
### Scenario when this would be used?
...
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,575,879,857 | TypeScript | Iterator Helpers: `return` method typing is weaker than ideal | ### ⚙ Compilation target
ESNext
### ⚙ Library
esnext.iterator.d.ts
### Missing / Incorrect Definition
The `return` method always exists on iterators returned by `Iterator.from` (`iter` in below code) and iterators returned by Iterator Helpers methods (`iter2` in below code).
However, it is defined as optional in the current definition. This could be improved.
Please note that you can't just add a non-optional `return` method definition to the `IteratorObject` type -- iterators returned by builtin methods other than Iterator Helpers don't have that method (`iter3`).
_P.S. if you are interested in a motivating example, I don't have one. Sorry 🤪_
### Sample Code
```ts
const iter = Iterator.from({ next() { return { done: false, value: 0 }} });
iter.return(); // should not error
const iter2 = iter.map(x => x);
iter2.return(); // should not error
const iter3 = [].values();
iter3.return(); // Note: this error IS correct
```
[Playground](https://www.typescriptlang.org/play/?target=99&ts=5.6.2#code/MYewdgzgLgBAllApgJxgXhgSScghlEZAOgDNkQBbACgG8YxEAPKKgShjuUSgFdkwOMACbhEALhglcAGwiIANDABuMnuJgAGGAF9tO1gG4AUEYQoiXXvzYGYAejswIACxA9pQ+iFgpyyE6CQsGbIAEzo8DhEFLgADlSM6AB8MIyGJiGhFtx8YDb2ji5uHl4+yH4B4NCRKADMEQDaALpEKtJqEDYZOLXZVnmGBTAAct7qUM5wEDC+hFgAyjCg5YjAUCZAA)
### Documentation Link
https://tc39.es/proposal-iterator-helpers/
Spec states that below objects always have the `return` method:
- %WrapForValidIteratorPrototype%
- %IteratorHelperPrototype% | Suggestion,Awaiting More Feedback | low | Critical |
2,575,890,703 | flutter | popup_failed_to_open error on Safari with google_identity_services_web | ### Steps to reproduce
Use the code sample below with Google Identity Services Web (google_identity_services_web).
Attempt to request an access token by calling client.requestAccessToken on Safari.
The error popup_failed_to_open appears in the console, indicating that the popup window failed to open.
### Expected results
The popup should open correctly in Safari, allowing the user to grant consent and proceed with the Google sign-in process.
### Actual results
In Safari, the popup fails to open, and the error message popup_failed_to_open is thrown.
Workarounds Attempted:
I tried the following workarounds, but the issue persists:
Ensuring that the popup blocker is disabled.
Trying different configurations for the token request.
Affected Platforms:
Safari (macOS, iOS)
Other browsers (e.g., Chrome, Firefox) work fine.
### Code sample
<details open><summary>Code sample</summary>
```dart
void googleSignIn(
Function(String token) onSuccess, VoidCallback onError) async {
await gis.loadWebSdk();
final TokenClientConfig config = TokenClientConfig(
client_id: '',
scope: scopes,
enable_granular_consent: true,
callback: (token) => onTokenResponse(token, onSuccess, onError),
error_callback: (error) => onErrorResponse.call(error, onError),
);
final OverridableTokenClientConfig overridableCfg = OverridableTokenClientConfig(
scope: scopes + myConnectionsScopes,
);
final TokenClient client = oauth2.initTokenClient(config);
// Disable the Popup Blocker for this to work, or move this to a Button press.
client.requestAccessToken(overridableCfg);
}
Future<void> onErrorResponse(
GoogleIdentityServicesError? error, VoidCallback onError) async {
onError.call();
}
Future<void> onTokenResponse(TokenResponse token,
Function(String token) onSuccess, VoidCallback onError) async {
if (token.error != null) {
onError.call();
return;
}
await onSuccess.call(token.access_token ?? '');
oauth2.revoke(token.access_token!, (TokenRevocationResponse response) {
});
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.2, on macOS 14.5 23F79 darwin-x64, locale en-IR)
• Flutter version 3.24.2 on channel stable at /Users/alireza/Library/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 4cf269e36d (5 weeks ago), 2024-09-03 14:30:00 -0700
• Engine revision a6bd3f1de1
• Dart version 3.5.2
• DevTools version 2.37.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/alireza/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.9+0-17.0.9b1087.7-11185874)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2023.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.9+0-17.0.9b1087.7-11185874)
[✓] VS Code (version 1.93.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.88.0
[✓] Connected device (3 available)
• sdk gphone64 x86 64 (mobile) • emulator-5554 • android-x64 • Android 13 (API 33) (emulator)
• iPhone 15 (mobile) • 5C53B2B6-8755-45AC-A6EF-AD9483448B69 • ios •
com.apple.CoreSimulator.SimRuntime.iOS-17-2 (simulator)
• Chrome (web) • chrome • web-javascript • Google Chrome 129.0.6668.90
[✓] Network resources
• All expected network resources are available.
```
</details>
| platform-web,package,browser: safari-macos,P2,browser: safari-ios,team-web,triaged-web,p: google_identity_services_web | low | Critical |
2,575,897,028 | tensorflow | Tensorflow Distributed AlltoAll | ### Issue type
Feature Request
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
Tf 2.9
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Hi,
I am trying to find a workaround, or ideally an implementation of the MPI [AlltoAll](https://www.mpich.org/static/docs/v3.4/www3/MPI_Alltoall.html) primitive.
As far as I can tell, TensorFlow has an AllToAll op, but only for TPUs: https://www.tensorflow.org/api_docs/python/tf/raw_ops/AllToAll.
In a distributed setup with 4 devices, this would allow me to go from
```bash
PerReplica:{
0: [0 1 2 3],
1: [0 1 2 3],
2: [0 1 2 3],
3: [0 1 2 3]
}
```
to
```
PerReplica:{
0: [0 0 0 0],
1: [1 1 1 1],
2: [2 2 2 2],
3: [3 3 3 3]
}
```
with a single communication call. I have managed to get this working by sharding with DTensor, but I'm trying to stay within the `tf.distribute` setting. Is there an easy workaround?
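The transform being asked for is just a transpose of the per-replica chunks. A pure-Python sketch of the MPI alltoall semantics (no `tf.distribute` involved, only to pin down the expected result):

```python
def alltoall(per_replica):
    """MPI-style alltoall: replica d receives chunk d from every replica s;
    with equal-sized chunks this is a transpose of the (replica, chunk) grid."""
    n = len(per_replica)
    return [[per_replica[s][d] for s in range(n)] for d in range(n)]

data = [[0, 1, 2, 3] for _ in range(4)]  # every replica holds [0 1 2 3]
print(alltoall(data))  # replica i ends up holding [i, i, i, i]
```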
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
import numpy as np
# tf.debugging.set_log_device_placement(True)
tf.config.set_visible_devices([], device_type='GPU')
# Create sharding mesh
def configure_virtual_cpus(ncpu):
phy_devices = tf.config.list_physical_devices('CPU')
tf.config.set_logical_device_configuration(phy_devices[0], [
tf.config.LogicalDeviceConfiguration(),
] * ncpu)
ndev = 4
configure_virtual_cpus(ndev)
devices = [f'CPU:{i}' for i in range(ndev)]
strategy = tf.distribute.MirroredStrategy(devices)
arr = np.arange(0,4)
arr_total = np.stack([arr]*4)
def value_fn(ctx):
return arr_total[ctx.replica_id_in_sync_group]
arr_tf = strategy.experimental_distribute_values_from_function(value_fn)
print(arr_tf)
# How to achieve the AllToAll operation here?
```
### Relevant log output
```shell
PerReplica:{
0: [0 1 2 3],
1: [0 1 2 3],
2: [0 1 2 3],
3: [0 1 2 3]
}
```
| type:feature,comp:ops | medium | Critical |
2,575,903,953 | rust | `assert_eq!` is not 100% hygienic | Not sure if this should be considered a bug or a diagnostic issue.
Having a `const left_val` or `const right_val` declared breaks `assert_eq!`. This has to do with its expansion and Rust's rules for macro hygiene: https://sabrinajewson.org/blog/truly-hygienic-let
Consider this code
```rust
fn main() {
let x: u8 = 0;
assert_eq!(x, 0);
}
```
according to `cargo expand` it expands to
```rust
#![feature(prelude_import)]
#[prelude_import]
use std::prelude::rust_2021::*;
#[macro_use]
extern crate std;
fn main() {
let x: u8 = 0;
match (&x, &0) {
(left_val, right_val) => {
if !(*left_val == *right_val) {
let kind = ::core::panicking::AssertKind::Eq;
::core::panicking::assert_failed(
kind,
&*left_val,
&*right_val,
::core::option::Option::None,
);
}
}
};
}
```
Since `assert_eq!` wants to use the value of the provided expressions twice (once for comparison, once for printing the result on failure), but it only wants to evaluate each expression once, it does a `match` to bind them to a pattern `(left_val, right_val)`. However, having a `const` named `left_val` or `right_val` in scope changes the meaning of the pattern.
```rust
fn main() {
let x: u8 = 0;
const left_val: i8 = -123;
assert_eq!(x, 0);
}
```
```
error[E0308]: mismatched types
--> src/main.rs:4:5
|
3 | const left_val: i8 = -123;
| ------------------ constant defined here
4 | assert_eq!(x, 0);
| ^^^^^^^^^^^^^^^^
| |
| expected `&u8`, found `i8`
| this expression has type `(&u8, &{integer})`
| `left_val` is interpreted as a constant, not a new binding
| help: introduce a new binding instead: `other_left_val`
|
= note: this error originates in the macro `assert_eq` (in Nightly builds, run with -Z macro-backtrace for more info)
error[E0614]: type `i8` cannot be dereferenced
--> src/main.rs:4:5
|
4 | assert_eq!(x, 0);
| ^^^^^^^^^^^^^^^^
|
= note: this error originates in the macro `assert_eq` (in Nightly builds, run with -Z macro-backtrace for more info)
```
The error message, admittedly, is not very helpful.
Thankfully, you can't use this to make `assert_eq` pass or fail when it shouldn't. The worst you can achieve is a cryptic error message from the compiler. I think. So this "bug" is not really exploitable, and the chances of *accidentally* triggering it are probably pretty low (`const`s are usually named in `UPPER_CASE` in Rust); still, the diagnostic leaves room for improvement.
The article I've linked above (https://sabrinajewson.org/blog/truly-hygienic-let) offers a potential solution for this. TL;DR: due to shadowing shenanigans, having a function named `left_val` in scope will prevent `left_val` from being interpreted as a const in patterns.
@rustbot label A-macros A-diagnostics C-bug D-confusing D-terse | A-diagnostics,A-macros,T-compiler,C-bug,D-confusing,D-terse,T-libs,A-hygiene | medium | Critical |
2,575,919,936 | godot | Android Editor: Loading indicator is not visible in compatibility renderer | ### Tested versions
- Reproducible in 4.4.dev2, 4.4.dev3, 4.4.beta1
### System information
Godot v4.4.dev3 - Android - Single-window, 1 monitor - OpenGL ES 3 (Compatibility) - Mali-G52 - (8 threads)
### Issue description
AFAIK there used to be a loading indicator. The loading indicator shown when exporting from the Android Editor seems to be missing, and the same goes for installing the export template. Without it, the Editor feels unresponsive.
https://github.com/user-attachments/assets/50ab969a-30f2-435b-9d44-14ac05a08bdb
### Steps to reproduce
Just export any project from the Android Editor
### Minimal reproduction project (MRP)
N/A | bug,platform:android,topic:editor | medium | Major |
2,575,992,679 | vscode | [json] support 'CodeActionContext.only' | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
- VS Code Version: 1.94.1
- OS Version: Ubuntu24.04
Recently I've noticed that whenever we save any JSON file, it gives the warning below in **Output**:
```log
2024-10-09 19:11:08.851 [warning] vscode.json-language-features - Code actions of kind 'source.fixAll' requested but returned code action is of kind 'source.sort.json'. Code action will be dropped. Please check 'CodeActionContext.only' to only return requested code actions.
2024-10-09 19:11:08.855 [warning] vscode.json-language-features - Code actions of kind 'source.organizeImports' requested but returned code action is of kind 'source.sort.json'. Code action will be dropped. Please check 'CodeActionContext.only' to only return requested code actions.
```

Steps to Reproduce:
1. Open _settings.json_
2. Add below settings in settings.json
```json
{
"files.autoSave": "afterDelay",
"[json][jsonc]": {
"editor.defaultFormatter": "vscode.json-language-features"
},
"editor.codeActionsOnSave": {
"source.fixAll.eslint": "explicit",
"source.fixAll": "explicit",
"source.organizeImports": "explicit"
}
}
```
3. Open the Output panel and select "Extension Host"
4. Open any JSON file and press Ctrl+S (Save)
5. You can see the warning.
Even one of the warnings comes from the vscode-eslint extension.
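For illustration, here is a hedged sketch (my own model, not the actual vscode source; `kindContains` and `shouldReturnAction` are hypothetical names) of the hierarchical kind matching that `CodeActionContext.only` implies: a provider honoring it would only return actions whose kind is contained in one of the requested kinds.

```typescript
// CodeActionKind containment is hierarchical: a parent kind contains itself
// and any kind that extends it with a "."-separated suffix.
function kindContains(parent: string, child: string): boolean {
  return child === parent || child.startsWith(parent + ".");
}

// A provider honoring CodeActionContext.only would filter like this before
// returning actions (kinds are modeled here as plain strings).
function shouldReturnAction(actionKind: string, only?: string[]): boolean {
  if (!only || only.length === 0) return true; // nothing requested: no filter
  return only.some((requested) => kindContains(requested, actionKind));
}

// The warnings above: 'source.sort.json' was returned even though only
// 'source.fixAll' / 'source.organizeImports' were requested, so it is dropped.
console.log(shouldReturnAction("source.sort.json", ["source.fixAll"])); // false
console.log(shouldReturnAction("source.fixAll.eslint", ["source.fixAll"])); // true
```

With `context.only` checked this way, the JSON language features would simply skip emitting the sort action on `source.fixAll` requests, and the warning would disappear.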
| bug,json | low | Critical |
2,575,995,129 | godot | Nested `CSGShape3D` doesn't update its AABB gizmo if visibility is disabled | ### Tested versions
4.3, current master
### System information
Godot v4.3.stable - Debian GNU/Linux trixie/sid trixie - Wayland - GLES3 (Compatibility) - Mesa Intel(R) HD Graphics 520 (SKL GT2) - Intel(R) Core(TM) i3-6006U CPU @ 2.00GHz (4 Threads)
### Issue description
If a **nested** `CSGShape3D` has its visibility disabled, its AABB gizmo doesn't update, whereas it does update when the node is visible
https://github.com/user-attachments/assets/cb3212c1-7434-437b-9db2-3994f81d9036
### Steps to reproduce
1. Create a scene with two CSGShapes, one inside the other
2. Disable the visibility of the nested one
3. Select the nested one (to see the AABB gizmo)
4. Change its size
You would expect to see the gizmo reflect the size change
### Minimal reproduction project (MRP)
[csg_test.zip](https://github.com/user-attachments/files/17309230/csg_test.zip)
| needs testing,topic:3d | low | Minor |
2,576,043,700 | flutter | The pub roll is blocked by build failures | The roll here https://github.com/flutter/flutter/pull/156440
Caused failures like https://logs.chromium.org/logs/flutter/buildbucket/cr-buildbucket/8734597920760676097/+/u/run_platform_views_scroll_perf_ad_banners__timeline_summary/stdout
```
[2024-10-08 18:00:22.743431] [STDOUT] stdout: Resolving dependencies of `Podfile`
[2024-10-08 18:00:22.743457] [STDOUT] stdout: CDN: trunk Relative path: CocoaPods-version.yml exists! Returning local because checking is only performed in repo update
[2024-10-08 18:00:22.743485] [STDOUT] stdout: CDN: trunk Relative path: all_pods_versions_5_9_a.txt exists! Returning local because checking is only performed in repo update
[2024-10-08 18:00:22.743515] [STDOUT] stdout: CDN: trunk Relative path: Specs/5/9/a/Google-Mobile-Ads-SDK/11.2.0/Google-Mobile-Ads-SDK.podspec.json exists! Returning local because checking is only performed in repo update
[2024-10-08 18:00:22.743591] [STDOUT] stdout: CDN: trunk Relative path: all_pods_versions_4_2_c.txt exists! Returning local because checking is only performed in repo update
[2024-10-08 18:00:22.743630] [STDOUT] stdout: CDN: trunk Relative path: Specs/4/2/c/FlutterMacOS/3.16.0/FlutterMacOS.podspec.json exists! Returning local because checking is only performed in repo update
[2024-10-08 18:00:22.743647] [STDOUT] stdout: [!] CocoaPods could not find compatible versions for pod "Google-Mobile-Ads-SDK":
[2024-10-08 18:00:22.743695] [STDOUT] stdout: In Podfile:
[2024-10-08 18:00:22.743718] [STDOUT] stdout: google_mobile_ads (from `.symlinks/plugins/google_mobile_ads/ios`) was resolved to 5.2.0, which depends on
[2024-10-08 18:00:22.743859] [STDOUT] stdout: Google-Mobile-Ads-SDK (~> 11.10.0)
``` | team-infra,P1,triaged-infra | medium | Critical |
2,576,056,905 | ollama | CORS (Cross-Origin Resource Sharing) | ### What is the issue?
Please enable CORS (Cross-Origin Resource Sharing) in the REST API.
| bug,api | low | Minor |
2,576,058,701 | godot | Nested `CSGShape` AABB gizmo isn't visible if the node is invisible when the scene is opened | ### Tested versions
4.3, current master
### System information
Godot v4.3.stable - Debian GNU/Linux trixie/sid trixie - Wayland - GLES3 (Compatibility) - Mesa Intel(R) HD Graphics 520 (SKL GT2) - Intel(R) Core(TM) i3-6006U CPU @ 2.00GHz (4 Threads)
### Issue description
If a **nested** `CSGShape3D` has its visibility disabled, it doesn't show the AABB gizmo when selected after opening the scene
https://github.com/user-attachments/assets/8044e0d4-095f-4fe4-b4ed-965edcdcb9df
### Steps to reproduce
1. Create a scene with two CSGShapes, one inside the other
2. Disable the visibility of the nested one
3. Close the scene
4. Open the scene
5. Select the nested shape (in the Scene tree dock)
You would expect to see the AABB gizmo, but there is none.
6. Make the shape visible
7. Make the shape invisible again
Now the AABB gizmo is visible
### Minimal reproduction project (MRP)
[csg_test.zip](https://github.com/user-attachments/files/17309484/csg_test.zip)
| bug,topic:editor,needs testing,topic:3d | low | Minor |
2,576,097,317 | vscode | Modified image attachments don't update the thumbnail/hover | Repro:
1. Open chat
2. Attach an image file
3. Edit and save the image, 🐛 the thumbnail and hover aren't changed | bug,panel-chat,chat | low | Minor |
2,576,105,071 | go | x/tools/go/packages: support for fake GOROOTs | This issue is the continuation of discussions in [CL 615396](https://go-review.googlesource.com/c/tools/+/615396/comments/454f8fb1_42f3239b).
When I tried to migrate x/tools/go/ssa/interp away from the deprecated loader package in its test cases in [CL 615016](https://go-review.googlesource.com/c/tools/+/615016), I attempted to create a fake GOROOT with [some mocked std libs](https://github.com/golang/tools/tree/master/go/ssa/interp/testdata) so `packages.Load` could load the faked std libs, which will be complemented by the `go/ssa/interp` interpreter later.
Even though @adonovan has provided a better approach using overlays, and a custom GOROOT doesn't make sense in `x/tools/go/ssa/interp`, there isn't a conclusion regarding "whether we should support the usage of fake GOROOTs." Perhaps we can discuss here whether we should support/recommend the usage of fake GOROOTs.
As for me, I have tried to add some libraries under GOROOT with my own toolchain, because I want to use internal std package features but do not want to use the go:linkname directive (which is locked down in 1.23 and higher versions, #67401).
The second point to discuss (related to the custom GOROOT) is whether it's desirable for users to add new packages under GOROOT and have the compiler treat them as standard libraries as well.
@timothy-king @matloob @adonovan | NeedsInvestigation,Tools | low | Minor |
2,576,110,201 | tauri | [feat] add some twain scanner support | ### Describe the problem
There is no standard API to scan a document from a scanner. For some apps it's crucial to have features like scanning or printing an invoice.
### Describe the solution you'd like
I would like to have a native plugin that grants access to the host system's scanning feature
### Alternatives considered
there are some paid solutions, like: https://www.dynamsoft.com/codepool/tauri-document-scanning-desktop-app.html
### Additional context
https://twaindirect.org/twain-direct/ | type: feature request | low | Minor |
2,576,121,161 | kubernetes | SharedIndexInformer not calling event handlers during the first resync | This issue is the root cause for https://github.com/argoproj/argo-workflows/issues/10947
Based on my tests, the issue was introduced between client-go `v0.24.0-alpha.1` and `v0.24.0-alpha.2`.
Seems like when the first resync happens (20 minutes in case of argo-workflows), the event handlers are not being called.
I was able to identify a potential change that introduced this: https://github.com/kubernetes/client-go/commit/54928eef9f824667b23a938188498992d437156a#diff-dbac68d65bbc9814c2336df6ee91f22e807d84a9127a4d34a8b39c5657d0e447L551
Looking at the code, there is a small behavior change when evaluating the isSync flag:
- resource version was checked only when `d.Type == Replaced`. Now it's being evaluated on all updates
- the `d.Type == Sync` set the isSync to true
- for any other cases: isSync is false
All in all, there might be a chance of falsely setting isSync to true.
This flag is being used in `distribute`, which conditionally calls the listeners.
The issue also reproduces with latest client-go version: v0.29.0-alpha.2 | sig/api-machinery,triage/accepted | medium | Major |
2,576,201,252 | ui | [bug]: Property 'id' does not exist on type 'IntrinsicAttributes & SelectProps'. | ### Describe the bug
I want to add an "id" property to the "Select" component, like this:
```jsx
<Select id="test">
// blabla
</Select>
```
It says:
Property 'id' does not exist on type 'IntrinsicAttributes & SelectProps'.
### Affected component/components
Select
### How to reproduce
Just try this:
```jsx
<Select id="test">
// blabla
</Select>
```
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Windows 11, Chrome
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,576,208,808 | pytorch | [Feature Request] Enable Saving and Loading of Compiled Models with torch.compile | ### 🚀 The feature, motivation and pitch
**Feature:** Add functionality to save and load models compiled with `torch.compile`.
**Motivation:** I'm working on a project with a large diffusion model that requires lengthy compilation on each startup. Saving compiled models would significantly reduce startup times and improve efficiency in production environments, development workflows, and resource-constrained devices.
**Pitch:** This feature would enhance performance by eliminating redundant compilation, and increase usability for deploying optimized models.
### Alternatives
1. TorchScript: Less flexible, may not capture all dynamic behaviors.
2. Using `TORCHINDUCTOR_CACHE_DIR`, but it is still slow: 327 s vs 119 s after setting the cache dir.
### Additional context
_No response_
cc @ezyang @chauhang @penguinwu | awaiting response (this tag is deprecated),feature,triaged,oncall: pt2 | low | Major |
2,576,249,888 | kubernetes | Possible Impact of losing .io domains and alternatives | - Article: https://every.to/p/the-disappearance-of-an-internet-domain
- Discussion on slack: https://kubernetes.slack.com/archives/CCK68P2Q2/p1728467155678289
- Thread on mailing list: https://groups.google.com/a/kubernetes.io/g/steering/c/YY2-Omod1E0/m/gUtXFw05AAAJ
Quote from linked [article](https://www.computerworld.com/article/3552692/is-the-io-top-level-domain-headed-for-extinction.html):
```
According to Davies, one such change “may involve ensuring there is an operational nexus with Mauritius to meet certain policy requirements.
Should .io no longer be retained as a coding for this territory, it would trigger a [five-year retirement process](https://www.iana.org/help/cctld-retirement),
during which time registrants may need to migrate to a successor code or an alternate location.”
```
Link to policy from @sftim:
https://ccnso.icann.org/sites/default/files/field-attached/board-report-proposed-policy-retirement-cctlds-17sep21-en.pdf | sig/architecture,lifecycle/frozen,triage/accepted,sig/k8s-infra | medium | Critical |
2,576,319,582 | rust | ICE: jump threading: 2 != 1 | <!--
ICE: Rustc ./91964C880ED58D1F6B540BD061B1598C12BF59696D2284847B5E942DADBFC240.rs '-Zmir-opt-level=5 -Zvalidate-mir -ooutputfile -Zdump-mir-dir=dir' 'thread 'rustc' panicked at compiler/rustc_mir_transform/src/jump_threading.rs:741:9: 'assertion `left == right` failed'', 'thread 'rustc' panicked at compiler/rustc_mir_transform/src/jump_threading.rs:741:9: 'assertion `left == right` failed''
File: /home/gh-matthiaskrgr/im/91964C880ED58D1F6B540BD061B1598C12BF59696D2284847B5E942DADBFC240.rs
-->
auto-reduced (treereduce-rust):
````rust
fn check_multiple_lints_3(terminate: bool) {
while true {}
while !terminate {}
}
````
<details><summary><strong>original code</strong></summary>
<p>
original:
````rust
//@ check-pass
#![warn(unused)]
// This expect attribute should catch all lint triggers
#[expect(unused_variables)]
fn check_multiple_lints_1() {
let value_i = 0xff00ff;
let value_ii = 0xff00ff;
let value_iii = 0xff00ff;
let value_iiii = 0xff00ff;
let value_iiiii = 0xff00ff;
}
// This expect attribute should catch all lint triggers
#[expect(unused_mut)]
fn check_multiple_lints_2() {
let mut a = 0xa;
let mut b = 0xb;
let mut c = 0xc;
println!(
unused_mut,
//~^ WARNING this lint expectation is unfulfilled [unfulfilled_lint_expectations]
//~| NOTE `#[warn(unfulfilled_lint_expectations)]` on by default
//~| NOTE this `expect` is overridden by a `allow` attribute before the `unused_mut` lint is triggered
reason = "this `expect` is overridden by a `allow` attribute before the `unused_mut` lint is triggered"
);
}
// This expect attribute should catch all lint triggers
#[warn(
unused_mut,
//~^ NOTE the lint level is defined here
reason = "this overrides the previous `expect` lint level and warns about the `unused_mut` lint here"
)]
fn check_multiple_lints_3(terminate: bool) {
// `while_true` is an early lint
while true {}
while true {}
while true {
println!("I never stop")
}
while !terminate {
println!("Do you know what a spin lock is?")
}
while true {}
}
fn main() {
check_multiple_lints_1();
check_multiple_lints_2();
check_multiple_lints_3();
}
````
</p>
</details>
Version information
````
rustc 1.83.0-dev
binary: rustc
commit-hash: unknown
commit-date: unknown
host: x86_64-unknown-linux-gnu
release: 1.83.0-dev
LLVM version: 19.1.1
````
Command:
`/home/gh-matthiaskrgr/.rustup/toolchains/local-debug-assertions/bin/rustc -Zmir-opt-level=5 -Zvalidate-mir`
<!--
query stack:
#0 [optimized_mir] optimizing MIR for `check_multiple_lints_3`
#1 [analysis] running analysis passes on this crate
--> | I-ICE,T-compiler,C-bug,A-mir-opt,S-bug-has-test,requires-debug-assertions | low | Critical |
2,576,326,338 | ollama | Chat History persists with `ollama run` | ### What is the issue?

See the image. The chat history persists for [Falcon](https://ollama.com/Hudson/falcon-mamba-instruct), but not for any other models (at least that I've tried so far). I would expect it not to persist. It even persists after re-creating the model:

### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.3.12 | bug | low | Major |
2,576,371,247 | godot | Godot 4.3 Can't Resume from Background on Android | ### Tested versions
- 4.3.stable
### System information
Android 9 - Godot v4.3.stable - Compatibility
### Issue description
Export the Platformer 2D sample project to Android. Open the exported game, then send it to the background; the game can't be resumed when clicking the game icon again.
### Steps to reproduce
1. Export the Platformer 2d sample project to Android
2. Install the exported game on Android phone
3. Click the game icon to open
4. Send the game to the background
5. Click the game icon again to go back to the game
6. Notice that the game doesn't resume and the splash screen is shown
7. Logcat prints error: Fatal signal 4 (SIGILL), code 1 (ILL_ILLOPC), fault addr 0x8a048a40 in tid 23512 (le.platformer2d), pid 23512 (le.platformer2d)...
<img width="1261" alt="Screenshot 2024-10-09 at 23 10 32" src="https://github.com/user-attachments/assets/b0870d12-b52b-42dd-b9e9-f5b4705df4a9">
### Minimal reproduction project (MRP)
The Platformer 2d sample project | bug,platform:android | low | Critical |
2,576,374,201 | rust | Compilation error when matching reference to empty enum | > [!NOTE]
> Useful background links which have been shared in comments:
> * The closely related https://github.com/rust-lang/rust/issues/119612#issue-2067102446 was stabilized a couple of months ago.
> * The full https://github.com/rust-lang/rust/issues/51085 feature, [particularly this post by Nadriedel summarising the subtleties](https://github.com/rust-lang/rust/issues/51085#issuecomment-743807861).
> * [Niko’s 2018 blog post](https://smallcultfollowing.com/babysteps/blog/2018/08/13/never-patterns-exhaustive-matching-and-uninhabited-types-oh-my/) on the never pattern for matching uninhabited types, and an "auto-never" desugaring.
> * [Never patterns RFC](https://github.com/rust-lang/rfcs/pull/3719) (in progress as of November 2024)
### Summary
I tried this code:
```rust
enum A {}
fn f(a: &A) -> usize {
match a {}
}
```
I expected to see this happen: This compiles successfully.
Instead, this happened: (E0004 compiler error, expandable below)
<details><summary>E0004 Compiler Error</summary>
<p>
```text
error[E0004]: non-exhaustive patterns: type `&A` is non-empty
--> src/lib.rs:4:11
|
4 | match a {}
| ^
|
note: `A` defined here
--> src/lib.rs:1:6
|
1 | enum A {}
| ^
= note: the matched value is of type `&A`
= note: references are always considered inhabited
help: ensure that all possible cases are being handled by adding a match arm with a wildcard pattern as shown
|
4 ~ match a {
5 + _ => todo!(),
6 + }
|
For more information about this error, try `rustc --explain E0004`.
error: could not compile `playground` (lib) due to 1 previous error
```
</p>
</details>
Searching for "Rust E0004" links to docs that don't explain why references to uninhabited types need to be matched: https://doc.rust-lang.org/error_codes/E0004.html - you have to search for "references are always considered inhabited" which takes you to [this issue](https://github.com/rust-lang/rust/issues/78123) from 2020 - more on this history in the Background section below.
### Motivating example
This comes up commonly when creating macros which generate enums, e.g. a very simplified example ([playground link](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=330d4162e876893ae70b6de80fe71983)):
```rust
macro_rules! define_enum {
(
$name: ident,
[
$(
$variant: ident => $string_label: expr
),* $(,)?
]$(,)?
) => {
enum $name {
$($variant,)*
}
impl $name {
fn label(&self) -> &'static str {
match self {
$(Self::$variant => $string_label,)*
}
}
}
}
}
// This compiles
define_enum!{
MyEnum,
[Foo => "foo", Bar => "bar"],
}
// This fails compilation with E0004
define_enum!{
MyEmptyEnum,
[],
}
```
<details><summary>Compiler Error</summary>
<p>
```text
error[E0004]: non-exhaustive patterns: type `&MyEmptyEnum` is non-empty
--> src/lib.rs:16:23
|
16 | match self {
| ^^^^
...
31 | / define_enum!{
32 | | MyEmptyEnum,
33 | | [],
34 | | }
| |_- in this macro invocation
|
note: `MyEmptyEnum` defined here
--> src/lib.rs:32:5
|
32 | MyEmptyEnum,
| ^^^^^^^^^^^
= note: the matched value is of type `&MyEmptyEnum`
= note: references are always considered inhabited
= note: this error originates in the macro `define_enum` (in Nightly builds, run with -Z macro-backtrace for more info)
help: ensure that all possible cases are being handled by adding a match arm with a wildcard pattern as shown
|
16 ~ match self {
17 + _ => todo!(),
18 + }
|
For more information about this error, try `rustc --explain E0004`.
error: could not compile `playground` (lib) due to 1 previous error
```
</p>
</details>
The fact that this doesn't work for empty enums is quite a gotcha, and I've seen this issue arise a few times as an edge case. In most cases, it wasn't caught until a few months later, when someone used the macro to create an enum with no variants.
But why would we ever use such a macro to generate empty enums? Well this can fall out naturally when generating a hierarchy of enums, where some inner enums are empty, e.g.
```rust
define_namespaced_enums! {
Foo => {
Fruit: [apple, pear],
Vegetables: [],
}
Bar => {
Fruit: [banana],
Vegetables: [carrot],
}
}
```
### Workarounds
Various workarounds include:
* Add a blanket `_ => unreachable!("Workaround for empty enums: references to uninhabited types are considered inhabited at present")`
* In some cases where the enum is copy, use `match *self`
* Use an inner macro which counts the number of variants, and changes the code generated based on whether the enum is empty or not. And e.g. outputs one of:
* `match *self {}`
* `match self { ! }` in the body [as suggested by Ralf](https://github.com/rust-lang/unsafe-code-guidelines/issues/413#issuecomment-2401792481) which is non-stable.
### Background
This was previously raised in this issue: https://github.com/rust-lang/rust/issues/78123 but was closed as "expected behaviour" - due to the fact that:
> ... the reason that &Void (where Void is an uninhabited type) is considered inhabited is because unsafe code can construct an &-reference to an uninhabited type (e.g. with ptr::null() or mem::transmute()). I don't know enough to be sure though.
However, when I raised this as a motivating example in https://github.com/rust-lang/unsafe-code-guidelines/issues/413#issuecomment-2399672350, @RalfJung suggested I raise a new rustc issue for this, and that actually the `match` behaviour is independent of the UB semantics decision:
> So please open a new issue (in the rustc repo, not here) about the macro concern. That's a valid point, just off-topic here. Whether we accept that match code is essentially completely independent of what we decide here.
### Meta
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: aarch64-apple-darwin
release: 1.81.0
LLVM version: 18.1.7
```
Also works on nightly.
| T-compiler,C-bug,A-patterns,A-exhaustiveness-checking,T-types,F-exhaustive_patterns,T-opsem | medium | Critical |
2,576,413,622 | PowerToys | [Peek] FB2 support | ### Description of the new feature / enhancement
The ability to quickly view at least the first page of a document to understand what kind of document it is, who the author is, and what language it is in.
### Scenario when this would be used?

Currently, Windows users have no way to display a thumbnail of the first page of a .fb2 file, or use the quick view sidebar. There is no support for this format either natively or in the form of third-party modules for File Explorer, including proprietary commercial options.
### Supporting information
Of the universal e-book format viewers, [Calibre](https://github.com/kovidgoyal/calibre) is the most attractive in terms of its capabilities. But its viewer is quite heavy, and adds opened documents to the library, which in most cases I don’t need. | Needs-Triage | low | Minor |
2,576,421,190 | vscode | Support highlights coming from different `DocumentHighlightProvider` fro the same document selector | Multiple providers can be registered for a language. In that case providers are sorted by their [score](https://code.visualstudio.com/api/references/vscode-api#languages.match) and groups sequentially asked for document highlights. The process stops when a provider returns a `non-falsy` or `non-failure` result.
It'd be great to merge results from different providers together.
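As a sketch of the requested behavior (using simplified shapes, not the real `vscode.DocumentHighlight` objects), merging could collect results from every provider and deduplicate identical ranges, instead of stopping at the first non-empty result:

```typescript
// Simplified stand-in for vscode.DocumentHighlight (offsets instead of Ranges).
interface Highlight { start: number; end: number; }
type Provider = () => Highlight[] | undefined;

// Ask every provider and merge, deduplicating identical ranges, rather than
// returning only the first non-empty provider result.
function mergeHighlights(providers: Provider[]): Highlight[] {
  const seen = new Set<string>();
  const merged: Highlight[] = [];
  for (const provide of providers) {
    for (const h of provide() ?? []) {
      const key = `${h.start}:${h.end}`;
      if (!seen.has(key)) {
        seen.add(key);
        merged.push(h);
      }
    }
  }
  return merged;
}

// Two providers with one overlapping range: the merged output has 2 unique
// highlights, so neither provider's contribution is lost.
const providerA: Provider = () => [{ start: 120, end: 128 }];
const providerB: Provider = () => [{ start: 64, end: 72 }, { start: 120, end: 128 }];
console.log(mergeHighlights([providerA, providerB]).length); // 2
```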
Example:
```java
@Query("SELECT DISTINCT owner FROM Owner owner left join owner.pets WHERE owner.lastName LIKE :lastName% ")
@Transactional(readOnly = true)
Page<Owner> findByLastName(@Param("lastName") String lastName, Pageable pageable);
```
The method parameter `lastName` is referenced in the HQL query inside the `@Query` annotation. Now if the cursor is at the `lastName` inside the query, we can highlight the `lastName` method declaration parameter. However, if the cursor is at the `lastName` parameter inside the method declaration, we cannot just add a highlight to the `lastName` inside the query, because we'd lose all highlights provided by the Java extension. There is a workaround, of course, by trying to highlight everything that Java would highlight, but it is ugly and likely to have issues down the road | feature-request,editor-highlight | medium | Critical |
2,576,423,193 | PowerToys | Keyboard manager shortcut remapper issues | ### Microsoft PowerToys version
0.85.1
### Installation method
GitHub, PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
To navigate on my computer when writing code or anything else, I use the shortcut remapper to remap Ctrl + I to the up arrow, Ctrl + K to the down arrow, and so on (two-handed typing method).
Video for example (sorry for my English):
https://github.com/user-attachments/assets/76016c28-a3c2-4a9e-9b3b-4d0e4f8faf7f
### ✔️ Expected Behavior
Navigating letters with the up/down/left/right arrow actions accessible via shortcuts remapped to Ctrl + I/J/K/L, and navigating between words with Ctrl + arrow keys via shortcuts remapped to Ctrl + H/M (so I don't have to move my right arm to the arrow keys and can stay in the two-handed typing position). I want to be able to navigate between letters and words quickly, and do it fast without malfunctioning.
### ❌ Actual Behavior
1: Ctrl + I/J/K/L works fine, but when I speed up it doesn't work reliably. So I keep the Ctrl button held and use I/J/K/L to navigate, but when going fast, just as shown in the video, it is as if the Ctrl button isn't detected as pressed for a moment (**Found the issue and fix**: I use an Apex Pro TKL keyboard and its actuation was set very low, so when switching to another letter while holding Ctrl, two letters were considered pressed at the same time; going from J to I, both were registered because of the actuation, which cancelled the effect of the shortcut remapper. So now I don't use very low actuation...).
2: Not only that, I recently added a shortcut for Ctrl + H to be Ctrl + right arrow and Ctrl + M to be Ctrl + left arrow. But when I hold the Ctrl button pressed and navigate with H/M, then, still holding Ctrl, navigate with I/J/K/L, it just stops working at all: the Ctrl button isn't considered pressed anymore, and instead of navigating it starts writing "jiklmh" everywhere in my code or text... quite unusable. The only solution is to release the Ctrl button when switching between Ctrl + I/J/K/L and Ctrl + H/M.
3: And sometimes I can't use Ctrl anymore; clicking the J key results in Ctrl + left arrow, which is a problem because I can't type anymore and need to copy-paste this letter, and I have to use the right Ctrl instead of the left Ctrl (**found fix**: disable Keyboard Manager and re-enable it).
Please fix these issues. I love PowerToys.
video for exemple (sorry for my english)
https://github.com/user-attachments/assets/76016c28-a3c2-4a9e-9b3b-4d0e4f8faf7f
image of shortcuts remapped

### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,576,436,738 | transformers | Support for torch._dynamo.export for Phi3 | ### Feature request
Compared to `symbolic_trace`, the new (but I assume, experimental) entrypoint in `torch._dynamo.export` seems to provide a more robust way to extract modular FX graphs, that can't have any graph breaks.
I have been experimenting with some networks (Pythia, OPT, Llama, Mistral), and they all go through.
It seems that Phi3 breaks because of this line:
https://github.com/huggingface/transformers/blob/36d410dab637c133f1bb706779c75d9021d403cf/src/transformers/models/phi3/modeling_phi3.py#L213
Where `self.inv_freq` is redefined at runtime in the forward pass.
This is a bit confusing, and I would recommend dropping `self` and using a normal runtime variable.
I'm not sure if this has potential side effects.
A similar pattern seems to be repeated in other Embedding classes in Phi3.
To reproduce:
```python
model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3.5-mini-instruct")
model, guards = torch._dynamo.export(model)(**model.dummy_inputs)
```
@gante @ArthurZucker
### Motivation
Dropping the reference to `self.inv_freq` would allow to obtain a fullgraph with dynamo.
Having full FX graph is also a requirement for torch.export, although I have not tested that API.
### Your contribution
I can't directly contribute with a PR at the moment.
I could test a PR from my side to check compatibility with dynamo and potential side effects, once the PR is open. | Feature request,Deployment | low | Minor |
2,576,450,149 | go | proxy.golang.org: Intermittent TLS/Network errors with Google's Module Proxy | ### Go version
go version go1.23.1 darwin/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/amir/Library/Caches/go-build'
GOENV='/Users/amir/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/amir/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/amir/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/private/var/tmp/_bazel_amir/3dbd0b78d662a8a6e641b2d6e1f7442e/external/rules_go~~go_sdk~go_sdk'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/private/var/tmp/_bazel_amir/3dbd0b78d662a8a6e641b2d6e1f7442e/external/rules_go~~go_sdk~go_sdk/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.23.1'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/Users/amir/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOARM64='v8.0'
AR='ar'
CC='clang'
CXX='clang++'
CGO_ENABLED='1'
GOMOD='/private/var/tmp/_bazel_amir/3dbd0b78d662a8a6e641b2d6e1f7442e/execroot/_main/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/l8/54vw83s15sn1h80rkn3xxl1c0000gn/T/go-build1795123535=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
Environment:
- Bazel with latest rules_go.
- GitHub actions
Ran a `bazel build ${some_target}`
Note that I'm not sure if this problem is limited to bazel, or happens with normal go invocations as well.
### What did you see happen?
Intermittently, some modules fail to download from Google's Go Module Proxy:
```
(22:42:44) ERROR: /home/runner/.bazel/external/gazelle~~go_deps~com_github_aws_aws_sdk_go_v2_service_s3/BUILD.bazel:5:11: @@gazelle~~go_deps~com_github_aws_aws_sdk_go_v2_service_s3//:s3 depends on @@gazelle~~go_deps~com_github_aws_aws_sdk_go_v2_service_internal_checksum//:checksum in repository @@gazelle~~go_deps~com_github_aws_aws_sdk_go_v2_service_internal_checksum which failed to fetch. no such package '@@gazelle~~go_deps~com_github_aws_aws_sdk_go_v2_service_internal_checksum//': gazelle~~go_deps~com_github_aws_aws_sdk_go_v2_service_internal_checksum: fetch_repo: github.com/aws/aws-sdk-go-v2/service/internal/checksum@v1.4.1: Get "https://proxy.golang.org/github.com/aws/aws-sdk-go-v2/service/internal/checksum/@v/v1.4.1.info": net/http: TLS handshake timeout
```
### What did you expect to see?
No errors. | NeedsInvestigation,proxy.golang.org | medium | Critical |
2,576,504,071 | TypeScript | Array.isArray() : a possible fix ? + discussion on type guards (frozen keyword + better type deduction) | ### 🔍 Search Terms
in:title frozen
in:title freeze
+ I don't remember for Array.isArray()
I searched for issues on `Array.isArray()` and found a *lot* of them, too much to list them all.
3 weeks ago I suggested something that could lead to a possible fix [on an existing issue](https://github.com/microsoft/TypeScript/issues/17002#issuecomment-2366749050).
### ✅ Viability Checklist
- [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [x] This wouldn't change the runtime behavior of existing JavaScript code
- [x] This could be implemented without emitting different JS based on the types of the expressions
- [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
The type guard for `Array.isArray()` is currently erroneous, and the fix is potentially quite complex.
In retrospect, I think the potential fix I suggested previously is more of a "workaround" (still, you can take a look at it), and that there are deeper issues with type guards.
I'd like to discuss here what the behavior of `Array.isArray()` should be in different cases, and to discuss possibilities for simplifying the current "workaround" by improving type guard behavior.
Summary:
- a `frozen` keyword setting some properties to `never` (alternatively, it could also apply a `frozen` state, e.g. to some functions, preventing their call/making them non-callable) to act as a constraint, as opposed to `readonly`, which is only a partial interface. This could help solve the current issue of using `readonly` as a constraint, which generates a lot of unsoundness, and would also fix the use of `readonly` arrays with `Array.isArray()`.
- better deduction of generic types could also improve type guards.
First, for the sake of simplicity, let's assume:
```ts
function isArray(a: unknown): a is unknown[];
type A = typeof a;
Is <A, unknown[]> // the type of a if isArray() is TRUE
IsNot<A, unknown[]> // the type of a if isArray() is FALSE
```
In the general case:
```ts
Is <T, U> = T&U;
IsNot<T, U> = Exclude<T,U>;
```
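As a quick sanity check, these two helpers can be written as ordinary conditional/utility types today (a sketch using the names assumed above):

```typescript
// Sketch of the helper shapes assumed above.
type Is<T, U> = T & U;
type IsNot<T, U> = Exclude<T, U>;

// Type-level expectations, observable through plain values:
type A = Is<string | number, number>;    // number
type B = IsNot<string | number, number>; // string

const a: A = 42;
const b: B = "hello";
console.log(typeof a, typeof b); // number string
```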
Of course, type guards need to do more than that, and are currently doing more, but not enough.
**Union:**
```ts
Is<T1|T2, U> = Is<T1, U> | Is<T2, U>;
```
Currently, type guards and `&` seem to behave as expected.
Currently, `(T1|T2)&U` is distributed as `(T1&U) | (T2&U)` to remove some `never`, then factored when the type is printed.
**Child class :**
```ts
Is<T extends U, U> = T;
```
Currently, type guards behave as expected, **but** `&` doesn't (though this is not an issue).
```ts
class A<T> extends Array<T> { /* ... */ }
type T = A<any> & Array<any>; // A<any> expected, got A<any> & any[].
```
**Base type**
```ts
Is<T, U extends T> = U;
```
Currently, type guards behave as expected, **but** `&` doesn't (though this is not an issue).
```ts
interface A {
get length(): number;
}
type C = A & Array<any>; // expected Array<any>, got Array<any> & A.
```
**Readonly:**
There are 2 ways to see `readonly`:
- as a constraint : `readonly T & T = readonly T`.
- as a partial interface : `readonly T & T = T`.
For TS, it is seen as a partial interface, therefore: `readonly T & T = T`.
This is quite confusing since, in practice, we mainly use it as a constraint; still, this is a design choice, why not.
The issue is that the type of `Object.freeze([])` is also a `readonly []`, where it is not a partial interface BUT a **constraint**.
This is an inconsistency in the design, which could be solved with e.g. a `frozen` keyword: `frozen number[]` would set some properties/methods to `never` instead of simply removing them:
```ts
type A = {
a: 3,
b: 4
}
type Excl<T, keys> = {
[K in keyof T]: K extends keys ? never : T[K]
}
type B = Excl<A, "b">;
// or type B = number [] & { push: never };
type C = B&A;
declare function f<T>(): T; // some function returning a C
let c = f<C>();
c.b // never
```
Currently, `Array.isArray(readonly T[])` asserts the value as being `any[]`, which is wrong for 2 reasons:
1. the generic type information is lost.
2. the `readonly` information is lost.
I argue that since `Array.isArray(Object.freeze([]))` returns true, we shouldn't remove the `readonly` keyword.
But should this be handled at the type guard level, or at the `Array.isArray()` call signature level?
On the one hand, `readonly` is only a partial interface; on the other hand, it is often used as a constraint (a `frozen` keyword would solve this).
Also, `readonly` lives at the type level, while the type guard function operates on the runtime value.
Therefore, without `frozen`, there are 4 solutions:
1. Add `readonly` at the `Array.isArray()` level, and require other devs to do so for their type guards.
2. Handle `readonly` at the type guard level, with `Is<readonly T, U> = readonly (T&U)`, which would be ambiguous as readonly isn't a constraint, but a partial interface.
3. In type guards, if `T extends readonly U`, makes `U` implicitly `readonly`. `Is<T extends readonly U, U> = T & readonly U` otherwise `Is<T,U> = T&U`, which might also be confusing.
4. Assume that type guards offer information on the runtime value, but not on the desired TS type, i.e. : `Is<T, U extends T> = T` and requires an explicit cast to get a `U` (which would always be legit), which would be horrible.
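For reference, solution 1 can already be approximated in user code; a minimal, illustrative sketch (the guard name is mine, not a proposed API) that keeps the `readonly` modifier when narrowing:

```typescript
// Illustrative sketch of solution 1: a wrapper guard whose predicate keeps
// the `readonly` modifier instead of widening to any[].
function isReadonlyArray(a: unknown): a is readonly unknown[] {
  return Array.isArray(a);
}

const frozen: readonly number[] = Object.freeze([1, 2, 3]);

function describe(value: unknown): string {
  if (isReadonlyArray(value)) {
    // value is `readonly unknown[]` here; mutating methods like push are
    // not part of the narrowed type.
    return `array of length ${value.length}`;
  }
  return "not an array";
}

console.log(describe(frozen)); // array of length 3
console.log(describe("hi")); // not an array
```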
And now the good stuff, with generics...
**Base type + generics**
```ts
interface A {
push(...args: number[]): void;
pop(): number|undefined;
}
type X = Array<number> extends A ? true : false; // true
let a = f<A>();
if(Array.isArray(a) )
a; // any[] <- should be number[]
```
We should try to assert the generic types :
```ts
// with U<number> extends T;
Is<T, U<unknown>> = Is<Partial<U<number>>, U<unknown>> = U<unknown&number> = U<number>;
```
I think this type deduction is technically possible in a lot of cases, and would simplify a lot of type guards that use generics.
The issue is to determine when the following step would be legal in a type guard:
```ts
Is<Partial<U<number>>, U<unknown>> = U<unknown&number>
```
Maybe if, and only if, `U<unknown&number> extends U<unknown>`?
We could even be more generic :
```ts
// with U<number> extends Pick<T, keyof U>;
Is<T, U<unknown>> = Is<Partial<U<number>>, U<unknown>> & T = U<unknown&number> & T = U<number> & T;
```
When we can't deduce, I suggest:
- `Is<unknown, U<V>> = U<V>`
- `Is<any, U<V>> = U<V>`, but `U<any>` if `V = unknown`.
- `Is<{}, U<V>> = U<V>`.
This issue also occurs with `readonly unknown[]`, as it can be seen as a base type of `unknown[]`.
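Until such deduction exists, a hand-written generic guard is the usual workaround; a hedged sketch (names are illustrative) that preserves both the element type and the `readonly` modifier:

```typescript
// Illustrative workaround: T is supplied explicitly at the call site here
// for clarity, so the narrowed type stays `readonly number[]` rather than
// collapsing to any[].
function isReadonlyArrayOf<T>(a: readonly T[] | unknown): a is readonly T[] {
  return Array.isArray(a);
}

const x: readonly number[] | string = Object.freeze([1, 2, 3]);
if (isReadonlyArrayOf<number>(x)) {
  // x is narrowed to readonly number[]; element operations type-check.
  console.log(x.reduce((s, n) => s + n, 0)); // 6
}
```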
### 📃 Motivating Example
This would lead to more precise type deduction in type guards.
### 💻 Use Cases
1. What do you want to use this for?
Deducing more precise types.
2. What shortcomings exist with current approaches?
Deduced types are incorrect or not precise enough.
3. What workarounds are you using in the meantime?
Complex type guards.
| Suggestion,Awaiting More Feedback | low | Minor |
2,576,527,200 | langchain | Dall-E Image Generator DallEAPIWrapper Error: openai.AuthenticationError | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following LangChain code failed to generate an image:
```python
from langchain_community.utilities.dalle_image_generator import DallEAPIWrapper
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI
import os
os.environ["OPENAI_API_KEY"] = "********"
from langchain.chains import LLMChain
llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
input_variables=["image_desc"],
template="Generate a detailed prompt to generate an image based on the following description: {image_desc}",
)
chain = LLMChain(llm=llm, prompt=prompt)
image_url = DallEAPIWrapper().run(chain.run("halloween night at a haunted museum"))
print(image_url)
```
The following OpenAI code generated an image successfully:
``` python
from openai import OpenAI
import os
os.environ["OPENAI_API_KEY"] = "****"
client = OpenAI()
response = client.images.generate(
model="dall-e-3",
prompt="a white siamese cat",
size="1024x1024",
quality="standard",
n=1,
)
image_url = response.data[0].url
print(image_url)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "D:\software\JetBrains\PyCharm Community Edition 2024.2.3\plugins\python-ce\helpers\pydev\pydevd.py", line 1570, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\software\JetBrains\PyCharm Community Edition 2024.2.3\plugins\python-ce\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "E:\workspace\python-projects\AI-Development\agents\dalle_image.py", line 21, in <module>
image_url = DallEAPIWrapper().run(chain.run("halloween night at a haunted museum"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\workspace\python-projects\AI-Development\.venv\Lib\site-packages\langchain_community\utilities\dalle_image_generator.py", line 143, in run
response = self.client.generate(
^^^^^^^^^^^^^^^^^^^^^
File "E:\workspace\python-projects\AI-Development\.venv\Lib\site-packages\openai\resources\images.py", line 264, in generate
return self._post(
^^^^^^^^^^^
File "E:\workspace\python-projects\AI-Development\.venv\Lib\site-packages\openai\_base_client.py", line 1270, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\workspace\python-projects\AI-Development\.venv\Lib\site-packages\openai\_base_client.py", line 947, in request
return self._request(
^^^^^^^^^^^^^^
File "E:\workspace\python-projects\AI-Development\.venv\Lib\site-packages\openai\_base_client.py", line 1051, in _request
raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'error': {'code': 'invalid_api_key', 'message': 'Incorrect API key provided: **********. You can find your API key at https://platform.openai.com/account/api-keys.', 'param': None, 'type': 'invalid_request_error'}}
### Description
I wrote a demo following the guideline https://python.langchain.com/docs/integrations/tools/dalle_image_generator/#run-as-a-chain, but it always fails with the error: "openai.AuthenticationError: Error code: 401 - {'error': {'code': 'invalid_api_key', 'message': 'Incorrect API key provided: **********. You can find your API key at https://platform.openai.com/account/api-keys.', 'param': None, 'type': 'invalid_request_error'}}".
I debugged the program into _base_client.py and found that the OpenAI image-generation API URL is wrong, as shown in the screen capture below.
The same OPENAI_API_KEY succeeds in the OpenAI sample above.

### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.12.6 (tags/v3.12.6:a4a2d2b, Sep 6 2024, 20:11:23) [MSC v.1940 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.3.10
> langchain: 0.3.3
> langchain_community: 0.3.2
> langsmith: 0.1.131
> langchain_anthropic: 0.2.3
> langchain_chroma: 0.1.4
> langchain_openai: 0.2.2
> langchain_text_splitters: 0.3.0
> langgraph: 0.2.34
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.9
> anthropic: 0.35.0
> async-timeout: Installed. No version info available.
> chromadb: 0.5.11
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> fastapi: 0.115.0
> httpx: 0.27.2
> jsonpatch: 1.33
> langgraph-checkpoint: 2.0.0
> numpy: 1.26.4
> openai: 1.51.0
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.9.2
> pydantic-settings: 2.5.2
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.35
> tenacity: 8.5.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
| 🤖:bug | low | Critical |
2,576,543,793 | flutter | Re-support backgrounded video play in Android (post-`ImageReader`) | As of the migration from `SurfaceTexture` to `ImageReader`, we inadvertently broke the ability to run videos in the background, because when the application is backgrounded it typically receives a `TRIM_MEMORY(BACKGROUND)` event, which in turn causes the surface to be released.
For example, [this option is no longer functional](https://pub.dev/documentation/video_player/latest/video_player/VideoPlayerOptions/allowBackgroundPlayback.html):
```dart
VideoPlayerOptions({
bool allowBackgroundPlayback = false,
})
```
@johnmccutchan mentioned that the particular workaround was for Android 14, and it's possible that in other versions (i.e. Android 15 or newer), we could support background playback (by not releasing the texture). However, I want to make sure (a) I understand this correctly, (b) we consider this a problem, and (c) have a plan on how to fix this - for now it's likely I need to just make a note this functionality is unsupported on Android (or folks will need to fallback to an older pre-ImageReader version of the plugin, which is not ideal). | c: regression,platform-android,p: video_player,P2,team-engine,triaged-engine,p: waiting for stable update | low | Major |
2,576,555,716 | flutter | `allowBackgroundPlayback` no longer works in `video_player_android` | This is a placeholder issue to track the fact that `allowBackgroundPlayback` is non-functional in `video_player_android`.
We are discussing technical solutions in https://github.com/flutter/flutter/issues/156488, this issue will track actually fixing the plugin. | c: regression,platform-android,p: video_player,P2,blocked,team-engine,triaged-engine | low | Minor |
2,576,557,078 | vscode | Multiline copy and repeated pasting | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
I'd like to be able to replicate what you can do in Sublime Text 4 with multiline copying and pasting.
When copying multiple lines and pasting them onto an exact multiple of that many lines (e.g. copying 4 lines and pasting onto 8), I'd like the copied lines to be pasted repeatedly, cycling over all destination lines.
EG:
Copy these 4 lines
```
1
2
3
4
```
Paste them aside these 8 lines:
```
a
b
c
d
e
f
g
h
```
Obtain the following:
```
a1
b2
c3
d4
e1
f2
g3
h4
```
Unfortunately, VSCode does this:
```
a1
2
3
4
b1
2
3
4
c1
2
3
4
d1
2
3
4
e1
2
3
4
f1
2
3
4
g1
2
3
4
h1
2
3
4
``` | feature-request,editor-clipboard | low | Minor |
2,576,557,642 | pytorch | [RFC] Make Flight Recorder device generic | ### 🚀 The feature, motivation and pitch
Flight Recorder is embodied by the `NCCLTraceBuffer` class.
A brief look suggests that it may not be too hard to make it device-generic.
For example:
`using Event = at::cuda::CUDAEvent` -->
`using Event = c10::Event`
We may need to articulate what the `record` API means for generic devices, though: should FR create events internally and record them, or accept pre-made, device-specific events from outside? TBD.
### Additional context
cc: @fduwjj @c-p-i-o @wconstab
cc @XilunWu @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged | low | Minor |
2,576,621,217 | tauri | [bug] Resizing the dev tools on Kubuntu 24.04 causes a memory leak. | ### Describe the bug
Resizing the window while the dev tools are open, or resizing just the dev tools, causes the program to continuously consume memory.
### Reproduction
1. create a new app with `bun create tauri-app@latest`
2. `bun install`
3. `bun run tauri dev`
4. Right click > Inspect Element
5. Resize the window continuously for a few seconds
You should see memory usage increase without ever going back down.
### Expected behavior
The app should not consume and hold on to memory.
### Full `tauri info` output
```text
[✔] Environment
- OS: Ubuntu 24.4.0 x86_64 (X64)
✔ webkit2gtk-4.1: 2.44.3
✔ rsvg2: 2.58.0
✔ rustc: 1.83.0-nightly (0ee7cb5e3 2024-09-10)
✔ cargo: 1.83.0-nightly (c1fa840a8 2024-08-29)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: nightly-x86_64-unknown-linux-gnu (default)
- node: 21.4.0
- pnpm: 9.6.0
- yarn: 1.22.22
- npm: 10.2.5
- bun: 1.1.10
[-] Packages
- tauri 🦀: 2.0.2
- tauri-build 🦀: 2.0.1
- wry 🦀: 0.44.1
- tao 🦀: 0.30.3
- @tauri-apps/api : 2.0.2
- @tauri-apps/cli : 2.0.2
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.1
- @tauri-apps/plugin-shell : 2.0.0
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
This might be an issue with WebKitGTK, but I have no idea. | type: bug,status: upstream,platform: Linux,status: needs triage | low | Critical |
2,576,639,767 | rust | Compiler recognition error for variable changes | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
I tried this code:
```rust
let data = match response.json::<serde_json::Value>().await {
Ok(json_data) => json_data,
Err(_) => json!({
"server_error": "Can't parse json"
})
};
if data.get("server_error").is_some() {
let error = data["server_error"].as_str().unwrap_or("unknown");
return ServerStatus::error(error.to_string());
}else{
match serde_json::from_value::<ServerStatus>(data.clone()) {
Ok(status) => status,
Err(_) => ServerStatus::from_json(data),
}
}
```
I expected to see this happen: *the code compiles cleanly*
Instead, this happened:
*Compiler warning: ```value assigned to `data` is never read, maybe it is overwritten before being read?```*
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.81.0 (eeb90cda1 2024-09-04)
binary: rustc
commit-hash: eeb90cda1969383f56a2637cbd3037bdf598841c
commit-date: 2024-09-04
host: aarch64-apple-darwin
release: 1.81.0
LLVM version: 18.1.7
```
| S-needs-repro | low | Critical |
2,576,661,300 | TypeScript | Destructuring a possibly `undefined` object not caught, throws an error at runtime | ### 🔎 Search Terms
`destructuring`, `Cannot destructure as it is undefined`, `optional chaining destructuring`
### 🕗 Version & Regression Information
- This changed between versions ______ and _______
- This changed in commit or PR _______
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about _________
- I was unable to test this on prior versions because _______
### ⏯ Playground Link
https://www.typescriptlang.org/play/?ts=5.6.2#code/MYewdgzgLgBAlmADgVygLhgbwLACgYEwCGA-BpjAEZlYzAbQBOCA5jAL4d6cC8tRGZGAAmAUwBmCUcK648oSLAoA6VZQ4w+CFFGWlllIA
### 💻 Code
```ts
const input: {
a?: { b?: { c: string } }
} = { a: undefined }
const { ...b } = input.a?.b
```
### 🙁 Actual behavior
No error reported by TypeScript.
### 🙂 Expected behavior
TypeScript should report an error here, since this expression might result in `undefined`, which cannot be destructured.
Similar to this:

> Rest types may only be created from object types.(2700)
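A minimal workaround sketch (shape taken from the example above): default the possibly-`undefined` source to an empty object before rest-destructuring, which avoids the runtime `TypeError`:

```typescript
const input: { a?: { b?: { c: string } } } = { a: undefined };

// `?? {}` makes the rest pattern safe when the optional chain yields undefined.
const { ...b } = input.a?.b ?? {};

console.log(Object.keys(b).length); // 0
```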
### Additional information about the issue
The error in the JavaScript being run is:
> Uncaught TypeError: Cannot destructure '(intermediate value)' as it is undefined. | Bug,Help Wanted | low | Critical |