| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,469,797,100 | deno | node:http2 stream error when using npm:firebase-admin | I ran the same code maybe 2-4 weeks ago on Deno Deploy and it worked, but now I run it from my own server and get this error. I saw similar issues raised before (errors with node:http2), but not with the same error message.
I run with `-A --unstable-kv --unstable-cron`, and I also tried with just `--unstable`, but that did not work either.
I use Fresh: `"$fresh/": "https://deno.land/x/fresh@1.6.8/",`
Not sure if it's relevant, but I have Deno listening on port 8000, and then I ran:
```bash
iptables -t nat -A OUTPUT -o lo -p tcp --dport 443 -j REDIRECT --to-port 8080
```
And I have a secure connection.
Version: Deno 1.45.5
```ts
import admin from "npm:firebase-admin";
import serviceAccount from "...[file path]..." with { type: "json" };
// Init
admin.initializeApp({
  credential: admin.credential.cert(serviceAccount as admin.ServiceAccount)
});

// In some function later in the file
async function f() {
  const messages = [
    {
      token: "......[the actual token].....",
      notification: {
        title: "...[some title]...",
        body: "...[some text]..."
      },
      data: { surveys: "...[some data]..." }
    }
  ];
  await admin.messaging().sendEach(messages); // This errors
}
```
```bash
error: Uncaught (in promise) Error: stream error received: unspecific protocol error detected
at node:http2:735:50
at ClientHttp2Stream._read (node:http2:761:7)
at ClientHttp2Stream.Readable.read (ext:deno_node/_stream.mjs:2996:16)
at resume_ (ext:deno_node/_stream.mjs:3346:16)
at processTicksAndRejections (ext:deno_node/_next_tick.ts:33:15)
at runNextTicks (ext:deno_node/_next_tick.ts:71:3)
at eventLoopTick (ext:core/01_core.js:175:21)
``` | node:http | low | Critical |
2,469,802,928 | pytorch | Can pytorch add sparse linear solvers like scipy.sparse.linalg.gmres, scipy.sparse.linalg.bicg etc. | ### 🚀 The feature, motivation and pitch
I am a computational fluid dynamics (CFD) software developer. PyTorch has really helped me use the GPU to accelerate computation. However, in CFD and other mechanics domains, we usually store the coefficient matrix in a sparse format like CSR and solve linear equation systems. I am wondering if PyTorch could add a sparse solver that takes the sparse coefficient matrix together with a dense right-hand-side vector and outputs a dense solution vector. SciPy already has such solvers, but it does not support GPU acceleration. I hope PyTorch can add them.
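To make the requested interface concrete, here is a minimal pure-Python sketch of a conjugate-gradient solve over a CSR-format matrix. The names `csr_matvec` and `sparse_cg` are hypothetical illustrations, not an existing or proposed PyTorch API; a real implementation would operate on torch sparse CSR tensors and run on the GPU.

```python
def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a matrix stored in CSR form (data, indices, indptr)."""
    n = len(indptr) - 1
    y = [0.0] * n
    for row in range(n):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

def sparse_cg(data, indices, indptr, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient for a symmetric positive-definite A in CSR form."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                      # residual r = b - A @ x, with x = 0
    p = list(r)
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = csr_matvec(data, indices, indptr, p)
        alpha = rs_old / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# 2x2 SPD system: [[4, 1], [1, 3]] @ x = [1, 2]; exact solution [1/11, 7/11]
data, indices, indptr = [4.0, 1.0, 1.0, 3.0], [0, 1, 0, 1], [0, 2, 4]
x = sparse_cg(data, indices, indptr, [1.0, 2.0])
```

The point is the signature: sparse matrix in, dense right-hand side in, dense solution out, exactly like `scipy.sparse.linalg.gmres`/`bicg`.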
### Alternatives
_No response_
### Additional context
_No response_
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip @jianyuh @mruberry @walterddr @xwang233 @Lezcano
| module: sparse,triaged,module: linear algebra | low | Major |
2,469,823,615 | next.js | Next.js bundles libraries only used in Server Components into the Client Bundle Chunks | ### Link to the code that reproduces this issue
https://github.com/phryneas/apollo-client-nextjs-reproduction-341
### To Reproduce
The reproduction is a blank app created with `yarn create next-app` - all differences from a new repo are [in this commit](https://github.com/phryneas/apollo-client-nextjs-reproduction-341/commit/8b39385ac546e1e3d5b5921bd758ef3bbe237724)
1. `npm install`
2. `npm run build`
3. `grep -rl watchQuery .next/static/chunks` -> `.next/static/chunks/959-17277ce4ae2a5d3b.js` (`watchQuery` is an Apollo Client api, so it's easily identifiable by that even when class names are mangled)
4. `npm run start`
5. open http://localhost:3000 in the browser
6. verify that `959-17277ce4ae2a5d3b.js` is indeed sent to the browser
### Current vs. Expected behavior
Apollo Client should not end up in the Client Chunks when only referenced from Server Components.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 22.6.0
npm: 10.8.2
Yarn: 1.22.22
pnpm: N/A
Relevant Packages:
next: 14.2.5 // Latest available version is detected (14.2.5).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: N/A
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
SWC
### Which stage(s) are affected? (Select all that apply)
next build (local), next start (local)
### Additional context
This was brought to our attention in https://github.com/apollographql/apollo-client-nextjs/issues/341 | bug,SWC | low | Minor |
2,469,831,082 | tauri | [bug] Cross-platform compilation from linux to Windows doesn't work | ### Describe the bug
Cross-platform compilation does not work:
```bash
cargo tauri build --runner cargo-xwin --target x86_64-pc-windows-msvc
```
The script keeps giving an error:
```bash
warning 5202: -OUTPUTCHARSET is disabled for non Win32 platforms.
Error failed to bundle project:
- `Invalid cross-device link (os error 18)`
```
### Reproduction
0. Linux debian:bookworm-slim
1. Create a fresh project as described here: https://v2.tauri.app/start/create-project/
2. Install the required packages as described here https://tauri.app/v1/guides/getting-started/prerequisites#1-system-dependencies and here https://tauri.app/v1/guides/building/cross-platform#experimental-build-windows-apps-on-linux-and-macos
3. Try to build a Windows .exe file from Linux using `cargo tauri build --ci --runner cargo-xwin --target x86_64-pc-windows-msvc`
### Expected behavior
The build should complete successfully.
### Full `tauri info` output
```text
[✔] Environment
- OS: Debian 12.0.0 X64
✔ webkit2gtk-4.1: 2.44.2
✔ rsvg2: 2.54.7
✔ rustc: 1.80.1 (3f5fd8dd4 2024-08-06)
✔ cargo: 1.80.1 (376290515 2024-07-16)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: 1.80.1-x86_64-unknown-linux-gnu (environment override by RUSTUP_TOOLCHAIN)
- node: 18.19.0
- npm: 9.2.0
[-] Packages
- tauri [RUST]: 2.0.0-rc (no lockfile)
- tauri-build [RUST]: no manifest (no lockfile)
- wry [RUST]: no manifest (no lockfile)
- tao [RUST]: no manifest (no lockfile)
- tauri-cli [RUST]: 2.0.0-rc.3
- @tauri-apps/api [NPM]: 2.0.0-rc.0
- @tauri-apps/cli [NPM]: 2.0.0-rc.3
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
- bundler: Vite
```
### Stack trace
```text
Running makensis.exe to produce /app/src-tauri/target/x86_64-pc-windows-msvc/release/bundle/nsis/myapp.x.x.x_x64-setup.exe
warning 5202: -OUTPUTCHARSET is disabled for non Win32 platforms.
Error failed to bundle project:
- `Invalid cross-device link (os error 18)`
```
### Additional context
1. The Linux build runs correctly and works as expected, which is great
2. I am using Rust 1.80.1 and also checked manually that `TMPDIR=/var/tmp` (as was discussed here - https://github.com/tauri-apps/tauri/issues/4500).
2,469,833,040 | node | Allow shake128/256 to produce outputs of unlimited length | ### What is the problem this feature will solve?
SHAKE128 is essentially a hash algorithm that can output a stream of unlimited length. Rather than being just a hash algorithm, it's more like a stream cipher such as RC4. Support for SHAKE128/256 is currently available, but the implementation requires specifying an `outputSize` in advance, which is fixed and limited. This doesn't meet my needs, so I have to rely on third-party libraries.
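For illustration only (using Python's `hashlib` here, which exposes the same fixed-output-length interface): SHAKE is defined as a single unbounded output stream, and a fixed output size merely truncates that stream, so any shorter digest is a prefix of a longer one. That prefix property is exactly why a streaming interface is natural.

```python
import hashlib

msg = b"extendable-output functions"

# Request two different output lengths for the same input.
short = hashlib.shake_128(msg).hexdigest(16)   # 16 bytes -> 32 hex chars
longer = hashlib.shake_128(msg).hexdigest(64)  # 64 bytes -> 128 hex chars

# A fixed output length just truncates the unbounded SHAKE stream,
# so the shorter digest is a prefix of the longer one.
assert longer.startswith(short)
```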
### What is the feature you are proposing to solve the problem?
Support for SHAKE was first introduced in 2019, but I believe this support is incomplete.
https://github.com/nodejs/node/issues/28757
https://github.com/nodejs/node/pull/28805
### What alternatives have you considered?
_No response_ | crypto,feature request | low | Major |
2,469,864,553 | godot | Weird shadows on specific camera rotation degrees | ### Tested versions
Godot 4.3 stable, Godot 4.3 rc3
### System information
Windows 11 - Godot v4.3.stable - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060ti (552.12 game ready driver) - AMD Ryzen 5 5600X CPU @ 4.00GHz (12 Threads)
### Issue description
I built a simple level with a plane as the floor and a cube as an object, added an omni light, and enabled shadows. I noticed that when the camera rotation is set to about -90 degrees, the shadows become weird. If I rotate the camera a few degrees left or right, the shadows become normal, but if I return the camera rotation to about -90 degrees, the shadows become weird again.
### Steps to reproduce
Start the MRP project.
Using the keyboard arrows, rotate the camera. (By default the camera's rotation is set to -90 degrees; the weird shadow changes can be seen if the camera rotates a few degrees to the left or right.)
### Minimal reproduction project (MRP)
[weird_shadows_MRP.zip](https://github.com/user-attachments/files/16635288/weird_shadows.zip)
| bug,topic:rendering,confirmed,topic:3d | low | Minor |
2,469,869,664 | pytorch | [torch.jit.trace] error encountered when tracing model with weight clamping during forward | ### 🐛 Describe the bug
I encountered an error while using `torch.jit.trace()` to trace a model that performs weight clamping during the forward pass. The simplified code is as follows:
```python
import torch
import torch.nn as nn
class ClampedFeedForward(nn.Module):
    def __init__(self, input_size, hidden_size):
        super(ClampedFeedForward, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)

    def clamp_weights(self):
        with torch.no_grad():
            for param in self.fc1.parameters():
                clamped_param = torch.clamp(param, -1, 1)
                param.copy_(clamped_param)

    def forward(self, x):
        x = self.fc1(x)
        self.clamp_weights()
        return x

input_size, hidden_size = 10, 5
model = ClampedFeedForward(input_size, hidden_size)
example_input = torch.randn(1, input_size)
traced_model = torch.jit.trace(model, example_input)
output = traced_model(example_input)
```
The error messages are as follows:
```
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
<ipython-input-4-83f5f7464ca1>(13): clamp_weights
<ipython-input-4-83f5f7464ca1>(18): forward
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1726): _slow_forward
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1747): _call_impl
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1736): _wrapped_call_impl
/usr/local/lib/python3.10/dist-packages/torch/jit/_trace.py(1278): trace_module
/usr/local/lib/python3.10/dist-packages/torch/jit/_trace.py(698): _trace_impl
/usr/local/lib/python3.10/dist-packages/torch/jit/_trace.py(1002): trace
<ipython-input-4-83f5f7464ca1>(27): <cell line: 27>
/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py(3553): run_code
/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py(3473): run_ast_nodes
/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py(3257): run_cell_async
/usr/local/lib/python3.10/dist-packages/IPython/core/async_helpers.py(78): _pseudo_sync_runner
/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py(3030): _run_cell
/usr/local/lib/python3.10/dist-packages/IPython/core/interactiveshell.py(2975): run_cell
/usr/local/lib/python3.10/dist-packages/ipykernel/zmqshell.py(539): run_cell
/usr/local/lib/python3.10/dist-packages/ipykernel/ipkernel.py(302): do_execute
/usr/local/lib/python3.10/dist-packages/tornado/gen.py(234): wrapper
/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py(539): execute_request
/usr/local/lib/python3.10/dist-packages/tornado/gen.py(234): wrapper
/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py(261): dispatch_shell
/usr/local/lib/python3.10/dist-packages/tornado/gen.py(234): wrapper
/usr/local/lib/python3.10/dist-packages/ipykernel/kernelbase.py(361): process_one
/usr/local/lib/python3.10/dist-packages/tornado/gen.py(786): run
/usr/local/lib/python3.10/dist-packages/tornado/gen.py(825): inner
/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py(738): _run_callback
/usr/local/lib/python3.10/dist-packages/tornado/ioloop.py(685): <lambda>
/usr/lib/python3.10/asyncio/events.py(80): _run
/usr/lib/python3.10/asyncio/base_events.py(1909): _run_once
/usr/lib/python3.10/asyncio/base_events.py(603): run_forever
/usr/local/lib/python3.10/dist-packages/tornado/platform/asyncio.py(195): start
/usr/local/lib/python3.10/dist-packages/ipykernel/kernelapp.py(619): start
/usr/local/lib/python3.10/dist-packages/traitlets/config/application.py(992): launch_instance
/usr/local/lib/python3.10/dist-packages/colab_kernel_launcher.py(37): <module>
/usr/lib/python3.10/runpy.py(86): _run_code
/usr/lib/python3.10/runpy.py(196): _run_module_as_main
RuntimeError: a leaf Variable that requires grad is being used in an in-place operation.
```
The error is reproducible with the nightly-build version `2.5.0.dev20240815+cpu` . Please find the colab [here](https://colab.research.google.com/drive/18KkdHcgheY-a309kZlrewWtXDwMpS5rD?usp=sharing).
### Versions
PyTorch version: 2.5.0.dev20240815+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.3 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-107-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6326 CPU @ 2.90GHz
Stepping: 6
CPU MHz: 3500.000
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 1.5 MiB
L1i cache: 1 MiB
L2 cache: 40 MiB
L3 cache: 48 MiB
NUMA node0 CPU(s): 0-15,32-47
NUMA node1 CPU(s): 16-31,48-63
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI Syscall hardening, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.16.2
[pip3] onnxruntime==1.19.0
[pip3] onnxscript==0.1.0.dev20240816
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.5.0.dev20240815+cu121
[pip3] torch-xla==2.4.0
[pip3] torch_xla_cuda_plugin==2.4.0
[pip3] torchaudio==2.4.0.dev20240815+cu121
[pip3] torchvision==0.20.0.dev20240815+cu121
[pip3] triton==3.0.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi
[conda] torch 2.5.0.dev20240815+cu121 pypi_0 pypi
[conda] torch-xla 2.4.0 pypi_0 pypi
[conda] torch-xla-cuda-plugin 2.4.0 pypi_0 pypi
[conda] torchaudio 2.4.0.dev20240815+cu121 pypi_0 pypi
[conda] torchvision 0.20.0.dev20240815+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel | oncall: jit | low | Critical |
2,469,874,609 | go | proposal: x/sync/errgroup: add `(Try?)Go` variants that return channel closed after `f` terminates | ### Proposal Details
Add the following to `errgroup.Group` (note: I don't really like the names, bikeshedding is welcome):
```go
// GoChan is like Go, but returns a channel that is closed after f has returned.
func (g *Group) GoChan(f func() error) <-chan struct{}
// TryGoChan is like TryGo but, if f was started, it returns a non-nil channel like GoChan.
func (g *Group) TryGoChan(f func() error) <-chan struct{}
```
The channel returned by these variants is closed after `f` terminates and, crucially, *after* `cancel` has been called on the context (if necessary).
I just sketched out the idea [here](https://github.com/golang/sync/commit/b3f13a47390c12f17aab32df1d17f9a1feca17e1) (untested! it's just to discuss the potential semantics).
My main usecase for this is in the following (simplified) scenario:
```go
ch := eg.GoChan(func() error {
    // ...
    return nil
})
eg.Go(func() error {
    // do something ...
    select {
    case <-ctx.Done():
        return ctx.Err()
    case <-ch:
    }
    // do something else, but only after the first function has completed
    return nil
})
```
(this example above is extremely simplified, in reality there are often many more calls, sometimes not even known statically, and with more complex relationships between functions, so manually finding an order becomes unwieldy fast - and especially maintaining that code becomes much harder) | Proposal | low | Critical |
2,469,883,386 | flutter | Slider tickMarkShape isn't drawn | ### Steps to reproduce
1. Create a slider with a custom tickMarkShape
2. Launch on iOS
### Expected results
The tickMarkShape renders on iOS like it does on web/Android.
### Actual results
The tickMarkShape doesn't render on iOS.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'dart:math';

import 'package:flutter/material.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
        useMaterial3: true,
      ),
      home: const MyHomePage(title: 'Flutter Demo Home Page'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  const MyHomePage({super.key, required this.title});

  final String title;

  @override
  State<MyHomePage> createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  double? sliderValue;

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Center(
        child: _StarSlider(
          onChanged: (value) => setState(() => sliderValue = value),
          value: sliderValue,
        ),
      ),
    );
  }
}

class _StarSlider extends StatelessWidget {
  const _StarSlider({
    required this.onChanged,
    required this.value,
    super.key,
  });

  final ValueChanged<double> onChanged;
  final double? value;

  @override
  Widget build(BuildContext context) {
    return SliderTheme(
      data: SliderTheme.of(context).copyWith(
        thumbShape: const _PolygonSliderThumb(radius: 16),
        tickMarkShape: const _StarTickMarkShape(radius: 16),
        valueIndicatorColor: value == null
            ? Colors.transparent
            : Theme.of(context).colorScheme.tertiary,
        activeTrackColor: Colors.transparent,
        inactiveTrackColor: Colors.transparent,
        activeTickMarkColor: value == null
            ? null
            : Theme.of(context).colorScheme.onSurfaceVariant,
      ),
      child: Slider(
        value: value ?? 1,
        onChanged: onChanged,
        min: 1,
        max: 5,
        divisions: 4,
      ),
    );
  }
}

class _PolygonSliderThumb extends SliderComponentShape {
  final double radius;

  const _PolygonSliderThumb({required this.radius});

  @override
  Size getPreferredSize(bool isEnabled, bool isDiscrete) {
    return Size.fromRadius(radius);
  }

  @override
  void paint(
    PaintingContext context,
    Offset center, {
    required Animation<double> activationAnimation,
    required Animation<double> enableAnimation,
    required bool isDiscrete,
    required TextPainter labelPainter,
    required RenderBox parentBox,
    required SliderThemeData sliderTheme,
    required TextDirection textDirection,
    required double value,
    required double textScaleFactor,
    required Size sizeWithOverflow,
  }) {
    Paint paint = Paint()
      ..color = sliderTheme.valueIndicatorColor ?? Colors.blue
      ..style = PaintingStyle.fill;
    final path = _getStarPath(radius: radius, center: center);
    context.canvas.drawPath(path, paint);
  }
}

class _StarTickMarkShape extends SliderTickMarkShape {
  final double radius;

  const _StarTickMarkShape({required this.radius});

  @override
  Size getPreferredSize({
    required SliderThemeData sliderTheme,
    required bool isEnabled,
  }) {
    assert(sliderTheme.trackHeight != null);
    return Size.fromRadius(radius);
  }

  @override
  void paint(
    PaintingContext context,
    Offset center, {
    required RenderBox parentBox,
    required SliderThemeData sliderTheme,
    required Animation<double> enableAnimation,
    required TextDirection textDirection,
    required Offset thumbCenter,
    required bool isEnabled,
  }) {
    assert(sliderTheme.disabledActiveTickMarkColor != null);
    assert(sliderTheme.disabledInactiveTickMarkColor != null);
    assert(sliderTheme.activeTickMarkColor != null);
    assert(sliderTheme.inactiveTickMarkColor != null);
    final double xOffset = center.dx - thumbCenter.dx;
    // BUG: weird bug, thumbCenter is not exactly center
    if (xOffset.abs() < 3) {
      center = thumbCenter;
    }
    final (Color? begin, Color? end) = switch (textDirection) {
      TextDirection.ltr when xOffset > 0 => (
          sliderTheme.disabledInactiveTickMarkColor,
          sliderTheme.inactiveTickMarkColor,
        ),
      TextDirection.rtl when xOffset < 0 => (
          sliderTheme.disabledInactiveTickMarkColor,
          sliderTheme.inactiveTickMarkColor,
        ),
      TextDirection.ltr || TextDirection.rtl => (
          sliderTheme.disabledActiveTickMarkColor,
          sliderTheme.valueIndicatorColor,
        ),
    };
    final Paint paint = Paint()
      ..color = ColorTween(begin: begin, end: end).evaluate(enableAnimation)!;
    // context.canvas.drawCircle(center, radius, paint);
    if (radius > 0) {
      final path = _getStarPath(radius: radius, center: center);
      debugPrint('star: $path');
      context.canvas.drawPath(path, paint);
    }
  }
}

Path _getStarPath({required double radius, required Offset center}) {
  final path = Path();
  // Define points for the star
  for (int i = 0; i < 5; i++) {
    var angle = (72 * i - 90) * (pi / 180.0);
    var x = radius * cos(angle);
    var y = radius * sin(angle);
    if (i == 0) {
      path.moveTo(center.dx + x, center.dy + y);
    } else {
      path.lineTo(center.dx + x, center.dy + y);
    }
    angle = (72 * i + 36 - 90) * (pi / 180.0);
    x = (radius / 2) * cos(angle);
    y = (radius / 2) * sin(angle);
    path.lineTo(center.dx + x, center.dy + y);
  }
  path.close();
  return path;
}
```
</details>
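For reference, the vertex geometry computed by `_getStarPath` in the code sample can be reproduced in a few lines of plain Python (illustration only; `star_points` is a hypothetical helper, not part of the Flutter code): the path alternates between five outer vertices at the full radius and five inner vertices at half the radius, starting from the top point at -90 degrees.

```python
import math

def star_points(radius, cx=0.0, cy=0.0):
    """Alternating outer/inner vertices of a 5-pointed star, starting at the top."""
    pts = []
    for i in range(5):
        outer = math.radians(72 * i - 90)
        pts.append((cx + radius * math.cos(outer),
                    cy + radius * math.sin(outer)))
        inner = math.radians(72 * i + 36 - 90)
        pts.append((cx + radius / 2 * math.cos(inner),
                    cy + radius / 2 * math.sin(inner)))
    return pts

pts = star_points(16)  # same radius as the tick marks above
```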
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>


</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.0, on macOS 14.5 23F79 darwin-arm64, locale en-RU)
• Flutter version 3.24.0 on channel stable at /opt/homebrew/Caskroom/flutter/3.10.4/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 80c2e84975 (2 weeks ago), 2024-07-30 23:06:49 +0700
• Engine revision b8800d88be
• Dart version 3.5.0
• DevTools version 2.37.2
[!] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/geara0/Library/Android/sdk
✗ cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"`
See https://developer.android.com/studio/command-line for more details.
✗ Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
See https://flutter.dev/to/macos-android-setup for more details.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2023.3)
• Android Studio at /Users/geara0/Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
[✓] IntelliJ IDEA Ultimate Edition (version 2024.1.4)
• IntelliJ at /Users/geara0/Applications/IntelliJ IDEA Ultimate.app
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
[✓] VS Code (version 1.91.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.92.0
[✓] Connected device (4 available)
• iPhone 15 Pro Max (mobile) • DD0BD5AF-6F73-4B6F-B5E1-922247A5BC5F • ios • com.apple.CoreSimulator.SimRuntime.iOS-17-4 (simulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.5 23F79 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.5 23F79 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 127.0.6533.120
! Error: Browsing on the local area network for iPhone HOME. Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| framework,f: material design,has reproducible steps,P2,team-design,triaged-design,found in release: 3.24 | low | Critical |
2,469,885,798 | material-ui | [joy-ui][Autocomplete] autoComplete, autoHighlight, autoSelect, includeInputInList and selectOnFocus are not working | ### Steps to reproduce
Link to live example: (required)
https://codesandbox.io/p/sandbox/autocomplete-bugs-3klkc2
Steps:
1. Go to the sandbox link and try it.
### Current behavior
`autoComplete`, `autoHighlight`, `autoSelect`, `includeInputInList` and `selectOnFocus` are not working as described.
Edit: In addition, the `disabledItemsFocusable` and `disablePortal` props are also affected.
[autocomplete-bugs.webm](https://github.com/user-attachments/assets/bc6ff4a5-2637-4d0e-8a90-b9fc2c550ac4)
### Expected behavior
_No response_
### Context
_No response_
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
Don't forget to mention which browser you used.
Output from `npx @mui/envinfo` goes here.
```
</details>
**Search keywords**: AutoComplete autoComplete autoHighlight autoSelect includeInputInList selectOnFocus | on hold,component: autocomplete,package: joy-ui | low | Critical |
2,469,921,581 | transformers | split head_dim from hidden_size for llama like gemma or mistral | ### Feature request
Split `head_dim` from `hidden_size` for Llama, like Gemma or Mistral.
### Motivation
Allow `head_dim` to be configured independently of `hidden_size`.
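As a schematic illustration (not the actual `transformers` config code), the difference is whether the config always derives the head dimension from the hidden size or accepts it as an independent field, as Gemma- and Mistral-style configs do:

```python
class TiedConfig:
    """Llama-style config: head_dim is always derived from hidden_size."""

    def __init__(self, hidden_size, num_attention_heads):
        self.hidden_size = hidden_size
        self.num_attention_heads = num_attention_heads
        self.head_dim = hidden_size // num_attention_heads


class DecoupledConfig:
    """Gemma/Mistral-style config: head_dim may be set independently."""

    def __init__(self, hidden_size, num_attention_heads, head_dim=None):
        self.hidden_size = hidden_size
        self.num_attention_heads = num_attention_heads
        # Fall back to the derived value only when head_dim is not given.
        self.head_dim = (
            head_dim if head_dim is not None
            else hidden_size // num_attention_heads
        )


tied = TiedConfig(4096, 32)            # head_dim forced to 4096 // 32 = 128
free = DecoupledConfig(4096, 32, 256)  # head_dim chosen independently
```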
### Your contribution
I can slightly revise the modeling code and submit a PR.
2,469,956,722 | godot | Godot scrolls the script editor annoyingly. | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
w10 64
### Issue description
Watch the video: every time I switch between the 2D editor and the script editor, I have to reposition the scroll in the script editor.
When I switch to the 2D editor and back to the script editor, Godot has moved the scroll position, and I have to reposition it to where I had it.
https://github.com/user-attachments/assets/7f09127d-0638-420e-8a0c-15fadf226f15
### Steps to reproduce
watch the video
### Minimal reproduction project (MRP)
... | bug,topic:editor,confirmed | low | Major |
2,469,973,253 | tensorflow | tritonserver preload trt plugin got warning message and many core files : Failed to compile generated PTX with ptxas. Falling back to compilation by driver. | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
binary
### TensorFlow version
tf 2.16.2
### Custom code
No
### OS platform and distribution
linux Ubuntu 22.04
### Mobile device
_No response_
### Python version
3.10
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Preloading a TensorRT plugin into Triton Server produces warning messages and many core dump files:
```
2024-08-16 10:09:14.975649: W tensorflow/compiler/mlir/tools/kernel_gen/transforms/gpu_kernel_to_blob_pass.cc:190] Failed to compile generated PTX with ptxas. Falling back to compilation by driver.
2024-08-16 10:09:16.033970: W tensorflow/compiler/mlir/tools/kernel_gen/transforms/gpu_kernel_to_blob_pass.cc:190] Failed to compile generated PTX with ptxas. Falling back to compilation by driver.
2024-08-16 10:09:16.701031: W tensorflow/compiler/mlir/tools/kernel_gen/transforms/gpu_kernel_to_blob_pass.cc:190] Failed to compile generated PTX with ptxas. Falling back to compilation by driver.
2024-08-16 10:09:17.498157: W tensorflow/compiler/mlir/tools/kernel_gen/transforms/gpu_kernel_to_blob_pass.cc:190] Failed to compile generated PTX with ptxas. Falling back to compilation by driver.
2024-08-16 10:09:18.328719: W tensorflow/compiler/mlir/tools/kernel_gen/transforms/gpu_kernel_to_blob_pass.cc:190] Failed to compile generated PTX with ptxas. Falling back to compilation by driver.
```

I have an ensemble model: the first part is the TF model, and the second part is the TRT model. I have a TRT plugin, which I load via LD_PRELOAD. There is no problem if I load the two models separately, but when I load them at the same time, this warning comes up and a lot of core dump files are produced. Why is that? I don't understand how the TRT plugin can affect TF.
### Standalone code to reproduce the issue
```shell
LD_PRELOAD=/app/lib/ops/libtrtplugin.so tritonserver --model-repository=/opt/model-repo-copy --model-control-mode=explicit --load-model=first_model --load-model=second_model --load-model=ensmble_model --log-verbose=0 --http-port=xxx --grpc-port=xxx --metrics-port=xxx --backend-config=tensorflow,version=2 --backend-config=tensorrt,version-compatible=true --disable-auto-complete-config
```
### Relevant log output
_No response_ | stat:awaiting tensorflower,type:bug,TF 2.16 | low | Critical |
2,470,016,886 | opencv | I have a Java service running on Windows that uses "opencv_java455.dll"; now I want to deploy the service on Ubuntu. How do I do it? | ### Describe the feature and motivation
I want compare two png picture similarity, I use the file "opencv_java455.dll" , but now I want to deployee the Java service in Ubantu System, How to replace it?
### Additional context
If you have had the same problem, how did you resolve it? | question (invalid tracker),incomplete | low | Minor |
2,470,044,842 | flutter | `ResidentWebRunner::attach` calls `ChromeTab::connect` without providing an error handler, risking top-level errors without stack traces | From https://github.com/flutter/flutter/issues/153298#issuecomment-2290900158 (shortened version):
> Looking at this code I also noticed that ResidentWebRunner neglects to pass onError callback to ChromeTab.connect. This seems like an oversight...This callback is used as an onError callback on the WebSocket [stream](https://github.com/google/webkit_inspection_protocol.dart/blob/119b877ae82bd2ca4cf7e5144d3a5ec104055164/lib/webkit_inspection_protocol.dart#L248-L252) once connection is established. Not passing onError means that all exceptions which occur later (after connection was established and upgraded to WebSocket) are going to be uncaught and will crash Flutter tool as well.
Said another way, we should pass an error handler in this `ChromeTab::connect` call:
https://github.com/flutter/flutter/blob/d23be7a07de3bd7f02db367fce52eff79021df24/packages/flutter_tools/lib/src/isolated/resident_web_runner.dart#L613
This error handler could simply rethrow the provided error with the provided stack trace via `Error.throwWithStackTrace(error, stackTrace)`. | P2,team-tool,triaged-tool | low | Critical |
2,470,055,320 | godot | Renaming the Skeleton3D does not correct fix track paths in AnimationPlayer | ### Tested versions
Reproducible: 4.3 stable mono
Working: 4.1.1 stable mono
### System information
Godot 4.3 mono - Windows 7
### Issue description
Previously, renaming the Skeleton3D while the AnimationPlayer was configured to use it (with the correct root node) would rename-fix the track paths to point to the renamed skeleton. In 4.3 this functionality is missing.
### Steps to reproduce
Import .glb file with some animations.
Place it into a scene.
Rename the Skeleton3D node.
### Minimal reproduction project (MRP)
[animation rename broken.zip](https://github.com/user-attachments/files/16636499/animation.rename.broken.zip) | bug,topic:editor,regression,topic:animation | low | Critical |
2,470,126,871 | godot | Creating a signal with an argument tries to link it before automatically making the function. | ### Tested versions
Occurred in:
- v4.2.2-stable
- v4.3-stable
### System information
Godot v4.3.stable - Windows 10.0.19045 - GLES3 (Compatibility) - Intel(R) HD Graphics 620 (Intel Corporation; 31.0.101.2111) - Intel(R) Core(TM) i5-7300U CPU @ 2.60GHz (4 Threads)
### Issue description
When creating a signal with an argument and letting it automatically create the function in the linked script, it gives this error right before creating the function.
`Cannot connect to 'focus_entered': the provided callable is not valid: Node2D(node_2d.gd)::_on_button_focus_entered`
- This occurs on any signal type.
- After this error it creates the function in the script.
- This error stops the signal from being created.
Upon attempting to make the signal again, it won't give the error and correctly adds the signal. My guess is that something checks whether the function exists before it makes it.
Attached is a video of me recreating this bug, but I forgot to record the signal creation screen; in that screen all I do is add a `bool` argument.
https://github.com/user-attachments/assets/678d0a19-4747-4d2a-8ded-cdba77dea08e
### Steps to reproduce
1. Create something a signal can be added to.
2. Link a script to your node.
3. Create a signal with an argument of any kind.
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,needs testing | low | Critical |
2,470,152,108 | flutter | Selecting TextField or TextFormField causes app to freeze for 20-40 seconds on Linux Mint. | ### Steps to reproduce
1. Created a flutter project for linux only using the command `flutter create --platforms=linux textfield_broken`.
2. Copy pasted the code from the "Interactive example" section at the bottom of the page at https://docs.flutter.dev/cookbook/forms/text-input.
3. Run it on Linux Mint 22 Cinnamon.
### Expected results
The app should run as shown on the cookbook page and work on every platform (Android, Linux, Windows, etc.), i.e., the TextField should receive text input and show it on screen, and the FocusNode should also work smoothly.
### Actual results
The code compiled on Linux Mint 22 and worked as expected on the web and on an Android phone.
However, on the main machine itself, the app froze as soon as any of the TextField or TextFormField widgets was selected.
Sometimes, after 20-40 seconds, the inputs are received and displayed on the screen and the app starts to work normally again, but unfocusing and clicking on the same or any other TextField (or similar widget) causes the app to freeze for the same amount of time again. The FocusNode freezes too, and so do the dev tools.
Usually, a `{app name} is not responding. Force Quit or Wait` dialog box pops up.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() => runApp(const MyApp());
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
const appTitle = 'Form Styling Demo';
return MaterialApp(
title: appTitle,
home: Scaffold(
appBar: AppBar(
title: const Text(appTitle),
),
body: const MyCustomForm(),
),
);
}
}
class MyCustomForm extends StatelessWidget {
const MyCustomForm({super.key});
@override
Widget build(BuildContext context) {
return Column(
crossAxisAlignment: CrossAxisAlignment.start,
children: <Widget>[
const Padding(
padding: EdgeInsets.symmetric(horizontal: 8, vertical: 16),
child: TextField(
decoration: InputDecoration(
border: OutlineInputBorder(),
hintText: 'Enter a search term',
),
),
),
Padding(
padding: const EdgeInsets.symmetric(horizontal: 8, vertical: 16),
child: TextFormField(
decoration: const InputDecoration(
border: UnderlineInputBorder(),
labelText: 'Enter your username',
),
),
),
],
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/4ca59f41-216f-4872-9ba4-2138b92bff83
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.0, on Linux Mint 22 6.8.0-40-generic, locale en_IN)
• Flutter version 3.24.0 on channel stable at /home/lav/dev/tooling/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 80c2e84975 (2 weeks ago), 2024-07-30 23:06:49 +0700
• Engine revision b8800d88be
• Dart version 3.5.0
• DevTools version 2.37.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /home/lav/Android/Sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /home/lav/dev/tooling/android-studio/jbr/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Chrome - develop for the web
• CHROME_EXECUTABLE = /usr/bin/brave-browser
[✓] Linux toolchain - develop for Linux desktop
• Ubuntu clang version 18.1.3 (1ubuntu1)
• cmake version 3.28.3
• ninja version 1.11.1
• pkg-config version 1.8.1
[✓] Android Studio (version 2024.1)
• Android Studio at /home/lav/dev/tooling/android-studio
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.92.2)
• VS Code at /usr/share/code
• Flutter extension version 3.94.0
[✓] Connected device (2 available)
• Linux (desktop) • linux • linux-x64 • Linux Mint 22 6.8.0-40-generic
• Chrome (web) • chrome • web-javascript • Brave Browser 127.1.68.141
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| a: text input,e: device-specific,c: performance,platform-linux,e: OS-version specific,P2,team-linux,triaged-linux | low | Critical |
2,470,157,593 | ant-design | Customize overall icons | ### What problem does this feature solve?
End-user experience could be improved if the icons for the following components could be replaced: Select, Time/Date Pickers, Pagination, etc. In short, any component that has an icon.
### What does the proposed API look like?
I guess the components could have an icon property such as `customIcon: React.ReactNode`.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 💡 Feature Request,Inactive | low | Minor |
2,470,212,032 | ollama | OLLAMA_ORIGINS environment variables appends instead of sets | ### What is the issue?
Setting "OLLAMA_ORIGINS=*://localhost,*://127.0.0.1" will result in these entries being **added** to the allowed origins. Is this intended? I thought they would override the defaults instead.
Before:
`OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*]`
After:
`OLLAMA_ORIGINS:[*://localhost *://127.0.0.1 http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*]`
Looking into https://github.com/ollama/ollama/blob/d29cd4c2ed104a1f6fba16a264c3cc7785a7d82f/envconfig/config.go#L66 it is also clear why: the default host list (including "0.0.0.0") is iterated and appended every time.
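For illustration, here is a minimal TypeScript sketch of the two possible semantics. This is not Ollama's actual Go code; the helper names and the exact default list are assumptions based on the output shown above:

```typescript
// Sketch only: models the append-vs-override question, not Ollama's real code.
const DEFAULT_HOSTS = ["localhost", "127.0.0.1", "0.0.0.0"];

function defaultOrigins(): string[] {
  const origins: string[] = [];
  for (const host of DEFAULT_HOSTS) {
    // Each default host is expanded into http/https, with and without a port.
    origins.push(`http://${host}`, `https://${host}`, `http://${host}:*`, `https://${host}:*`);
  }
  return origins.concat(["app://*", "file://*", "tauri://*"]);
}

// Current behavior as observed: env entries are prepended to the defaults.
function allowedOriginsAppend(env?: string): string[] {
  const extra = env ? env.split(",") : [];
  return [...extra, ...defaultOrigins()];
}

// Hypothetical overriding behavior: env entries replace the defaults entirely.
function allowedOriginsOverride(env?: string): string[] {
  return env ? env.split(",") : defaultOrigins();
}

console.log(allowedOriginsAppend("*://localhost,*://127.0.0.1").length);   // 17 (2 env + 15 defaults)
console.log(allowedOriginsOverride("*://localhost,*://127.0.0.1").length); // 2
```

Whether appending or overriding is the intended semantic is exactly the question here; the sketch just makes the two behaviors explicit.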
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.6 | bug | low | Minor |
2,470,218,537 | ollama | model xe/hermes3 doesn't correctly parse tool call tokens | ### What is the issue?
I uploaded Hermes3 to Ollama [here](https://ollama.com/xe/hermes3/blobs/afa6d473672a). The problem is that it isn't parsing the tool call syntax.
Hermes tool call syntax roughly looks like this:
```
<tool_call>
{"name": "code_interpreter", "arguments": {"code": "def reverse_list(lst):\n return lst[::-1]\n\noriginal = [1, 2, 3, 4, 5]\nreversed_list = reverse_list(original)\nprint('Original:', original)\nprint('Reversed:', reversed_list)"}}
</tool_call>
```
The raw response I get from ollama when doing a tool call for a code_interpreter tool that runs Python code looks like this:
```json
{"model":"xe/hermes3","created_at":"2024-08-16T12:38:56.878759Z","message":{"role":"assistant","content":"\n\u003ctool_call\u003e\n{\"name\": \"code_interpreter\", \"arguments\": {\"code\": \"def reverse_list(lst):\\n return lst[::-1]\\n\\noriginal = [1, 2, 3, 4, 5]\\nreversed_list = reverse_list(original)\\nprint('Original:', original)\\nprint('Reversed:', reversed_list)\"}}\n\u003c/tool_call\u003e"},"done_reason":"stop","done":true,"total_duration":2015048833,"load_duration":28571000,"prompt_eval_count":447,"prompt_eval_duration":490176000,"eval_count":77,"eval_duration":1492265000}
```
It looks like it's just literally returning the tool call tokens without parsing them. Here's the operative bit of [the template](https://ollama.com/xe/hermes3/blobs/afa6d473672a):
```jinja
<|im_start|>{{ .Role }}
{{- if and (eq .Role "assistant") .ToolCalls }}
{{- range .ToolCalls }}
<tool_call>
{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
</tool_call>
{{- end }}
{{- else }}
{{ .Content }}
{{- end }}<|im_end|>
```
What am I doing wrong here?
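As a client-side stopgap while the template isn't being matched, the raw content can be parsed manually. A minimal TypeScript sketch, assuming every `<tool_call>` block wraps exactly one JSON object in the syntax shown above (the regex and types are mine, not part of Ollama):

```typescript
interface ToolCall {
  name: string;
  arguments: Record<string, unknown>;
}

// Extract Hermes-style tool calls from unparsed assistant content.
function extractToolCalls(content: string): ToolCall[] {
  const calls: ToolCall[] = [];
  const re = /<tool_call>\s*([\s\S]*?)\s*<\/tool_call>/g;
  for (const match of content.matchAll(re)) {
    try {
      calls.push(JSON.parse(match[1]) as ToolCall);
    } catch {
      // Ignore blocks whose payload is not valid JSON.
    }
  }
  return calls;
}

const raw =
  '\n<tool_call>\n{"name": "code_interpreter", "arguments": {"code": "print(1)"}}\n</tool_call>';
console.log(extractToolCalls(raw)[0].name); // "code_interpreter"
```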
### OS
Linux, macOS
### GPU
Nvidia, Apple
### CPU
AMD, Apple
### Ollama version
0.3.6 | bug | low | Major |
2,470,252,229 | PowerToys | At the startup, auto launch a workspace | ### Description of the new feature / enhancement
With the new feature called [app layouts](https://www.theverge.com/2024/8/16/24221639/microsoft-powertoys-workspaces-feature-demo), which lets you save and quickly launch a group of apps placed where you want them, I think it would be great to have the possibility to set one up as a default that opens automatically at the startup of your PC.
### Scenario when this would be used?
Every time you boot up your PC. This would help everybody who always works with the same setup to access it quickly and be more productive.
### Supporting information
_No response_ | Idea-Enhancement,Status-In progress,Tracker,Product-Workspaces | low | Minor |
2,470,267,905 | godot | Corrupted scene when moving a GLB file and its animation files | ### Tested versions
4.3 stable
### System information
Windows 11
### Issue description
I have put all the files related to my character in a folder; that is, a .tscn, a .glb, and the extracted animations.
After moving this folder to another folder, all the scenes that use this character get tagged as corrupted.
After some research, I can see that the `.import` file did not update the `save_to_file/path` for the extracted animations:
```
[remap]
importer="scene"
importer_version=1
type="PackedScene"
uid="uid://duds002tisele"
valid=false
[deps]
source_file="res://characters/enemies/melee/minion/small_ennemy_animated.glb"
[params]
nodes/root_type=""
nodes/root_name="Skin"
nodes/apply_root_scale=true
nodes/root_scale=1.0
nodes/import_as_skeleton_bones=false
meshes/ensure_tangents=true
meshes/generate_lods=true
meshes/create_shadow_meshes=true
meshes/light_baking=1
meshes/lightmap_texel_size=0.2
meshes/force_disable_compression=false
skins/use_named_skins=true
animation/import=true
animation/fps=30
animation/trimming=false
animation/remove_immutable_tracks=true
animation/import_rest_as_RESET=false
import_script/path=""
_subresources={
"animations": {
"Attack": {
"save_to_file/enabled": true,
"save_to_file/keep_custom_tracks": true,
"save_to_file/path": "res://enemies/melee/minion/Attack.res",
```
`res://enemies/melee/minion/Attack.res` should now be `res://characters/enemies/melee/minion/Attack.res`
Correcting these paths fixed the issue.
Also, note that it might be good to give more details on why a scene is corrupted and tell the user that it is the GLB import that is corrupted.
### Steps to reproduce
- Import a GLB into a folder
- Save an inherited scene
- extract the animation using the Advanced Import, into the same folder or a child folder
- move the parent folder to a new place
### Minimal reproduction project (MRP)
NA | bug,needs testing,topic:import,topic:animation,topic:3d | low | Minor |
2,470,276,375 | flutter | pointerCount is not correct on GestureDetector.onScaleEnd() when quickly releasing both fingers | ### Steps to reproduce
1. Touch with one finger
2. Touch with a second finger
3. Release both fingers
### Expected results
If both fingers are released one by one => "onScaleEnd 1" -> "onScaleEnd 0" in the console.
If both fingers are released at the same time => same as above.
### Actual results
If both fingers are released one by one => "onScaleEnd 1" -> "onScaleEnd 0" in the console.
If both fingers are released at the same time => only "onScaleEnd 1" in the console.
### Code sample
<details open><summary>Code sample</summary>
```dart
class Test extends StatefulWidget {
const Test({super.key});
@override
State<Test> createState() => _TestState();
}
class _TestState extends State<Test> {
@override
Widget build(BuildContext context) {
return Scaffold(
body: GestureDetector(
behavior: HitTestBehavior.opaque,
onScaleEnd: (detail) {
print("onScaleEnd ${detail.pointerCount}");
},
child: SizedBox.expand(),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[!] Flutter (Channel stable, 3.22.3, on macOS 14.0 23A344 darwin-x64, locale en-VN)
! Warning: `flutter` on your path resolves to /Users/xuantung/Desktop/Develop/flutter/3.19.6/flutter/bin/flutter, which is not inside
your current Flutter SDK checkout at /Users/xuantung/Desktop/Develop/flutter/3.22.3/flutter. Consider adding
/Users/xuantung/Desktop/Develop/flutter/3.22.3/flutter/bin to the front of your path.
! Warning: `dart` on your path resolves to /Users/xuantung/Desktop/Develop/flutter/3.19.6/flutter/bin/dart, which is not inside your
current Flutter SDK checkout at /Users/xuantung/Desktop/Develop/flutter/3.22.3/flutter. Consider adding
/Users/xuantung/Desktop/Develop/flutter/3.22.3/flutter/bin to the front of your path.
[!] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
✗ cmdline-tools component is missing
Run `path/to/sdkmanager --install "cmdline-tools;latest"`
See https://developer.android.com/studio/command-line for more details.
✗ Android license status unknown.
Run `flutter doctor --android-licenses` to accept the SDK licenses.
See https://flutter.dev/docs/get-started/install/macos#android-setup for more details.
[✓] Xcode - develop for iOS and macOS (Xcode 15.1)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2023.1)
[✓] VS Code (version 1.92.1)
[✓] Connected device (3 available)
! Error: Browsing on the local area network for iPhone. Ensure the device is unlocked and attached with a cable or associated with the
same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
! Doctor found issues in 2 categories.
```
</details>
| framework,f: gestures,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.24,found in release: 3.25 | low | Critical |
2,470,342,151 | PowerToys | Fancy Zones when restoring from "maximize window" | ### Description of the new feature / enhancement
I like to arrange multiple windows with FancyZones. When I need to view one of these windows in more detail, I maximize it. When restoring the window, it doesn't snap back to the FancyZones layout, but to the position it had before FancyZones positioning was used.
### Scenario when this would be used?
It would be great to have an option so that the FancyZones position is stored as the current window position, so the window snaps back to it after being maximized and restored.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,470,349,766 | kubernetes | featuregates_linter fails to update the corresponding files | ### Repro
```diff
diff --git a/pkg/features/kube_features.go b/pkg/features/kube_features.go
index 80a25132bca..fb598b95ddd 100644
--- a/pkg/features/kube_features.go
+++ b/pkg/features/kube_features.go
@@ -1125,7 +1125,7 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS
MinDomainsInPodTopologySpread: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.32
- MultiCIDRServiceAllocator: {Default: false, PreRelease: featuregate.Beta},
+ MultiCIDRServiceAllocator: {Default: true, PreRelease: featuregate.Beta},
NewVolumeManagerReconstruction: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.32
```
```sh
hack/update-featuregates.sh
found 158 features in FeatureSpecMap var defaultKubernetesFeatureGates in file: /usr/local/google/home/aojea/src/kubernetes/pkg/features/kube_features.go
found 2 features in FeatureSpecMap var defaultKubernetesFeatureGates in file: /usr/local/google/home/aojea/src/kubernetes/staging/src/k8s.io/apiextensions-apiserver/pkg/features/kube_features.go
found 37 features in FeatureSpecMap var defaultKubernetesFeatureGates in file: /usr/local/google/home/aojea/src/kubernetes/staging/src/k8s.io/apiserver/pkg/features/kube_features.go
found 3 features in FeatureSpecMap of func featureGates in file: /usr/local/google/home/aojea/src/kubernetes/staging/src/k8s.io/component-base/logs/api/v1/kube_features.go
found 1 features in FeatureSpecMap of func featureGates in file: /usr/local/google/home/aojea/src/kubernetes/staging/src/k8s.io/component-base/metrics/features/kube_features.go
found 3 features in FeatureSpecMap var cloudPublicFeatureGates in file: /usr/local/google/home/aojea/src/kubernetes/staging/src/k8s.io/controller-manager/pkg/features/kube_features.go
panic: feature MultiCIDRServiceAllocator changed with diff: cmd.featureInfo{
Name: "MultiCIDRServiceAllocator",
FullName: "",
VersionedSpecs: []cmd.featureSpec{
{
- Default: false,
+ Default: true,
LockToDefault: false,
PreRelease: "Beta",
Version: "",
},
},
}
goroutine 1 [running]:
k8s.io/kubernetes/test/featuregates_linter/cmd.updateFeatureListFunc(0xc0001f4d00?, {0x69953d?, 0x4?, 0x6994ed?})
/usr/local/google/home/aojea/src/kubernetes/test/featuregates_linter/cmd/feature_gates.go:108 +0x91
github.com/spf13/cobra.(*Command).execute(0xc0001fc608, {0x8c8000, 0x0, 0x0})
/usr/local/google/home/aojea/src/kubernetes/vendor/github.com/spf13/cobra/command.go:989 +0xa91
github.com/spf13/cobra.(*Command).ExecuteC(0x8a3dc0)
/usr/local/google/home/aojea/src/kubernetes/vendor/github.com/spf13/cobra/command.go:1117 +0x3ff
github.com/spf13/cobra.(*Command).Execute(...)
/usr/local/google/home/aojea/src/kubernetes/vendor/github.com/spf13/cobra/command.go:1041
k8s.io/kubernetes/test/featuregates_linter/cmd.Execute()
/usr/local/google/home/aojea/src/kubernetes/test/featuregates_linter/cmd/root.go:32 +0x1a
main.main()
/usr/local/google/home/aojea/src/kubernetes/test/featuregates_linter/main.go:22 +0xf
```
`verifyFeatureDeletionOnly` does not take into account that the command is being executed to update the files, and fails, so it never updates the corresponding files.
| kind/bug,sig/api-machinery,triage/accepted | low | Major |
2,470,370,373 | ollama | Vision model prompt eval count in response is null or 1, what should be expected? | ### What is the issue?
When calling the chat completion for the following vision models, I get the following prompt eval count value.
llava:7b-v1.6-mistral-q4_0 -> prompt eval count `1`
llava-llama3:8b-v1.1-q4_0 -> prompt eval count `null`
llava-phi3:3.8b-mini-fp16 -> prompt eval count `1`
I don't know what I should expect for the prompt eval count when calling vision models. I'm not sure why llava-llama3 returned null when it generated a response.
What should be expected when calling vision models?
An example of the prompt is the following. The same image is used.
System
You are a helpful AI assistant
User
Describe what is in this image
Image
<an image>
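Until the expected semantics are documented, a client can treat `null`, `0`, and `1` as "not reported" rather than a real token count. A minimal TypeScript sketch (the type below is my assumption about the relevant response fields, not the full API schema):

```typescript
interface ChatResponseCounts {
  prompt_eval_count?: number | null;
  eval_count?: number | null;
}

// Return the prompt token count, or null when the value looks like a placeholder.
function promptTokens(res: ChatResponseCounts): number | null {
  const n = res.prompt_eval_count;
  return typeof n === "number" && n > 1 ? n : null;
}

console.log(promptTokens({ prompt_eval_count: 447 })); // 447
console.log(promptTokens({ prompt_eval_count: 1 }));   // null (llava / llava-phi3 case)
console.log(promptTokens({ prompt_eval_count: null })); // null (llava-llama3 case)
```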
### OS
Linux
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.3.6 | bug | low | Minor |
2,470,415,847 | rust | `compiletest::runtest`: `aggressive_rm_rf` does not like symlinks on Windows | While reviewing #128562 and testing I discovered that `runtest.rs`'s `aggressive_rm_rf` is not aggressive enough, unfortunately, because symlink-to-dir and symlink-to-file on Windows need different handling.
I have not considered junction points and hard links.
This can cause a rmake.rs test that creates symlinks to have its output artifacts fail to be removed on non-fresh test runs. | O-windows,T-compiler,T-bootstrap,C-bug,A-compiletest,A-run-make | low | Minor |
2,470,418,799 | godot | CollisionPolygon2D sprite doesn't follow parent when set to "Top Level" while "Visible Collision Shapes" is on, although parent still collides. | ### Tested versions
Done in 4.3, vaguely remember this not being an issue in 4.2.x
### System information
Godot v4.3.stable (77dcf97d8) - Windows 10.0.22631 - Vulkan (Forward+) - integrated AMD Radeon(TM) Graphics (Advanced Micro Devices, Inc.; 31.0.12042.4) - AMD Ryzen 5 5600G with Radeon Graphics (12 Threads)
### Issue description
As the title says. This doesn't happen when set to "Show Behind", nor does it happen with CollisionShape2D.

As you can see, I can still manipulate the small object (which has gravity) by balancing it on top of me, showing I still have collision, while the sprite of the CollisionPolygon2D just hangs there.

This also happens in the editor when moving the parent, which the child should follow. (The marker is for the Node2D and background)
### Steps to reproduce
Just make a CharacterBody2D with some movement code and give it a CollisionPolygon2D as a child on the top level, while having visible collision shapes. The parent still collides according to the shape of the CollisionPolygon2D, but the sprite doesn't move. Just a small oversight in the source code, I assume.
### Minimal reproduction project (MRP)
[ayo-4.3.zip](https://github.com/user-attachments/files/16638325/ayo-4.3.zip)
No clue if I did it right. The description above should suffice though. | bug,needs testing,topic:2d | low | Minor |
2,470,458,053 | PowerToys | Improvements to the new feature to be released (Workspace App Layouts ) | ### Description of the new feature / enhancement
It has been announced that the following new feature will be released (Workspace App Layouts)

I'm suggesting the following improvements for this feature (if there is something that will be included, please ignore it)
**1-** Start the programs saved in the layouts with Windows (with the possibility of choosing which ones to start)
**2-** Virtual Desktop support, so that each layout can be started in its respective Virtual Desktop
**3-** Custom/automatic icon for the shortcut created on the desktop/taskbar, with small icons of the grouped programs that are in the layout
**4-** Choose whether you want the started programs to be grouped together in the taskbar with a single icon, or the way it already works (each program with its own icon in the taskbar).
**5-** Support for the Windows Snap Assist feature so that when we resize windows, their new position/size is automatically saved in the saved layouts.
### Scenario when this would be used?
In multitasking, we need to have our programs open in their respective positions automatically.
When we restart Windows, the positions and sizes of the programs are not remembered, nor is the order in which they are inserted in the Virtual Desktop. So it takes a lot of time to organize each of the programs in their respective Virtual Desktop groups or not.
### Supporting information
_No response_ | Needs-Triage | low | Major |
2,470,459,438 | godot | Projectile-like RigidBody3D loses its initial velocity when launched from a non-zero position with a specific direction | ### Tested versions
- Reproducible in 4.3.x
### System information
Godot v4.3.rc2 - Windows 10.0.19045 - Vulkan (Forward+) - integrated AMD Radeon(TM) R3 Graphics (Advanced Micro Devices, Inc.; 27.20.20913.2000) - AMD A4-9120 RADEON R3, 4 COMPUTE CORES 2C+2G (2 Threads)
### Issue description
*Sorry for the bad English*
I was implementing the mechanics of firing physical bullets for my project. When shooting in all directions, I noticed that the bullets were quite slow when I shot at lower or upper angles of roughly 45 degrees.
## Normal velocity, when looking at this direction

## Low velocity, when looking at THIS direction

Also, I found things that somehow make this bug disappear. Take a look at this `Gun.gd` script fragment:
```gdscript
func Launch_Rigid_Body_Projectile(Collision_Data, _proj):
var _Point = Collision_Data[1]
var _Norm = Collision_Data[2]
#var _proj = _projectile.instantiate() # IGNORE THIS LINE
#$bullet_hint.add_child(_proj) # UNCOMMENT THIS
add_child(_proj) # AND COMMENT THIS, TO ENABLE BUG
print(_proj.position)
#$bullet_hint.call_deferred("add_child", _proj) # IGNORE THIS TOO
_proj.position = $"../Gun/bullet_hint".position # ALSO DISABLING THIS = NO BUG
var _Direction = (_Point - _proj.global_transform.origin).normalized()
print(_Direction*1000)
_proj.set_as_top_level(true)
_proj.set_linear_velocity(_Direction*1000)
```
The script consists of parts that I didn't write, so it looks bad.
So, is this a bug, or am I missing something?
### Steps to reproduce
- Download the project and launch it.
  - Pay attention to the `output` panel and to `Gun.gd`; the script prints a position and velocity at the moment of a shot.
### Minimal reproduction project (MRP)
[velocity trouble.zip](https://github.com/user-attachments/files/16638544/velocity.trouble.zip) | topic:physics,needs testing,topic:3d | low | Critical |
2,470,473,927 | flutter | Utilize the `crashreport` (Android SDK) tool for crashing emulator tests | Forked from https://github.com/flutter/flutter/issues/153445.
We rolled to a new version of AVD (https://github.com/flutter/flutter/pull/153520), and it should be possible to use the `sdk/emulator/crashreport` tool (something new) to read and send the contents of `emu-crash.db` to Android (go/crash). One suggestion would be to do this during any CI test that uses an emulator and fails.
I have to return to other work, but I'll keep this ticket open and come back to it - but maybe someone else will first 🤞🏼 | platform-android,dependency: android,team-infra,P1,blocked,c: tech-debt,triaged-infra | medium | Critical |
2,470,498,446 | pytorch | [torchbind x PT2] Handle vllm's ScalarType | There's a use case for TorchBind objects that are just constant metadata.
There are two options here:
- we develop some mechanism for unflattening/flattening the object so that it does not appear in the schema
- we add an option to mark a TorchBind class as side-effect-free. If it is side-effect-free, then we don't need to generate effect tokens for it. | triaged,module: torchbind,vllm-compile | low | Minor |
2,470,502,284 | deno | `deno doc --html` generates incorrect links when searching from a subpage | When typing a search term into the search bar in a subpage (as in, not `index.html`) of the documentation generated using `deno doc --html <entryPoint.ts>`, the search results may link to the page in the wrong directory level.
Steps
1. Generate the documentation for a project using `deno doc --html <entryPoint.ts>`. This should generate the doc files in `<projectRoot>/docs`
2. [Correct Link] Open `<projectRoot>/docs/index.html` in a browser and search for an existing symbol in the project. Clicking one of the results correctly opens the link to `<projectRoot>/docs/~/<result>.html` The href link will be `./~/<result>.html`. `index.html`'s content data for the meta element `doc-current-file`, which is used for calculating the path for the link according to the code in `search.js`, is an empty string.
3. [Incorrect Link] Open any subpage, say `<projectRoot>/docs/Foo.html`. Search for an existing symbol in the project. Clicking one of the results opens a link to `<projectRoot>/~/<result>.html` which doesn't exist. The href link will be `../.././~/<result>.html` (there's an extra "../" here). `Foo.html`'s content data for the meta element `doc-current-file` is `"."`.
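A hypothetical reconstruction of the prefix computation (the real `search.js` logic may differ; the function names here are mine). It shows how treating `"."` as a one-segment path produces one `../` too many, and how normalizing it first avoids that:

```typescript
// Naive prefix: one "../" per path segment of the `doc-current-file` value.
function naivePrefix(currentFile: string): string {
  const depth = currentFile === "" ? 0 : currentFile.split("/").length;
  return "../".repeat(depth) + "./";
}

// Fixed variant: normalize "." to the empty path before counting segments.
function normalizedPrefix(currentFile: string): string {
  return naivePrefix(currentFile === "." ? "" : currentFile);
}

console.log(naivePrefix(""));       // "./"    -> ./~/<result>.html (correct)
console.log(naivePrefix("."));      // ".././" -> climbs out of docs/
console.log(normalizedPrefix(".")); // "./"    -> stays inside docs/
```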
Version: Deno 1.45.5 | bug | low | Minor |
2,470,506,663 | deno | WebSocket memory leak | Version: Deno 1.45.5
OS: Amazon Linux or Ubuntu Linux
Steps: start the server and the client scripts; the server memory should quickly go up. Then stop the client script to disconnect. Because the socket has been disconnected, I expect the memory to eventually go down, but it stays high for a very long time.
Here is the log in my case:
<img width="567" alt="Screenshot 2024-08-16 at 23 35 58" src="https://github.com/user-attachments/assets/3b025a64-db0f-479e-ae67-896686263c18">
Server: spam a client with numerous messages containing a long string. Notice I don’t check the socket’s `bufferedAmount` field.
```typescript
const LONG_STRING = 'x'.repeat(1024 * 1024);
const handlers: ClientHandler[] = [];
setInterval(() => {
const mem = (Deno.memoryUsage().rss / 1024 / 1024).toFixed();
console.log('srv_stat', { mem });
}, 2500);
Deno.serve({ port: 3333 }, (req) => {
if (req.headers.get('upgrade') != 'websocket') {
return new Response(null, { status: 501 });
}
const wsUpgrade = Deno.upgradeWebSocket(req);
handlers.push(new ClientHandler(wsUpgrade.socket));
return wsUpgrade.response;
});
class ClientHandler {
socket: WebSocket | null;
isSubscribed = false;
constructor(socket: WebSocket) {
this.socket = socket;
this.socket.addEventListener('close', this.handleClose);
this.socket.addEventListener('error', this.handleError);
this.socket.addEventListener('open', this.handleOpen);
this.socket.addEventListener('message', this.handleMessage);
}
close() {
this.socket?.close();
this.socket = null;
}
async runSendLoop() {
while (true) {
if (this.socket === null) break;
if (this.socket.readyState === WebSocket.CLOSED) break;
if (this.socket.readyState === WebSocket.CLOSING) break;
for (let i = 0; i < 10; i++) {
this.socket.send(LONG_STRING);
}
await this.delay(0);
}
console.log('closed_send_loop');
}
delay(ms: number) {
return new Promise((resolve) => setTimeout(resolve, ms));
}
handleClose = () => {
console.log('ws_close');
if (this.socket === null) return;
this.socket.removeEventListener('close', this.handleClose);
this.socket.removeEventListener('error', this.handleError);
this.socket.removeEventListener('open', this.handleOpen);
this.socket.removeEventListener('message', this.handleMessage);
this.socket = null;
};
handleOpen = () => {
console.log('ws_open');
};
handleError = (_ev: Event) => {
const errEv = _ev as unknown as ErrorEvent;
console.log('ws_err', errEv.message);
};
handleMessage = (ev: MessageEvent) => {
const data = ev.data as string;
if (data === 'subscribe' && !this.isSubscribed) {
this.isSubscribed = true;
console.log('subscribed');
this.runSendLoop();
return;
}
};
}
Deno.addSignalListener('SIGINT', () => {
for (const handler of handlers) {
handler.close();
}
Deno.exit();
});
```
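For reference, the part of this pattern that makes memory balloon while sending is that `WebSocket.send()` never blocks; it only queues bytes, so a tight loop can outrun the peer and the outgoing buffer grows without bound. A toy Python model of the naive loop versus a `bufferedAmount`-style high-water-mark check (the `ToySocket` class is a made-up stand-in, not Deno's implementation):

```python
from collections import deque

class ToySocket:
    """Made-up stand-in for a non-blocking WebSocket sender."""
    def __init__(self):
        self._queue = deque()

    def send(self, msg: str) -> None:  # never blocks, only queues
        self._queue.append(msg)

    @property
    def buffered_amount(self) -> int:  # bytes still waiting to be flushed
        return sum(len(m) for m in self._queue)

MSG = "x" * 1024

# Naive loop (like runSendLoop above): the buffer grows unboundedly
# whenever the peer reads slower than we enqueue.
naive = ToySocket()
for _ in range(100):
    naive.send(MSG)
assert naive.buffered_amount == 100 * 1024

# Backpressure-aware loop: stop enqueueing past a high-water mark.
careful = ToySocket()
HIGH_WATER = 16 * 1024
while careful.buffered_amount + len(MSG) <= HIGH_WATER:
    careful.send(MSG)
assert careful.buffered_amount <= HIGH_WATER
```

Note this only models why RSS climbs during the run; the bug reported here is that the memory is not reclaimed after the socket closes.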
Client: very simple, it just sends a "subscribe" string and keeps receiving messages from the server
```typescript
class Client {
socket?: WebSocket;
msgCount = 0;
msgTotLen = 0;
handleOpen = () => {
console.log('ws_open');
this.socket!.send('subscribe');
};
handleClose = () => {
console.log('ws_close');
};
handleMessage = (ev: MessageEvent) => {
this.msgCount += 1;
this.msgTotLen += (ev.data as string).length;
};
handleError = (e: Event) => {
const errEvent = e as unknown as ErrorEvent;
console.log('ws_err', errEvent.message);
};
open() {
this.socket = new WebSocket('http://localhost:3333');
this.socket.addEventListener('open', this.handleOpen);
this.socket.addEventListener('close', this.handleClose);
this.socket.addEventListener('error', this.handleError);
this.socket.addEventListener('message', this.handleMessage);
}
close() {
this.socket?.close();
}
}
const client = new Client();
client.open();
Deno.addSignalListener('SIGINT', () => {
client.close();
setTimeout(() => Deno.exit(), 1000);
});
setInterval(() => {
const mem = (Deno.memoryUsage().rss / 1024 / 1024).toFixed();
console.log('stats', {
mem,
cnt: client.msgCount,
len: (client.msgTotLen / 1e6).toFixed(1) + 'm',
});
}, 2500);
``` | bug,ext/websocket | low | Critical |
2,470,514,082 | pytorch | TORCHDYNAMO_REPRO_AFTER=dynamo TORCHDYNAMO_REPRO_LEVEL=4 for accuracy debugging fails on extremely simple example | ### 🐛 Describe the bug
Hello again. While debugging NaN issues with torch.compile, I found that a simple `nn.Sequential` subclass that redefines `forward` throws an error.
### Error logs
/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_inductor/compile_fx.py:150: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting `torch.set_float32_matmul_precision('high')` for better performance.
warnings.warn(
W0816 18:40:55.872000 139890329966400 torch/fx/experimental/symbolic_shapes.py:4075] [0/0] Failing guard allocated at:
W0816 18:40:55.872000 139890329966400 torch/fx/experimental/symbolic_shapes.py:4075] [0/0]
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] Error while creating guard:
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] Name: ''
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] Source: shape_env
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] Create Function: SHAPE_ENV
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] Guard Types: None
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] Code List: None
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] Object Weakref: None
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] Guarded Class Weakref: None
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] Traceback (most recent call last):
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_guards.py", line 260, in create
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] return self.create_fn(builder, self)
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/guards.py", line 1717, in SHAPE_ENV
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] guards = output_graph.shape_env.produce_guards(
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 4084, in produce_guards
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] issue_guard(guard)
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 4048, in issue_guard
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] guard_expr = ShapeGuardPrinter(symbol_to_source, source_ref, self.var_to_sources).doprint(expr)
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/sympy/printing/printer.py", line 292, in doprint
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] return self._str(self._print(expr))
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/sympy/printing/printer.py", line 331, in _print
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] return printmethod(expr, **kwargs)
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/sympy/printing/str.py", line 778, in _print_Relational
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] return '%s %s %s' % (self.parenthesize(expr.lhs, precedence(expr)),
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/sympy/printing/str.py", line 38, in parenthesize
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] return self._print(item)
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/sympy/printing/printer.py", line 331, in _print
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] return printmethod(expr, **kwargs)
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/sympy/printing/str.py", line 364, in _print_Mul
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] a_str = [self.parenthesize(x, prec, strict=False) for x in a]
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/sympy/printing/str.py", line 364, in <listcomp>
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] a_str = [self.parenthesize(x, prec, strict=False) for x in a]
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/sympy/printing/str.py", line 38, in parenthesize
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] return self._print(item)
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/sympy/printing/printer.py", line 331, in _print
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] return printmethod(expr, **kwargs)
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 1491, in _print_Symbol
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] assert self.symbol_to_source.get(expr), (
E0816 18:40:55.873000 139890329966400 torch/_guards.py:262] [0/0] AssertionError: s0 (could be from ['__meta_utils_unknown_tensor10.size()[0]']) not in {s1: [], s0: []}. If this assert is failing, it could be due to the issue described in https://github.com/pytorch/pytorch/pull/90665
E0816 18:40:55.875000 139890329966400 torch/_guards.py:264] [0/0] Created at:
E0816 18:40:55.875000 139890329966400 torch/_guards.py:264] [0/0] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 564, in transform
E0816 18:40:55.875000 139890329966400 torch/_guards.py:264] [0/0] tracer = InstructionTranslator(
E0816 18:40:55.875000 139890329966400 torch/_guards.py:264] [0/0] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2360, in __init__
E0816 18:40:55.875000 139890329966400 torch/_guards.py:264] [0/0] output=OutputGraph(
E0816 18:40:55.875000 139890329966400 torch/_guards.py:264] [0/0] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 313, in __init__
E0816 18:40:55.875000 139890329966400 torch/_guards.py:264] [0/0] self.init_ambient_guards()
E0816 18:40:55.875000 139890329966400 torch/_guards.py:264] [0/0] File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/output_graph.py", line 452, in init_ambient_guards
E0816 18:40:55.875000 139890329966400 torch/_guards.py:264] [0/0] self.guards.add(ShapeEnvSource().make_guard(GuardBuilder.SHAPE_ENV))
Traceback (most recent call last):
File "/home/user//test.py", line 21, in <module>
out = model(x)
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1551, in _wrapped_call_impl
return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 433, in _fn
return fn(*args, **kwargs)
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1116, in __call__
return self._torchdynamo_orig_callable(
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 948, in __call__
result = self._inner_convert(
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 472, in __call__
return _compile(
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_utils_internal.py", line 84, in wrapper_function
return StrobelightCompileTimeProfiler.profile_compile_time(
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_strobelight/compile_time_profiler.py", line 129, in profile_compile_time
return func(*args, **kwargs)
File "/home/user/anaconda3/envs/python310/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 817, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 726, in compile_inner
check_fn = CheckFunctionManager(
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/guards.py", line 2130, in __init__
guard.create(builder)
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_guards.py", line 260, in create
return self.create_fn(builder, self)
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/_dynamo/guards.py", line 1717, in SHAPE_ENV
guards = output_graph.shape_env.produce_guards(
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 4084, in produce_guards
issue_guard(guard)
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 4048, in issue_guard
guard_expr = ShapeGuardPrinter(symbol_to_source, source_ref, self.var_to_sources).doprint(expr)
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/sympy/printing/printer.py", line 292, in doprint
return self._str(self._print(expr))
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/sympy/printing/printer.py", line 331, in _print
return printmethod(expr, **kwargs)
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/sympy/printing/str.py", line 778, in _print_Relational
return '%s %s %s' % (self.parenthesize(expr.lhs, precedence(expr)),
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/sympy/printing/str.py", line 38, in parenthesize
return self._print(item)
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/sympy/printing/printer.py", line 331, in _print
return printmethod(expr, **kwargs)
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/sympy/printing/str.py", line 364, in _print_Mul
a_str = [self.parenthesize(x, prec, strict=False) for x in a]
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/sympy/printing/str.py", line 364, in <listcomp>
a_str = [self.parenthesize(x, prec, strict=False) for x in a]
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/sympy/printing/str.py", line 38, in parenthesize
return self._print(item)
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/sympy/printing/printer.py", line 331, in _print
return printmethod(expr, **kwargs)
File "/home/user/anaconda3/envs/python310/lib/python3.10/site-packages/torch/fx/experimental/symbolic_shapes.py", line 1491, in _print_Symbol
assert self.symbol_to_source.get(expr), (
AssertionError: s0 (could be from ['__meta_utils_unknown_tensor10.size()[0]']) not in {s1: [], s0: []}. If this assert is failing, it could be due to the issue described in https://github.com/pytorch/pytorch/pull/90665
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
### Minified repro
```
import torch
device = torch.device("cuda:0")
# this one works fine
# class Test(torch.nn.Sequential):
# pass
class Test(torch.nn.Sequential):
def forward(self, x):
for layer in self:
x = layer(x)
return x
model = Test(torch.nn.Linear(64, 64)).to(device)
model.compile()
x = torch.randn(4, 16, 64, device=device)
out = model(x)
```
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.30.0
Libc version: glibc-2.35
Python version: 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:36:39) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.4.210-39.1-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB
Nvidia driver version: 550.54.14
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 120
On-line CPU(s) list: 0-119
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7662 64-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 120
Stepping: 0
BogoMIPS: 3992.43
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities
Virtualization: AMD-V
L1d cache: 7.5 MiB (120 instances)
L1i cache: 7.5 MiB (120 instances)
L2 cache: 60 MiB (120 instances)
L3 cache: 1.9 GiB (120 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-29
NUMA node1 CPU(s): 30-59
NUMA node2 CPU(s): 60-89
NUMA node3 CPU(s): 90-119
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flake8==5.0.4
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.3
[pip3] open-clip-torch==2.24.0
[pip3] pytorch-warmup==0.1.1
[pip3] torch==2.4.0+cu124
[pip3] torch-model-archiver==0.11.1
[pip3] torch-tb-profiler==0.4.3
[pip3] torch-workflow-archiver==0.2.14
[pip3] torchaudio==2.4.0+cu124
[pip3] torchdata==0.7.1
[pip3] torchmetrics==1.4.0.post0
[pip3] torchserve==0.11.1
[pip3] torchvision==0.19.0+cu124
[pip3] triton==3.0.0
[conda] numpy 1.26.3 pypi_0 pypi
[conda] open-clip-torch 2.24.0 pypi_0 pypi
[conda] torch 2.4.0+cu124 pypi_0 pypi
[conda] torch-model-archiver 0.11.1 pypi_0 pypi
[conda] torch-tb-profiler 0.4.3 pypi_0 pypi
[conda] torch-workflow-archiver 0.2.14 pypi_0 pypi
[conda] torchaudio 2.4.0+cu124 pypi_0 pypi
[conda] torchdata 0.7.1 pypi_0 pypi
[conda] torchmetrics 1.4.0.post0 pypi_0 pypi
[conda] torchserve 0.11.1 pypi_0 pypi
[conda] torchvision 0.19.0+cu124 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @ezyang @chauhang @penguinwu | triaged,oncall: pt2,module: minifier | low | Critical |
2,470,516,488 | flutter | Productionize evaluating the stability of android emulators | Flutter android needs a way to evaluate the stability of a version of newly released android emulators before updating engine/framework/packages code to use the emulators in presubmit.
@matanlurey hand rolled some github actions that launch an emulator and run over and over to check base stability.
https://github.com/matanlurey/flutter-renegade-gha/blob/main/.github/workflows/ci.yaml
https://github.com/matanlurey/flutter/blob/try-crash-driver-ci/dev/bots/test.dart
https://github.com/matanlurey/flutter-renegade-gha/actions
https://github.com/matanlurey/flutter/tree/try-crash-driver-ci
Productionize that workflow so that any flutter-android team member can test an emulator version for stability as part of updating our code to a new API level.
2,470,518,727 | flutter | [iOS 18] Fix Eye Tracking's scroll to top/bottom | ### Problem
Eye Tracking is a new accessibility feature added in iOS 18: https://www.apple.com/newsroom/2024/05/apple-announces-new-accessibility-features-including-eye-tracking/
Eye Tracking's scroll to top and scroll to bottom don't do anything on a Flutter app.
> [!NOTE]
> Eye Tracking's scroll up and scroll down works as expected on a Flutter app.
### Steps
1. Turn on Eye Tracking: navigate to the **Settings** app > **Accessibility** > **Eye Tracking**.
2. Run the Cupertino scrollbar example: [`examples/api/lib/cupertino/scrollbar/cupertino_scrollbar.0.dart`](https://github.com/flutter/flutter/blob/master/examples/api/lib/cupertino/scrollbar/cupertino_scrollbar.0.dart)
3. Open the Eye Tracking menu, select **Scroll**, select **Scroll to Bottom**
<img src="https://github.com/user-attachments/assets/3a359137-8f85-492a-8390-ca1672a7b39a" width="400px" />
<img src="https://github.com/user-attachments/assets/ae615ad1-2655-42e5-9cde-09816579f5f3" width="400px" />
| platform-ios,a: accessibility,P2,team-ios,triaged-ios | low | Minor |
2,470,519,463 | terminal | The URL highlighting fade effect may look unpleasant with TUI programs | ### Windows Terminal version
1.21.1772.0
### Windows build number
10.0.19045.3448
### Other Software
Far Manager 3.0.6359.0 x64
### Steps to reproduce
1. Run FAR Manager
2. Create a new file and insert the line:
   https://aka.ms/terminal-documentation
3. Save
4. Set the mouse cursor over the URL
5. Press Esc (exit from the editor to the panel)
### Expected Behavior
no strange visual effects
### Actual Behavior
The fade effect of the URL highlighting can look strange (like a flickering line) with TUI programs, for example FAR Manager (in the example video, it shows when exiting from the editor).
https://github.com/user-attachments/assets/1a474490-e660-4ab5-a554-72792c67b0d7
| Issue-Bug,Area-UserInterface,Product-Terminal,Priority-3 | low | Minor |
2,470,549,638 | godot | @tool script object added to scene do not always get accurately recorded into .tscn | ### Tested versions
- found in `v4.2.2.stable.official` [15073afe3]
### System information
Godot v4.2.2.stable - Windows 10.0.19045 - Vulkan (Forward+)
### Issue description
# Context
I'm building a plugin to add configured versions of a complex scene into our level scene.
# Issue
Under some circumstances the added objects are not faithfully saved into the `.tscn` even though the Scene Tree appears correct in editor.
~~The editor tab does not show `(*)` indicated unsaved changes and even if I make changes then force save the it doesn't save the addon-added objects.~~ Per conversation below this is because I'm not using the EditorUndoRedoPlugin. I added that into the workflow and now get the `(*)` badge until save but it doesn't impact the issue of incorrectly persisted scenes.
If you close the scene and reopen or play the scene some configuration is lost.
# Expectation
What I see in the editor will be saved into the tscn and persist into play or through an unload/load cycle.
### Steps to reproduce
I'm condensing my situation, so this is at best an approximation, but it _should_ be sufficient to break things.
### Setup
1. Have scene that will be manipulated: `GreyboxObject.tscn`
```gdscript
@tool
class_name GreyboxObject
extends Node2D
@export_category("Functionality")
@export var can_block_movement: bool = true: # it's important that the default here is set to true
get:
return can_block_movement
set(value):
can_block_movement = value
# on-update code elided
func _ready() -> void:
if Engine.is_editor_hint():
get_parent().set_editable_instance(self, true)
```
2. At some point while setting your GreyboxObject up in the editor, decide that you don't want `can_block_movement` to default to true, and change `can_block_movement` to false. In `GreyboxObject.tscn` this now shows up in the editor as a non-default value, e.g., it can be reverted
3. Have an addon that creates `GreyboxObject`s and configures them
```gdscript
...
var obj: GreyboxObject = preload(GREYBOX_OBJECT_SCENE).instantiate()
obj.name = name
_objects_parent.add_child(obj)
# the owner of the new greybox object is the level (Object's parent's parent)
# by convention this should always be the edited scene
obj.owner = _objects_parent.get_parent()
obj.can_block_movement = collides
```
### Repro
At this point, if I use the addon code to create a new greybox object in `SomeLevel.tscn` that should collide, it'll get added to the scene and correctly configured -- basically `can_block_movement` triggers adding `RigidBody2D` etc. so that you can configure colliders -- but it's impossible to get the object to be saved into `SomeLevel.tscn` with `can_block_movement=true`.
Without that `can_block_movement` value being persisted, loading the GreyboxObject results in incorrect state relative to what you see in the editor immediately after adding it and having it report as "saved"
### tl;dr
I had three layers of values.
1. Code: true (not the primitive's default)
2. sceneA using the above code: false (resets to primitive's default)
3. sceneB adding sceneA: true
Basically because 1 and 3 match the resulting scene isn't detecting the value as dirty/needing to be saved.
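The suspected mechanism can be sketched in a few lines (illustrative Python only, not Godot's actual serializer): if saving diffs each property against the script default instead of against the instanced scene's saved value, a level-scene override that happens to equal the script default is dropped:

```python
def save_scene(values: dict, base_defaults: dict) -> dict:
    """Toy model of .tscn saving: only properties that differ from
    the comparison baseline get written out."""
    return {k: v for k, v in values.items() if base_defaults.get(k) != v}

script_default = {"can_block_movement": True}        # layer 1: code default

# Layer 2: GreyboxObject.tscn flips it to False -> correctly saved.
scene_a = save_scene({"can_block_movement": False}, script_default)
assert scene_a == {"can_block_movement": False}

# Layer 3: the level sets it back to True. Diffing against the script
# default instead of scene A's saved value makes it look unchanged...
scene_b = save_scene({"can_block_movement": True}, script_default)
assert scene_b == {}                                 # ...so nothing is saved

# Loading then falls back to scene A's saved value: False, not True.
effective = {**script_default, **scene_a, **scene_b}
assert effective["can_block_movement"] is False
```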
### Minimal reproduction project (MRP)
TBD, I tried to include sufficient information above but will also try to carve out time to add a MRP as well | topic:editor,topic:plugin,needs testing | low | Major |
2,470,575,174 | material-ui | [material-ui][Menu] Allow Menu Items to be focused via keyboard when wrapped in <span> and <Tooltip> tags | ### Summary
While trying to create a keyboard-accessible dropdown menu, I found that MenuItems wrapped in a `<Tooltip>` and a `<span>` became inaccessible and broke the accessibility of the dropdown menu. The Menu itself was selectable via keyboard, but the items within it that were wrapped in both `<Tooltip>` and `<span>` were completely inaccessible.
Example of an inaccessible MenuItem:
```js
<Tooltip title={"This is menu item 1"} placement="top" arrow>
<span>
<MenuItem>
Menu Item 1
</MenuItem>
</span>
</Tooltip>
```
### Examples
This menu is not keyboard accessible.
Specific keystrokes were `tab` to select the menu, `return` to open, and then I tried both `tab` and arrow keys to try and focus a specific Menu Item.
```js
<Button onClick={handleMenuOpen}>Dropdown Menu</Button>
<Menu
open={isOpen}
onClose={handleMenuClose}
>
<Tooltip title={"This is menu item 1"} placement="top" arrow>
<span>
<MenuItem>
Menu Item 1
</MenuItem>
</span>
</Tooltip>
<Tooltip title={"This is menu item 2"} placement="top" arrow>
<MenuItem>
Menu Item 2
</MenuItem>
</Tooltip>
</Menu>
```
This example is keyboard accessible. When opened with `tab` + `return` the first menu item is auto-focused.
```js
<Button onClick={handleMenuOpen}>Dropdown Menu</Button>
<Menu
open={isOpen}
onClose={handleMenuClose}
>
<Tooltip title={"This is menu item 1"} placement="top" arrow>
<MenuItem>
Menu Item 1
</MenuItem>
</Tooltip>
<Tooltip title={"This is menu item 2"} placement="top" arrow>
<MenuItem>
Menu Item 2
</MenuItem>
</Tooltip>
</Menu>
```
Both examples were built by using `create-react-app` to generate a clean application, and then MUI was installed in the application.
<details><summary>Expanded code snippets</summary>
## Full code for breaking PoC app
```js
import { useState } from 'react';
import { Button, Menu, MenuItem, Tooltip } from '@mui/material';
function App() {
const [isOpen, setIsOpen] = useState(false)
const handleMenuOpen = () => {
setIsOpen(true)
}
const handleMenuClose = () => {
setIsOpen(false)
}
return (
<div className="App">
<header className="App-header">
<Button onClick={handleMenuOpen}>Dropdown Menu</Button>
<Menu
open={isOpen}
onClose={handleMenuClose}
>
<Tooltip title={"This is menu item 1"} placement="top" arrow>
<span>
<MenuItem>
Menu Item 1
</MenuItem>
</span>
</Tooltip>
<Tooltip title={"This is menu item 2"} placement="top" arrow>
<MenuItem>
Menu Item 2
</MenuItem>
</Tooltip>
</Menu>
</header>
</div>
);
}
export default App;
```
</details>
<details><summary>Relevant `package.json` properties</summary>
```js
"dependencies": {
"@emotion/react": "^11.11.4",
"@emotion/styled": "^11.11.5",
"@mui/material": "^5.15.20",
"@testing-library/jest-dom": "^5.17.0",
"@testing-library/react": "^13.4.0",
"@testing-library/user-event": "^13.5.0",
"react": "^18.3.1",
"react-dom": "^18.3.1",
"react-scripts": "5.0.1",
"web-vitals": "^2.1.4"
},
```
</details>
### Motivation
I am trying to create a keyboard accessible dropdown menu with MenuItems that are wrapped in `<span>` tags for formatting, styling, and some minor control functions.
**Search keywords**: MenuItem Span Accessibility | component: menu,package: material-ui | low | Minor |
2,470,593,309 | flutter | Predictive Back Gesture animation doesn't and can't implement material design spec to match native behavior | ### Steps to reproduce
Use `PredictiveBackPageTransitionsBuilder()`
### Expected results
The same animations and behavior as the native apps and system.
Spec: https://developer.android.com/design/ui/mobile/guides/patterns/predictive-back
### Actual results
In the "pre-commit" part of the animation:
1. the front page abruptly disappears when swiping across 35% of the screen
2. the front and back pages ignore whether the gesture started from the left or right edge
3. the front and back pages don't move up and down following the finger
In the "post-commit" part of the animation:
4. the animation goes way too fast (<100ms, should be ~300ms)
5. the animation blocks all user inputs for about 700ms after the animation is visually finished
6. the animation is the same as the "pre-commit" one, which is not what happens in the rest of Android and in the spec.
`1.` and `6.` can be worked around by writing a custom `PageTransitionsBuilder`.
`2.` and `3.` can be partially worked around: only for the front page.
`4.` and `5.` cannot be worked around without editing the SDK source code itself.
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
flutter doctor -v devenv-shell-env
[✓] Flutter (Channel master, 3.24.0-1.0.pre.605, on NixOS 24.05 (Uakari) 6.8.12, locale en_US.UTF-8)
• Flutter version 3.24.0-1.0.pre.605 on channel master at /home/paulg/Repos/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision bced008679 (14 hours ago), 2024-08-15 22:29:24 -0400
• Engine revision a8fefc8118
• Dart version 3.4.3
• DevTools version 2.38.0
[✓] Android toolchain - develop for Android devices (Android SDK version 30.0.3)
• Android SDK at /nix/store/nzwnj7rq8a6cvzbsszf4n9zfvx101zm5-androidsdk/libexec/android-sdk
• Platform android-34, build-tools 30.0.3
• ANDROID_HOME = /nix/store/nzwnj7rq8a6cvzbsszf4n9zfvx101zm5-androidsdk/libexec/android-sdk
• ANDROID_SDK_ROOT = /nix/store/nzwnj7rq8a6cvzbsszf4n9zfvx101zm5-androidsdk/libexec/android-sdk
• Java binary at: /nix/store/l7pwy1rxdrb13svqandm720d06fycd8c-openjdk-17.0.11+9/lib/openjdk/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+9-nixos)
• All Android licenses accepted.
[✓] Chrome - develop for the web
• CHROME_EXECUTABLE = /nix/store/cg5bvzxml5bbdw8wlrywb6q7hpdjq0ij-google-chrome-127.0.6533.88/bin/google-chrome-stable
[✓] Linux toolchain - develop for Linux desktop
• clang version 16.0.6
• cmake version 3.29.6
• ninja version 1.12.1
• pkg-config version 0.29.2
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/docs/get-started/install/linux#android-setup for detailed instructions).
[✓] Connected device (3 available)
• Pixel 7 Pro (mobile) • 192.168.1.13:43591 • android-arm64 • Android 15 (API 35)
• Linux (desktop) • linux • linux-x64 • NixOS 24.05 (Uakari) 6.8.12
• Chrome (web) • chrome • web-javascript • Google Chrome 127.0.6533.88
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| platform-android,framework,f: routes,P2,team-framework,triaged-framework | low | Major |
2,470,600,563 | flutter | Package integration_test tests timing out | Example https://github.com/flutter/packages/pull/7226 (this isn't related to Xcode beta, we're seeing it elsewhere like https://github.com/flutter/packages/pull/7422/checks?check_run_id=28830793413).
Some, but not all, `integration_test` tests are timing out. Example: `google_maps_flutter_ios` timing out running `google_maps_test.dart`:
```
11:58 +0: loading /Volumes/Work/s/w/ir/x/w/packages/packages/google_maps_flutter/google_maps_flutter_ios/example/ios14/integration_test/google_maps_test.dart
11:59 +0: loading /Volumes/Work/s/w/ir/x/w/packages/packages/google_maps_flutter/google_maps_flutter_ios/example/ios14/integration_test/google_maps_test.dart
12:00 +0: loading /Volumes/Work/s/w/ir/x/w/packages/packages/google_maps_flutter/google_maps_flutter_ios/example/ios14/integration_test/google_maps_test.dart
12:00 +0 -1: loading /Volumes/Work/s/w/ir/x/w/packages/packages/google_maps_flutter/google_maps_flutter_ios/example/ios14/integration_test/google_maps_test.dart [E]
TimeoutException after 0:12:00.000000: Test timed out after 12 minutes.
package:test_api/src/backend/invoker.dart 338:28 Invoker._handleError.<fn>
dart:async/zone.dart 1391:47 _rootRun
dart:async/zone.dart 1301:19 _CustomZone.run
package:test_api/src/backend/invoker.dart 336:10 Invoker._handleError
package:test_api/src/backend/invoker.dart 291:9 Invoker.heartbeat.<fn>.<fn>
dart:async/zone.dart 1399:13 _rootRun
dart:async/zone.dart 1301:19 _CustomZone.run
package:test_api/src/backend/invoker.dart 290:38 Invoker.heartbeat.<fn>
dart:async-patch/timer_patch.dart 18:15 Timer._createTimer.<fn>
dart:isolate-patch/timer_impl.dart 398:19 _Timer._runTimers
dart:isolate-patch/timer_impl.dart 429:5 _Timer._handleMessage
dart:isolate-patch/isolate_patch.dart 184:12 _RawReceivePort._handleMessage
To run this test again: /Volumes/Work/s/w/ir/x/w/flutter/bin/cache/dart-sdk/bin/dart test /Volumes/Work/s/w/ir/x/w/packages/packages/google_maps_flutter/google_maps_flutter_ios/example/ios14/integration_test/google_maps_test.dart -p vm --plain-name 'loading /Volumes/Work/s/w/ir/x/w/packages/packages/google_maps_flutter/google_maps_flutter_ios/example/ios14/integration_test/google_maps_test.dart'
12:00 +0 -1: Some tests failed.
```
https://logs.chromium.org/logs/flutter/buildbucket/cr-buildbucket/8739766869905812641/+/u/Run_package_tests/drive_examples/stdout
I wasn't able to reproduce these timeouts locally on the same version of Xcode:
```
00:04 +0: ... /Users/m/Projects/packages/packages/google_maps_flutter/google_maps_flutter_ios/example/ios14/integration_test/google_maps_test.dart R00:05 +0: ... /Users/m/Projects/packages/packages/google_maps_flutter/google_maps_flutter_ios/example/ios14/integration_test/google_maps_test.dart 1,076ms
00:22 +0: ... /Users/m/Projects/packages/packages/google_maps_flutter/google_maps_flutter_ios/example/ios14/integration_test/google_maps_test.dart
00:28 +0: ... /Users/m/Projects/packages/packages/google_maps_flutter/google_maps_flutter_ios/example/ios14/integration_test/google_maps_test.dart 6.7s
Xcode build done. 23.5s
00:45 +31 ~5: All tests passed!
```
You can see in the same run there are other `drive` tests that pass on the same simulator after the failure, like `image_picker_ios` (so it's not like the simulator isn't ever launching, for example).
Another example:
https://logs.chromium.org/logs/flutter/buildbucket/cr-buildbucket/8739766869749591089/+/u/Run_package_tests/drive_examples/stdout
I'm guessing the app isn't launching? Hard to tell what's going on without screenshots. | package,P2,team-ios,triaged-ios | low | Critical |
2,470,602,427 | rust | rustdoc type-based seach: allow search by lifetime parameters | many libraries give lifetimes meaningful names, such as [the `object` crate](https://docs.rs/object/latest/object)
being able to navigate these libraries by typing lifetimes like `'data` into the search bar would be quite helpful. additionally, searching for `'_` should show all types that have a lifetime parameter (useful for finding reference-like types). | T-rustdoc,A-lifetimes,C-feature-request,A-rustdoc-search | low | Minor |
2,470,625,906 | TypeScript | Long running encodedSemanticClassifications-full request | ### 🔎 Search Terms
@RyanCavanaugh [requested](https://github.com/microsoft/TypeScript/issues/54459#issuecomment-2293800187) that I create a new issue
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about 5.5.4
### ⏯ Playground Link
_No response_
### 💻 Code
- Clone [example repo](https://github.com/roryscot/tsbugexample/tree/main)
- run `npm i`
- navigate to [line 26](https://github.com/roryscot/tsbugexample/blob/63a1f479c8f91d70d41147b3d6d4d7e542e0e8f4/src/components/UserProfile/UserProfile.tsx#L26) in VSCode
- try to use auto-suggestion menu (`Command + .`)
The warning `React Hook useEffect has a missing dependency: 'getProfile'. Either include it or remove the dependency array.eslint[react-hooks/exhaustive-deps](https://github.com/facebook/react/issues/14920)` should appear on [line 26](https://github.com/roryscot/tsbugexample/blob/63a1f479c8f91d70d41147b3d6d4d7e542e0e8f4/src/components/UserProfile/UserProfile.tsx#L26)
### 🙁 Actual behavior
`2024-08-15 18:57:48.766 [trace] <semantic> Response received: encodedSemanticClassifications-full (210). Request took 155445 ms. Success: true { "spans": [ 467, 11, 2817, 493, 5, 2560, 500, 4, 2089, 509, 10, 2816, 520, 11, 2056, 543, 7, 2089, 552, 10, 2857, 566, 8, 2816, 575, 12, 1536, 600, 6, 2089, 609, 10, 2816, 620, 21, 2056, 686, 10, 2857, 699, 11, 2816, 737, 4, 2560, 743, 8, 2089, 762, 6, 2088, 769, 6, 2560, 776, 11, 2560, 788, 4, 3072, 801, 6, 2561, 819, 5, 2561, 838, 2, 2561, 842, 4, 2088, 848, 10, 2560, 899, 8, 2088, 908, 6, 2576, 918, 10, 2856, 929, 8, 2088, 949, 6, 2088, 956, 6, 2560, 963, 11, 2560, 976, 4, 2088, 987, 9, 2816, 1009, 10, 2856, 1029, 6, 2088, 1036, 6, 2560, 1043, 11, 2560, 1056, 4, 2088, 1124, 7, 2088, 1160, 4, 2561, 1178, 7, 2561, 1205, 8, 2561, 1240, 5, 2561, 1259, 7, 2561, 1286, 8, 2561, 1321, 12, 2561, 1347, 7, 2561, 1374, 8, 2561, 1409, 6, 2561, 1429, 7, 2561, 1456, 8, 2561 ], "endOfLineState": 0 }`
The `encodedSemanticClassifications-full` request took about two and a half minutes (155,445 ms).
### 🙂 Expected behavior
encodedSemanticClassifications-full should take less than a few seconds.
### Additional information about the issue
When I remove the [import for Schema](https://github.com/roryscot/tsbugexample/blob/63a1f479c8f91d70d41147b3d6d4d7e542e0e8f4/src/components/UserProfile/UserProfile.tsx#L5), everything seems to work as expected. The Schema is a somewhat complex generated type whose definition is [here](https://github.com/roryscot/tsbugexample/blob/63a1f479c8f91d70d41147b3d6d4d7e542e0e8f4/amplify/data/schemas/index.ts#L55). | Bug | low | Critical |
2,470,655,369 | tensorflow | TFLite Model used in official documentation doesn't compile on Edge TPU Compiler | ### 1. System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 11
- TensorFlow installation (pip package or built from source): pip
- TensorFlow library (version, if pip package or github SHA, if built from source):
- pip show tensorflow = 2.17.0
- tf.__version__ = 2.18.0-dev20240815
### 2. Code
Using the code from [Post-training integer quantization](https://www.tensorflow.org/lite/performance/post_training_integer_quant) official tutorial to create and convert a TensorFlow model to TFlite:
``` python
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)
import tensorflow as tf
import numpy as np
print("TensorFlow version: ", tf.__version__)
# Load MNIST dataset
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images.astype(np.float32) / 255.0
test_images = test_images.astype(np.float32) / 255.0
# Define the model architecture
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True),
metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=5, validation_data=(test_images, test_labels))
def representative_data_gen():
for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
yield [input_value]
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Ensure that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.target_spec.supported_types = [tf.int8]
# Set the input and output tensors to uint8 (APIs added in r2.3)
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.experimental_new_quantizer = True
tflite_model_quant = converter.convert()
import pathlib
tflite_models_dir = pathlib.Path("mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_model_quant)
```
Which displays the following output:
```
2024-08-16 18:07:19.094940: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-08-16 18:07:20.845501: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
TensorFlow version: 2.18.0-dev20240815
2024-08-16 18:07:25.704749: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
Epoch 1/5
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 7s 3ms/step - accuracy: 0.8672 - loss: 0.4860 - val_accuracy: 0.9722 - val_loss: 0.0965
Epoch 2/5
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 7s 4ms/step - accuracy: 0.9738 - loss: 0.0919 - val_accuracy: 0.9768 - val_loss: 0.0735
Epoch 3/5
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 8s 4ms/step - accuracy: 0.9797 - loss: 0.0696 - val_accuracy: 0.9799 - val_loss: 0.0622
Epoch 4/5
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 7s 3ms/step - accuracy: 0.9833 - loss: 0.0565 - val_accuracy: 0.9800 - val_loss: 0.0600
Epoch 5/5
1875/1875 ━━━━━━━━━━━━━━━━━━━━ 6s 3ms/step - accuracy: 0.9843 - loss: 0.0524 - val_accuracy: 0.9813 - val_loss: 0.0595
INFO:tensorflow:Assets written to: C:\Users\Me\AppData\Local\Temp\tmptd9h6442\assets
INFO:tensorflow:Assets written to: C:\Users\Me\AppData\Local\Temp\tmptd9h6442\assets
Saved artifact at 'C:\Users\Me\AppData\Local\Temp\tmptd9h6442'. The following endpoints are available:
* Endpoint 'serve'
args_0 (POSITIONAL_ONLY): TensorSpec(shape=(None, 28, 28), dtype=tf.float32, name='keras_tensor')
Output Type:
TensorSpec(shape=(None, 10), dtype=tf.float32, name=None)
Captures:
1897857613456: TensorSpec(shape=(), dtype=tf.resource, name=None)
1897859106576: TensorSpec(shape=(), dtype=tf.resource, name=None)
1897857613072: TensorSpec(shape=(), dtype=tf.resource, name=None)
1897857612688: TensorSpec(shape=(), dtype=tf.resource, name=None)
C:\Users\Me\.pyenv\pyenv-win\versions\3.12.4\Lib\site-packages\tensorflow\lite\python\convert.py:983: UserWarning: Statistics for quantized inputs were expected, but not specified; continuing anyway.
warnings.warn(
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
W0000 00:00:1723828081.771633 14420 tf_tfl_flatbuffer_helpers.cc:359] Ignored output_format.
W0000 00:00:1723828081.772124 14420 tf_tfl_flatbuffer_helpers.cc:362] Ignored drop_control_dependency.
2024-08-16 18:08:01.772979: I tensorflow/cc/saved_model/reader.cc:83] Reading SavedModel from: C:\Users\Me\AppData\Local\Temp\tmptd9h6442
2024-08-16 18:08:01.774123: I tensorflow/cc/saved_model/reader.cc:52] Reading meta graph with tags { serve }
2024-08-16 18:08:01.774323: I tensorflow/cc/saved_model/reader.cc:147] Reading SavedModel debug info (if present) from: C:\Users\Me\AppData\Local\Temp\tmptd9h6442
I0000 00:00:1723828081.779428 14420 mlir_graph_optimization_pass.cc:401] MLIR V1 optimization pass is not enabled
2024-08-16 18:08:01.780783: I tensorflow/cc/saved_model/loader.cc:236] Restoring SavedModel bundle.
2024-08-16 18:08:01.819711: I tensorflow/cc/saved_model/loader.cc:220] Running initialization op on SavedModel bundle at path: C:\Users\Me\AppData\Local\Temp\tmptd9h6442
2024-08-16 18:08:01.830586: I tensorflow/cc/saved_model/loader.cc:462] SavedModel load for tags { serve }; Status: success: OK. Took 57614 microseconds.
2024-08-16 18:08:01.847290: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:268] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2024-08-16 18:08:02.073373: I tensorflow/core/framework/local_rendezvous.cc:404] Local rendezvous is aborting with status: OUT_OF_RANGE: End of sequence
fully_quantize: 0, inference_type: 6, input_inference_type: UINT8, output_inference_type: UINT8
```
### 3. Failure after conversion
When I then run the newly created TFLite file through the `edgetpu_compiler` (via Docker) it fails saying it still has dynamic-sized tensors:
```
docker run --rm -it -v .:/home/edgetpu edgetpu-compiler edgetpu_compiler mnist_model_quant.tflite
Edge TPU Compiler version 16.0.384591198
Started a compilation timeout timer of 180 seconds.
ERROR: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors.
Compilation failed: Model failed in Tflite interpreter. Please ensure model can be loaded/run in Tflite interpreter.
Compilation child process completed within timeout period.
Compilation failed!
```
Any idea how I can fully convert the model to static-sized tensors? I tried the [suggestion](https://github.com/tensorflow/tensorflow/issues/57905#issuecomment-1292720032) of using `converter._experimental_new_quantizer` but that didn't help. | stat:awaiting tensorflower,comp:lite,TFLiteConverter,2.17 | low | Critical |
2,470,683,806 | godot | AudioStreamSynchronized.set_sync_stream_volume() doesn't appear in the auto-fill list | ### Tested versions
- Reproducible in v4.3.stable.official [77dcf97d8], v4.3.beta1.official [a4f2ea91a], v4.3.beta2.official [b75f0485b], v4.3.beta3.official [82cedc83c], v4.3.rc1.official [e343dbbcc], v4.3.rc2.official [3978628c6], and v4.3.rc3.official [03afb92ef]
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - integrated Intel(R) UHD Graphics (Intel Corporation; 31.0.101.5186) - 13th Gen Intel(R) Core(TM) i7-1355U (12 Threads)
### Issue description
When you assign an `AudioStreamSynchronized` resource to a variable and then try to type the `set_sync_stream_volume()` function, the autocomplete (IntelliSense) list does not include `set_sync_stream_volume()`. I expected it to list the function while typing, since I statically declared the variable's type as `AudioStreamSynchronized`.
```gdscript
extends CharacterBody2D
const SPEED = 300.0
const JUMP_VELOCITY = -400.0
@onready var adaptive_stuffs: AudioStreamPlayer = $AdaptiveStuffs
var moosic: AudioStreamSynchronized
func _ready() -> void:
if adaptive_stuffs.stream is AudioStreamSynchronized:
moosic = adaptive_stuffs.stream
print_debug("Dis oki")
else:
print_debug("Dis not oki")
moosic.set_sync_stream_volume(1, -13.17)
```
### Steps to reproduce
1. Set a variable (audioNode for this example) as an AudioStreamPlayer node
2. Set a variable (syncResource for this example) statically as a AudioStreamSynchronized class
3. Set syncResource as audioNode.stream
4. try to type in set_sync_stream_volume(), and notice how the set_sync_stream_volume function doesn't appear in the intellisense autocomplete list
### Minimal reproduction project (MRP)
[Bug-Test-Project.zip](https://github.com/user-attachments/files/16639809/Bug-Test-Project.zip)
| bug,discussion,topic:gdscript,topic:editor,confirmed | low | Critical |
2,470,705,948 | pytorch | Add `beta`, `betainc`, etc. to `torch.special` | ### 🚀 The feature, motivation and pitch
Scipy provides the following functions in its `special` module:
| Function | Description |
|----------|-------------|
| [`beta(a, b[, out])`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.beta.html#scipy.special.beta) | Beta function. |
| [`betaln(a, b[, out])`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.betaln.html#scipy.special.betaln) | Natural logarithm of the absolute value of the beta function. |
| [`betainc(a, b, x[, out])`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.betainc.html#scipy.special.betainc) | Regularized incomplete beta function. |
| [`betaincc(a, b, x[, out])`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.betaincc.html#scipy.special.betaincc) | Complement of the regularized incomplete beta function. |
| [`betaincinv(a, b, y[, out])`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.betaincinv.html#scipy.special.betaincinv) | Inverse of the regularized incomplete beta function. |
| [`betainccinv(a, b, y[, out])`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.special.betainccinv.html#scipy.special.betainccinv) | Inverse of the complemented regularized incomplete beta function. |
These functions are crucial in various domains related to probabilistic programming and statistics.
For my specific use case, I am implementing closed-form solutions of the CRPS (a probabilistic equivalent of the mean absolute error) for various distributions, using multiple backends (numpy, pytorch, jax, tensorflow). For some distributions, these functions are required; however, PyTorch is the only backend that does not implement them. See https://github.com/frazane/scoringrules/issues/13
It would be great to add them to pytorch!
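For context, `beta` and `betaln` can already be polyfilled from the log-gamma function, which PyTorch provides as `torch.lgamma` / `torch.special.gammaln`; the incomplete-beta family (`betainc` and friends) has no such closed form and is the real gap. A minimal sketch of the composition, using the stdlib for clarity:

```python
import math

def betaln(a: float, b: float) -> float:
    # ln B(a, b) = ln Gamma(a) + ln Gamma(b) - ln Gamma(a + b)
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta(a: float, b: float) -> float:
    return math.exp(betaln(a, b))

# B(2, 3) = Gamma(2)Gamma(3)/Gamma(5) = 1*2/24 = 1/12
print(beta(2.0, 3.0))  # ≈ 0.0833 (= 1/12)
```

The same two formulas apply elementwise to tensors by swapping `math.lgamma`/`math.exp` for `torch.lgamma`/`torch.exp`, so the beta functions themselves are easy workarounds today; `betainc` and its inverses require a series or continued-fraction evaluation and need a native kernel.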
### Alternatives
_No response_
### Additional context
_No response_
cc @fritzo @neerajprad @alicanb @nikitaved | module: distributions,triaged | low | Critical |
2,470,710,460 | TypeScript | 5.6 regression: Incorrect param type inference for type with all optional props | ### 🔎 Search Terms
inference, param
### 🕗 Version & Regression Information
- This changed between versions 5.5.4 and 5.6.0
### ⏯ Playground Link
https://www.typescriptlang.org/play/?ts=5.6.0-dev.20240816#code/JYOwLgpgTgZghgYwgAgMoHsC2EAqBPABxQG8AoZC5AeiuQHkRkBWAOgDYWAGAGmQgA8iUYNnDIARnmSY4Aa1ABzZGAAWwAM7ICUdAWRQIARwCuwAwBMW5SsZDATEAAo6CAfgBcydWGEgFAblIAX1JScwgEABs4A2QEdBBvZHRxdU86VOgANzhxSIgAHgxsfCIAPkDSGFsEMGAE5BiFAHEIMHUAdR0-UogACgBKdMyoHLzC4iCy5DJKfTbjKEYU9RYCYCI+sDgCPqbkAF5p2bnKJsDTyhoAPVdrS5pkAFFBCMhzT2LcQhRQZhZWAAWe6nR4AQVqxjgkU8FEmyD+WVYHE4AFpxG04Mp0P8UajwlkWAAmThEwGcAAcAEY2Mg9mBkPk4N4BiCggMBoEQlUanUGk1Wu0AMLoKAGWq9QYzEEGMCLZapNYbfrbXb7I7Sy4Uc4guY3O5a6i0MGRADucDwmniYrebI5XNCoEgsEQKAy6myuXyBRwxxB6yIBTBZT6unSQjgYFFADFefUQD7eMGhvQRmNvcGHU7oPAkMgAKogGJ4WMgWrxxPIABKxzp6nQiyQnhwKar-mQIWzLrzdAjUagpfLCUrNb4-EgIHMmkLxcHfIT7s94x9ZV4i9GXsKNdrnfAOddyAAsgl0L1e9BIzG48PfWOJ1P6H2r2X55Xb8QO2EItFYtUX-HlB2Fc+hAAQwE8PockiYwIGbAZDmmLJ0GAcwU2PEBTx+c8oEvAdrwTX1AiAA
### 💻 Code
```ts
interface SomeType {
// On 5.6.0, experiment by making this prop required.
uniqueProp?: string;
}
declare const obs: Observable<SomeType>;
function argGetsWrongType(): Observable<{}> {
return obs.pipe(tap(arg => {
arg;
//^?
// Expected: SomeType in 5.5.4
// Actual: {} in v5.6.0-beta to 5.6.0-dev.20240816 (at least)
}));
}
function argGetsCorrectType() {
return obs.pipe(tap(arg => {
arg;
//^?
// Always correct
}));
}
interface Observable<T> {
pipe<A>(op: OperatorFunction<T, A>): Observable<A>;
}
interface UnaryFunction<T, R> { (source: T): R; }
interface OperatorFunction<T, R> extends UnaryFunction<Observable<T>, Observable<R>> { }
interface MonoTypeOperatorFunction<T> extends OperatorFunction<T, T> { }
declare function tap<T>(next: (value: T) => void): MonoTypeOperatorFunction<T>;
```
### 🙁 Actual behavior
`arg` is inferred as `{}`
### 🙂 Expected behavior
`arg` is inferred as `SomeType` as was consistent in v5.5.4
### Additional information about the issue
(Google note: See http://cl/664889390 for local workaround) | Bug,Help Wanted | low | Major |
2,470,713,440 | flutter | Add a way to query the screen's color space and/or pixel format. | Based on an idea presented in another issue: https://github.com/flutter/flutter/issues/153517#issuecomment-2293891009
<br>
A possible solution would be to add a getter to [`FlutterView`](https://api.flutter.dev/flutter/dart-ui/FlutterView-class.html) and/or [`PlatformDispatcher`](https://api.flutter.dev/flutter/dart-ui/PlatformDispatcher-class.html).
```dart
Set<ColorSpace> get supportedColorSpaces;
```
Example usage:
```dart
Widget build(BuildContext context) {
final FlutterView view = View.of(context);
if (view.supportedColorSpaces.contains(ColorSpace.displayP3)) {
// ...
}
}
``` | a: fidelity,a: images,P3,team-engine,triaged-engine | low | Major |
2,470,733,574 | flutter | et: debug binaries are stripped on linux | ## reproduction
```
./flutter/bin/et build -c host_debug_unopt //flutter/shell/platform/embedder:embedder_unittests
```
## expected result
`nm out/host_debug_unopt/embedder_unittests` prints symbols.
## observed result
`nm out/host_debug_unopt/embedder_unittests` prints no symbols | P2,team-engine,triaged-engine,e: engine-tool | low | Critical |
2,470,772,222 | TypeScript | Make type narrowing for destructured discriminated unions work for more types | ### 🔍 Search Terms
discriminant union, ref, control flow guard, type narrowing
### ✅ Viability Checklist
- [X] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [X] This wouldn't change the runtime behavior of existing JavaScript code
- [X] This could be implemented without emitting different JS based on the types of the expressions
- [X] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [X] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [X] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
```typescript
type Ref<T> = { value: T }
type Data =
| { ready: Ref<true>, payload: string }
| { ready: Ref<false>, payload: null }
declare const data: Data
const { ready, payload } = data
if (ready.value) {
payload // <== currently inferred as "string | null" but should be "string"
}
```
Treat types like `Ref<T>` as a discriminant property in a union or find a way to narrow the type of `payload`
### 📃 Motivating Example
This is a very common use case in the [Vue Pinia](https://pinia.vuejs.org/) state store library; millions of projects use this library and have code like
```typescript
const store = useDataStore()
const { ready, payload } = storeToRefs(store)
```
If we can improve this type narrowing behavior, the narrowed `payload` type can help developers write safer code than before
```typescript
// before, Non-null assertion everywhere
if (ready.value) {
payload.xxxx() // <=== false alert raised by typescript and developers have to use ?. or ! to avoid it
payload?.xxxx() // <=== ?. is unnecessary, generates dead code and brings cognitive confusion
xxxxx(payload!)
}
xxxxx(payload!) // <=== copied from the if block and forget to remove the ! mark, cannot receive alert from typescript
// after, everything works fine
if (ready.value) {
payload.xxxx()
xxxxx(payload)
}
xxxxx(payload) // received the null check protection from typescript
```
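Today the only workaround I know of is to keep the object intact and narrow it with a user-defined type guard, reading fields off the narrowed object instead of the destructured bindings (a sketch, with hypothetical names):

```typescript
interface Ref<T> { value: T }

type Data =
  | { ready: Ref<true>; payload: string }
  | { ready: Ref<false>; payload: null };

// A user-defined guard narrows `data` itself even though `ready` is not a
// literal discriminant — only destructured bindings are left un-narrowed.
function isDataReady(d: Data): d is Extract<Data, { payload: string }> {
  return d.ready.value;
}

function describe(data: Data): string {
  if (isDataReady(data)) {
    return data.payload.toUpperCase(); // payload: string, no `!` needed
  }
  return "not ready";
}
```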
### 💻 Use Cases
[More detailed playground link](https://www.typescriptlang.org/play/?#code/C4TwDgpgBAShBmAeAKgPigXigbygNwEMAbAVwgC4pkoBfAKDtEigBEDgC4CATETHOlCgAnCDxCU4SYMLKpBUMARBEA9j0kJEAZxkBLAHYBzefUbhobDgDlVwLr37YFo8ZqTxi2iPKFKV6tzuiAYkRESmDEyW7AT8VpxijgA+rLG29kkgDNwQAMZEBKJQeaoGulDcsZQJDKXlwDgiWQA0ispqPLT8VRwMevBQABSuvAB0hKQQAJQCfh2BE8RkQlAA9GtQiBg7UIbwEMKi3FAE2lAARLrChkZQqaHhF1AARiSN2gAWqmEnL9DXW50MzwEgGPLAPRlPbaBIOEBDIJpDjTSgnPTnOFZObNYAkYQGSpjUYgJZTYH9QZDDFY8SI2LTWbOIQkskrVbrTbbXb7Q7HU7nF6qVREMSEt4fb6-V7QGRkBT+TrcNnQIQbLY7LC8o4QE5nS6A4z3KCPIjPCVQL4-Ih-AH6YwUuig8GQ6FnbzCYC03iImoMyjuw7Ac7ozGxeE4MwDYasyZkJkKQOe70I3oEaYuLIqjmcjU8gwHHV685y6AAd0+ejynxhUAgAA9IBDda93ntgAByc4GIrCVRllsvPh2T6HdrAT4KhY8bNqrmavYFvkt-VXe13B5hM2tyXW22W9cUoA)
The use case is actually shown in the motivating example.
I've dug into `checker.ts` for some time and here are my findings:
1. `getDiscriminantPropertyAccess` cannot treat `ready` as a discriminant property now because it needs to check `CheckFlags.Discriminant`, which implies `CheckFlags.HasLiteralType`. It's a pretty strict check and, as its name describes, `Ref<T>` has no chance to pass it.
2. I'm not sure whether it's possible to relax the discriminant requirements, but it seems to be a bad idea after some searching. #29110 is what I found, but it's a really old PR, so maybe things have changed now.
3. If we cannot solve it by using discriminant property narrowing: as a newbie to the TypeScript project, I just tried to debug the checker and have another idea
```typescript
interface Ref<T> { value: T }
type ToRefs<T> = { [K in keyof T]: Ref<T[K]> }
function toRefs<T>(o: T): ToRefs<T> {
return {} as any
}
interface DataPrepared {
ready: true
payload: string
}
interface DataNotPrepared {
ready: false
payload: null
}
type Data = DataPrepared | DataNotPrepared
declare const data: Data
const { ready, payload } = toRefs(data)
function isDataReady(d: Data): d is DataPrepared {
return d.ready.value
}
if (isDataReady(data)) {
ready.value // <=== inferred as boolean but should be true
payload.value // <=== inferred as "string | null" but should be string
}
function assertDataReady(d: Data): asserts d is DataPrepared {}
if (ready.value) {
assertDataReady(data)
ready.value // <=== inferred as true which is expected but it's narrowed by other code path
payload.value // <=== inferred as "string | null" but should be string
}
```
Can we use type predicates or assertion functions to add more information to `payload`'s flow list? If it's possible, maybe we can do the following steps while examining `payload`:
1. check `payload`'s symbol and whether its declaration is a `BindingPattern`
2. check the flow list for `payload` and whether the narrowed `data` is the initializer of `payload`'s declaration
3. narrow `payload` based on the narrowed `data`
4. maybe it's gibberish, but I hope it helps
| Help Wanted,Possible Improvement | low | Critical |
2,470,796,484 | pytorch | [Flex Attention] Ima Repro for non divisible seqlen + FlexDecode kernel | # Summary
## Error
This will raise NaNs, and compute-sanitizer shows that we are loading from invalid memory. This example sets the kernel option `is_divisible = False`, and it shouldn't.
## Repro
```python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask
def generate_causal_offset(offset: torch.Tensor):
def causal_offset_mask(b, h, q_idx, kv_idx):
return (offset + q_idx) >= kv_idx
return causal_offset_mask
prefill = 128
max_seq_new_tokens = 100
B, H, HEAD_DIM = 1, 16, 64
start_offset = torch.tensor(prefill, device="cuda", dtype=torch.int32)
query = torch.rand(B, 1, H, HEAD_DIM, device="cuda", dtype=torch.float16).transpose(
1, 2
)
TRANSPOSE = True
for i in range(max_seq_new_tokens):
if TRANSPOSE:
key = torch.rand(
B, prefill + i, H, HEAD_DIM, device="cuda", dtype=torch.float16
).transpose(1, 2)
value = torch.rand(
B, prefill + i, H, HEAD_DIM, device="cuda", dtype=torch.float16
).transpose(1, 2)
else:
key = torch.rand(B, H, prefill + i, HEAD_DIM, device="cuda", dtype=torch.float16)
value = torch.rand(B, H, prefill + i, HEAD_DIM, device="cuda", dtype=torch.float16)
# create a causal mask
offset = start_offset + i
causal_offset_mask = generate_causal_offset(offset)
block_mask = create_block_mask(causal_offset_mask, 1, 1, 1, key.shape[-2])
flex_compile = torch.compile(flex_attention)
flex_eager = flex_attention
out_compile = flex_compile(query, key, value, block_mask=block_mask)
out_eager = flex_eager(query, key, value, block_mask=block_mask)
print("out_compile contains nan", torch.isnan(out_compile).any())
print("out_eager contains nan", torch.isnan(out_eager).any())
try:
torch.testing.assert_close(out_compile, out_eager, atol=1e-2, rtol=0.0)
except AssertionError as e:
print(f"Failed at {i}")
raise e
```
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @yanboliang @BoyuanFeng | triaged,oncall: pt2,module: higher order operators,module: pt2-dispatcher,module: flex attention | low | Critical |
2,470,803,166 | ui | [bug]: Stuck Installing Tooltip Component | ### Describe the bug
The tooltip component consistently gets stuck during installation. Running `npx shadcn-ui@latest add tooltip` hangs indefinitely on the `Installing tooltip...` message. I'm using Vite + React. All other components work fine.
### Affected component/components
Tooltip
### How to reproduce
1. Run `npx shadcn-ui@latest add tooltip`
2. Nothing happens...
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Windows 11 Pro + Snapdragon X Elite
Latest Vite and node 20.14.0
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,470,807,880 | TypeScript | Design Meeting Notes, 8/16/2024 |
# API for exposing inferred type arguments of calls
https://github.com/Microsoft/TypeScript/issues/59637
* Is the proposed API correct? A signature is the target of the call, but is independent and the specific type arguments correspond to the signature itself.
* Is `Signature -> Type[]` right? Or do you need something else? You have type arguments of signatures, but there are other type parameters in question.
* What is it?
* Whatever the language service does. Go to definition can be used for this.
* Go to definition isn't a stable API for this.
* If you have a way to get the signature as it is, you can get the type arguments.
* Can basically use `getResolvedSignature`, grab the type arguments, followed by `instantiateTypes` with the original signature's mapper? As suggested in the issue.
* Yes, that sounds like the suggestion for the API.
# Flag for Banning TypeScript-Specific Runtime Constructs
* #2261
* #59601
* Long-standing question: how can we ban constructs that have no runtime impact (or are not trivially erasable)?
* Possibly more impetus for this now that Node has `--experimental-strip-types` which replaces type constructs with whitespace.
* What would it be called?
* `--noTranspiledFeatures`
* We transpile ESXXXX features.
* `--typeSyntaxOnly`
* `--disallowRuntimeSyntax`
* `--noLegacySyntax`
* `--noRegerts`
* Does this flag imply `--verbatimModuleSyntax` and `--isolatedModules`?
* The use-cases overlap a lot. We don't know why you would turn this flag on without those.
* People seem to like `typeSyntaxOnly` the best.
* This sort of thing is always weird because implementers are free to add new TS-specific constructs.
* Well this is also about staying "pure" with ECMAScript.
* Which are clearly not okay?
* `class C { constructor(public prop: number) {} }`
* `enum` of any form
* `namespace` of any form
* Are the following allowed?
* `import foo = require("foo");`
* `import foo = bar.baz;`
* `export =`
* Some of these can't just get erased to whitespace!
* This is all wacky because the only reason we're revisiting this is `--experimental-strip-types`, which seems motivated because Node's sourcemap support is slow.
* This is also speculative for a feature that hasn't shipped in stable.
* Not fully true, there is also future-proofing against syntax conflicts in ECMAScript as a concern.
* Lots of this could just be a lint rule which enforces ideological purity.
* Some mixed feelings on whether this is a good thing to do, but there is a feel that this is not what we would do today.
* Presumably this would be only in `.ts`, not `.d.ts`
* Feels bad that we'd disallow enums, what is the substitute there?
* Enums being nominal, having individual type members, etc. are all nice.
* `const foo = {...} as enum`.
* Wouldn't be a full substitute.
```ts
const x = {
a: 1,
b: a, // can't do this
} as enum;
```
* What about `import foo = require(...)`?
* Could support `const foo = require(...)` in some way, but would need a flag since a lot of code assumes this is `any`.
* Is this really necessary? The hope is ESM/CJS interop will save us all.
* A flag to add this flag? Just turn off this flag if you really need that or write `const m = require("m") as typeof import("m")`.
| Design Notes | low | Major |
2,470,808,541 | vscode | cursor placement of TSX tags does not work with some tailwind properties | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.92.2
- OS Version: Ubuntu 24.04 LTS
When attempting to press `enter`, the editor usually auto-indents the tag properly, but this does not happen when any property containing the "/" character is included.
Steps to Reproduce:
1. copy and paste this code into a `.tsx` file
```jsx
import React from "react";
const AuthDesktop = () => {
return (
<main className="w-full h-full flex items-center justify-center">
<div className="w-1/2"></div>
<div className="w-[50%]"></div>
</main>
);
};
export default AuthDesktop;
```
2. Attempt to press enter inside the tag with className `w-[50%]`; the cursor will be placed correctly, but that is not the case with the tag with className `w-1/2`.
Below is an example of what happens.
[Screencast from 2024-08-16 22-06-02.webm](https://github.com/user-attachments/assets/d3e42985-6bc6-4b3e-9484-d75288fe2b02)
| bug,help wanted,javascript | low | Critical |
2,470,818,366 | rust | rustdoc: Weird display of `<ToBytes for Simd<_, _>>::Bytes` associated type | Implementations of `ToBytes` for various portable SIMD vectors are rendered incorrectly:
```
type Bytes = Simd<u8, core::::core_simd::to_bytes::{impl#55}::Bytes::{constant#0}>
```
https://doc.rust-lang.org/nightly/std/simd/prelude/struct.Simd.html#impl-ToBytes-for-Simd%3Cf32,+8%3E
(looks the same on stable: https://doc.rust-lang.org/stable/std/simd/prelude/struct.Simd.html#impl-ToBytes-for-Simd%3Cf32,+8%3E)
It’s probably reproducible with some other traits and types, but I was not able to do it. Notably, this:
```rust
pub trait Foo {
type Bar;
}
struct Arr<T, const N: usize>(T);
macro_rules! gen_impl {
($a:literal, [$($b:literal),*]) => {
$(impl Foo for [u8; $b] {
type Bar = Arr<u8, { $a * $b }>;
})*
};
}
gen_impl!(2, [3, 4]);
```
renders as
```
type Bar = Arr<u8, { $a * $b }>
```
for me (stable + `rustc 1.82.0-nightly (64ebd39da 2024-08-03)`), which is maybe not perfect, but isn’t that broken. | T-rustdoc,C-bug,A-const-generics,A-cross-crate-reexports | low | Critical |
2,470,820,419 | react | Bug: Memory leak of old state until next render | React version: 18.3.1
## Summary
It appears that state which is no longer being used can still be retained by some `__reactFiber$...` internals, leading to arbitrarily large memory leaks depending on what other objects are referenced by the state.
Possibly related issues:
- https://github.com/facebook/react/issues/14380
- https://github.com/facebook/react/issues/13702
- https://github.com/facebook/react/issues/14057
## Steps To Reproduce
1. Open the following repro example in Chrome (click Open Preview In New Tab): https://stackblitz.com/edit/react-leak-repro-1?file=src%2FApp.tsx
2. Open the Chrome Memory inspector
3. Click the "Mount" button, take a heap snapshot, and verify the LeakyActions object is present
4. Click the "Unmount" button, take a heap snapshot, and see that **LeakyActions is still present**
5. Click the "Increment" button, take a heap snapshot, and verify that LeakyActions is now released.
https://github.com/user-attachments/assets/fbfcf317-9eb3-431e-a887-cf8126d6aad8
Link to code example:
https://stackblitz.com/edit/react-leak-repro-1?file=src%2FApp.tsx,vite.config.ts
This repro case may be slightly more complicated than necessary, but was reduced from a real-world use case in our app.
```ts
import { useImperativeHandle, useState } from 'react';
class LeakyActions {
foo() {
console.log('real foo');
}
}
function WorkspaceAdapter({ r }: { r: React.Ref<LeakyActions> }) {
useImperativeHandle(r, () => new LeakyActions(), []);
return null;
}
function App() {
const [adapterMounted, setAdapterMounted] = useState(false);
const [actions, setActions] = useState<LeakyActions | null>(null);
const [counter, setCounter] = useState(0);
return (
<>
<button onClick={() => setAdapterMounted((x) => !x)}>
{adapterMounted ? 'Unmount' : 'Mount'}
</button>
<div>Have actions: {String(actions != undefined)}</div>
{adapterMounted && <WorkspaceAdapter r={setActions} />}
<div>Counter: {counter}</div>
<button onClick={() => setCounter((c) => c + 1)}>Increment</button>
</>
);
}
export default App;
```
## The current behavior
The object is retained by the `__reactFiber$...` property of the `<button>` element:
<img width="1181" alt="image" src="https://github.com/user-attachments/assets/49b1660e-537a-433a-8e32-b209ce1443e5">
In our app, this behavior leads to a huge graph of objects from a previous screen/route of the app being retained when switching away from that screen.
## The expected behavior
The state which is no longer current should be released. | Status: Unconfirmed | low | Critical |
2,470,846,346 | vscode | window.innerHeight is way larger than the actual window height on macOS | Type: <b>Bug</b>
1. First install `ipykernel` and `anywidget` (`!pip install anywidget`)
2. Create a notebook file `test.ipynb`
3. Create the following widget and inspect the console.log statement
```py
import anywidget
class CanvasWidget(anywidget.AnyWidget):
_esm = """
function render({ model, el }) {
const canvas = document.createElement("canvas");
const resize = () => {
canvas.height = window.innerHeight * window.devicePixelRatio;
canvas.width = window.innerWidth * window.devicePixelRatio;
console.log(`Canvas size is ${canvas.width} (w) x ${canvas.height} (h)`);
}
window.addEventListener('resize', resize);
el.appendChild(canvas);
}
export default { render };
"""
CanvasWidget()
```
In my case the reported canvas size is `1940 (w) x 33554426 (h)`. While the width is correct, the height is completely out of bounds! I don't think there exists a monitor with a display panel 33 million pixels tall.
When I manually get `window.innerHeight`, `window.innerWidth`, and `window.devicePixelRatio` in the web console, the values are `1091`, `1118`, and `2`.

Where is the incorrect height of `33554426` coming from?! And is there a way around this?
**tl/dr** I'm the developer of a widget that relies on the window size to determine the appropriate size for a full-sized canvas element. However, the incorrect height of `33554426` is causing the widget to break because no GPU can work with a texture that large (which is what ultimately happens in the widget)
VS Code version: Code 1.92.2 (Universal) (fee1edb8d6d72a0ddff41e5f71a671c23ed924b9, 2024-08-14T17:29:30.058Z)
OS version: Darwin arm64 23.2.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M1 Max (10 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|4, 6, 8|
|Memory (System)|32.00GB (0.71GB free)|
|Process Argv||
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (15)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-eslint|dba|3.0.10
gitlens|eam|15.3.0
EditorConfig|Edi|0.16.4
vsc-material-theme|Equ|2.4.2
debugpy|ms-|2024.10.0
isort|ms-|2023.10.1
python|ms-|2024.12.3
vscode-pylance|ms-|2024.8.1
jupyter|ms-|2024.7.0
jupyter-keymap|ms-|1.1.2
jupyter-renderers|ms-|1.0.19
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
cpptools|ms-|1.21.6
sublime-keybindings|ms-|4.1.10
(1 theme extensions excluded)
</details>
<!-- generated by issue reporter --> | debt,polish,notebook | low | Critical |
2,470,851,916 | PowerToys | Keyboard layout change not being able to return to normal | ### Microsoft PowerToys version
0.83.0
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
Restarting the computer with a changed key layout
### ✔️ Expected Behavior
For nothing to change
### ❌ Actual Behavior
Even though the app showed that I have the custom layout disabled, it still wouldn't turn off. Even after removing the app from the computer, the issue is still there
### Other Software
none | Issue-Bug,Needs-Triage | low | Minor |
2,470,865,907 | vscode | Long inline chat workspace edits are annoying to accept | 1. Ask for some `/tests`
2. Get a long diff and end in a state like this:
<img width="614" alt="image" src="https://github.com/user-attachments/assets/f327c57d-0154-4fb5-847f-0121f123ed0e">
3. You have to scroll back to the top to hit the ✔️ to accept the suggestion. | feature-request,inline-chat,panel-chat | low | Minor |
2,470,880,851 | pytorch | [HOP] Add BC tests for hop in export | ### 🐛 Describe the bug
According to the current design, we can support BC when adding a new default kwarg to a HOP's signature. Add some BC tests in export.
We also need to make sure default kwargs are not captured in the graph when their default values are used.
### Versions
main
cc @ezyang @chauhang @penguinwu @zou3519 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @bdhirsh | triaged,oncall: pt2,module: higher order operators,oncall: export,module: pt2-dispatcher | low | Critical |
2,470,906,338 | go | proposal: log/slog: add ParseLevel | ### Proposal Details
Level and LevelVar have UnmarshalText methods, but there is no direct way to parse a string into a Level.
UnmarshalText is fine if the level is a flag, but not if it's, say, a query param. For that one has to write the awkward code:
```go
slogLevel.UnmarshalText([]byte(r.FormValue("l")))
```
Add `ParseLevel(string) (Level, error)`. | Proposal | low | Critical |
2,470,909,264 | kubernetes | Bogus prohibition of `uniqueItems` in JSON schema in CRD | ### What happened?
https://github.com/kubernetes/apiextensions-apiserver/blob/v0.31.0/pkg/apis/apiextensions/validation/validation.go#L1007-L1009 has been there forever but makes no sense to me. Checking that constraint can be done in O(N) time and O(N) space. And I _can_ impose a functionally equivalent constraint on a slice by setting `+listType=map` and using the corresponding (collection of) `+listMapKey` settings.
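For context, a sketch of that O(N) uniqueness check in Go (illustrative only: items are plain strings here, whereas real JSON array items would first need a canonical encoding to serve as map keys):

```go
package main

import "fmt"

// hasDuplicates reports whether items contains a repeated element,
// using a set for O(N) time and O(N) extra space — the cost a
// uniqueItems check would incur.
func hasDuplicates(items []string) bool {
	seen := make(map[string]struct{}, len(items))
	for _, it := range items {
		if _, ok := seen[it]; ok {
			return true
		}
		seen[it] = struct{}{}
	}
	return false
}

func main() {
	fmt.Println(hasDuplicates([]string{"a", "b", "a"})) // true
	fmt.Println(hasDuplicates([]string{"a", "b", "c"})) // false
}
```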
### What did you expect to happen?
I expected that I can use `uniqueItems`.
### How can we reproduce it (as minimally and precisely as possible)?
_I_ produced it while using kubebuilder. I put the `// +kubebuilder:validation:UniqueItems=true` comment on a field holding a slice (the `Destinations` field in `BindingSpec` in https://github.com/kubestellar/kubestellar/pull/2405).
### Anything else we need to know?
🤷
### Kubernetes version
<details>
This has been there _forever_.
</details>
### Cloud provider
<details>
N/A
</details>
### OS version
<details>
N/A
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/api-machinery,triage/accepted | low | Major |
2,470,949,929 | flutter | Devicelab - android_semantics_integration_test failed. (Incorrect semantics) | ### Steps to reproduce
Environment:
- Device: Pixel 8 Pro (Android 15.4.2 beta) - Physical device
- Flutter Version: 3.24.0-1.0.pre.607 (Channel master) OR Flutter stable 3.24
- Android Studio Version: Koala Feature Drop | 2024.1.2 RC 1
Steps:
1. Download flutter from [GitHub - flutter/flutter: Flutter makes it easy and fast to build beautiful apps for mobile and beyond](https://github.com/flutter/flutter) .
2. Go to `dev/devicelab/lib/framework/utils.dart` and search for the startProcess function and inside it add these 2 lines after line 299:
```
newEnvironment['FLUTTER_XCODE_DEVELOPMENT_TEAM']='PUT_YOUR_DEV_TEAM_HERE';
newEnvironment['ANDROID_SDK_ROOT']='PATH_TO_ANDROID_SDK';
```
3. Go to: dev/devicelab and execute the command:
```
../../bin/cache/dart-sdk/bin/dart bin/test_runner.dart test -t android_semantics_integration_test >> logs.txt
```
4. Check the logs.txt file to see the results
### Expected results
The test should finish successfully.
### Actual results
Consistently failed across all 3 executions.
flaky: false
-----------
Fragments of the logs below and full logs on the logs section:
<details open><summary>Exception - Text Field</summary>
```console
[2024-08-16 10:37:38.579626] [STDOUT] stdout: ══╡ EXCEPTION CAUGHT BY FLUTTER TEST FRAMEWORK ╞════════════════════════════════════════════════════
[2024-08-16 10:37:38.579655] [STDOUT] stdout: The following TestFailure was thrown running a test:
[2024-08-16 10:37:38.579674] [STDOUT] stdout: Expected: AndroidSemanticsNode with className: android.widget.EditText with actions:
[2024-08-16 10:37:38.579735] [STDOUT] stdout: [AndroidSemanticsAction.click, AndroidSemanticsAction.paste, AndroidSemanticsAction.setSelection,
[2024-08-16 10:37:38.579757] [STDOUT] stdout: AndroidSemanticsAction.setText] with ignoredActions: [AndroidSemanticsAction.accessibilityFocus,
[2024-08-16 10:37:38.579774] [STDOUT] stdout: AndroidSemanticsAction.clearAccessibilityFocus] with flag isEditable: true with flag isFocusable:
[2024-08-16 10:37:38.580312] [STDOUT] stdout: true with flag isFocused: true with flag isPassword: false
[2024-08-16 10:37:38.580373] [STDOUT] stdout: Actual: AndroidSemanticsNode:<{rect: {top: 106.22222137451172, left: 0.0, bottom:
[2024-08-16 10:37:38.580418] [STDOUT] stdout: 154.22222900390625, width: 1008, right: 448.0, height: 108}, contentDescription: null, flags:
[2024-08-16 10:37:38.580439] [STDOUT] stdout: {isHeading: false, isEditable: true, isPassword: false, isEnabled: true, isLongClickable: false,
[2024-08-16 10:37:38.580458] [STDOUT] stdout: isDismissible: false, isCheckable: false, isChecked: false, isFocused: true, isFocusable: true},
[2024-08-16 10:37:38.580495] [STDOUT] stdout: className: android.widget.EditText, liveRegion: 1, id: 16, text: null, actions: [131072, 2097152,
[2024-08-16 10:37:38.580516] [STDOUT] stdout: 16, 128]}>
[2024-08-16 10:37:38.580549] [STDOUT] stdout: Which: Expected actions: [AndroidSemanticsAction.click, AndroidSemanticsAction.paste,
[2024-08-16 10:37:38.580569] [STDOUT] stdout: AndroidSemanticsAction.setSelection, AndroidSemanticsAction.setText]
[2024-08-16 10:37:38.580782] [STDOUT] stdout: Actual actions: [AndroidSemanticsAction.click, AndroidSemanticsAction.setSelection,
[2024-08-16 10:37:38.580819] [STDOUT] stdout: AndroidSemanticsAction.setText]
[2024-08-16 10:37:38.580836] [STDOUT] stdout: Unexpected: {}
[2024-08-16 10:37:38.582741] [STDOUT] stdout: Missing: {AndroidSemanticsAction.paste}
[2024-08-16 10:37:38.582823] [STDOUT] stdout:
[2024-08-16 10:37:38.582849] [STDOUT] stdout: When the exception was thrown, this was the stack:
[2024-08-16 10:37:38.582871] [STDOUT] stdout: #4 main.<anonymous closure>.<anonymous closure>.<anonymous closure> (file:///Users/qa/Documents/flutter/dev/integration_tests/android_semantics_testing/integration_test/main_test.dart:107:9)
[2024-08-16 10:37:38.582895] [STDOUT] stdout: <asynchronous suspension>
[2024-08-16 10:37:38.582975] [STDOUT] stdout: #5 testWidgets.<anonymous closure>.<anonymous closure> (package:flutter_test/src/widget_tester.dart:189:15)
[2024-08-16 10:37:38.583003] [STDOUT] stdout: <asynchronous suspension>
[2024-08-16 10:37:38.583022] [STDOUT] stdout: #6 TestWidgetsFlutterBinding._runTestBody (package:flutter_test/src/binding.dart:1032:5)
[2024-08-16 10:37:38.583042] [STDOUT] stdout: <asynchronous suspension>
[2024-08-16 10:37:38.583094] [STDOUT] stdout: <asynchronous suspension>
[2024-08-16 10:37:38.583117] [STDOUT] stdout: (elided one frame from package:stack_trace)
[2024-08-16 10:37:38.583135] [STDOUT] stdout:
[2024-08-16 10:37:38.583152] [STDOUT] stdout: This was caught by the test expectation on the following line:
[2024-08-16 10:37:38.583193] [STDOUT] stdout: file:///Users/qa/Documents/flutter/dev/integration_tests/android_semantics_testing/integration_test/main_test.dart line 107
[2024-08-16 10:37:38.584334] [STDOUT] stdout: The test description was:
[2024-08-16 10:37:38.584356] [STDOUT] stdout: TextField has correct Android semantics
[2024-08-16 10:37:38.585016] [STDOUT] stdout: ════════════════════════════════════════════════════════════════════════════════════════════════════
```
</details>
<details open><summary>Dropdown Menu</summary>
```console
[2024-08-16 10:37:50.899477] [STDOUT] stdout: ══╡ EXCEPTION CAUGHT BY FLUTTER TEST FRAMEWORK ╞════════════════════════════════════════════════════
[2024-08-16 10:37:50.899544] [STDOUT] stdout: The following TestFailure was thrown running a test:
[2024-08-16 10:37:50.899575] [STDOUT] stdout: Expected: AndroidSemanticsNode with className: android.view.View with actions:
[2024-08-16 10:37:50.899603] [STDOUT] stdout: [AndroidSemanticsAction.click] with ignoredActions: [AndroidSemanticsAction.accessibilityFocus,
[2024-08-16 10:37:50.899632] [STDOUT] stdout: AndroidSemanticsAction.clearAccessibilityFocus] with flag isChecked: false with flag isEnabled: true
[2024-08-16 10:37:50.899658] [STDOUT] stdout: with flag isFocusable: true
[2024-08-16 10:37:50.899725] [STDOUT] stdout: Actual: AndroidSemanticsNode:<{rect: {top: 516.0, left: 152.88888549804688, bottom: 564.0, width:
[2024-08-16 10:37:50.899756] [STDOUT] stdout: 248, right: 263.1111145019531, height: 108}, contentDescription: Item 1, flags: {isHeading: false,
[2024-08-16 10:37:50.899783] [STDOUT] stdout: isEditable: false, isPassword: false, isEnabled: true, isLongClickable: false, isDismissible: false,
[2024-08-16 10:37:50.899813] [STDOUT] stdout: isCheckable: false, isChecked: false, isFocused: true, isFocusable: true}, className:
[2024-08-16 10:37:50.899851] [STDOUT] stdout: android.widget.Button, liveRegion: 0, id: 25, text: null, actions: [16, 128]}>
[2024-08-16 10:37:50.899876] [STDOUT] stdout: Which: Expected className: android.view.View
[2024-08-16 10:37:50.899900] [STDOUT] stdout: Dropdown Item 1 doesn't have the right semantics
[2024-08-16 10:37:50.899923] [STDOUT] stdout:
[2024-08-16 10:37:50.899991] [STDOUT] stdout: When the exception was thrown, this was the stack:
[2024-08-16 10:37:50.900021] [STDOUT] stdout: #4 main.<anonymous closure>.<anonymous closure>.<anonymous closure> (file:///Users/qa/Documents/flutter/dev/integration_tests/android_semantics_testing/integration_test/main_test.dart:482:13)
[2024-08-16 10:37:50.900439] [STDOUT] stdout: <asynchronous suspension>
[2024-08-16 10:37:50.900529] [STDOUT] stdout: #5 testWidgets.<anonymous closure>.<anonymous closure> (package:flutter_test/src/widget_tester.dart:189:15)
[2024-08-16 10:37:50.900562] [STDOUT] stdout: <asynchronous suspension>
[2024-08-16 10:37:50.900586] [STDOUT] stdout: #6 TestWidgetsFlutterBinding._runTestBody (package:flutter_test/src/binding.dart:1032:5)
[2024-08-16 10:37:50.902863] [STDOUT] stdout: <asynchronous suspension>
[2024-08-16 10:37:50.902957] [STDOUT] stdout: <asynchronous suspension>
[2024-08-16 10:37:50.902985] [STDOUT] stdout: (elided one frame from package:stack_trace)
[2024-08-16 10:37:50.903005] [STDOUT] stdout:
[2024-08-16 10:37:50.903048] [STDOUT] stdout: This was caught by the test expectation on the following line:
[2024-08-16 10:37:50.904418] [STDOUT] stdout: file:///Users/qa/Documents/flutter/dev/integration_tests/android_semantics_testing/integration_test/main_test.dart line 482
[2024-08-16 10:37:50.904475] [STDOUT] stdout: The test description was:
[2024-08-16 10:37:50.904495] [STDOUT] stdout: Dropdown Menu has correct Android semantics
[2024-08-16 10:37:50.907067] [STDOUT] stdout: ════════════════════════════════════════════════════════════════════════════════════════════════════
```
</details>
### Code sample
<details open><summary>Code sample</summary>
```dart
[Paste your code here]
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
Full Logs: [android_semantics_integration_test.txt](https://github.com/user-attachments/files/16641468/android_semantics_integration_test.txt)
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel master, 3.24.0-1.0.pre.607, on macOS 14.5 23F79 darwin-arm64, locale en-UY)
• Flutter version 3.24.0-1.0.pre.607 on channel master at /Users/qa/Documents/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 19fdece6c1 (5 hours ago), 2024-08-16 11:11:37 -0400
• Engine revision d5bf3afc60
• Dart version 3.6.0 (build 3.6.0-146.0.dev)
• DevTools version 2.38.0
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/qa/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• ANDROID_SDK_ROOT = /Users/qa/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode-beta.app/Contents/Developer
• Build 16B5001e
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
:hammer: https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
:hammer: https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.87.2)
• VS Code at /Users/qa/Downloads/Visual Studio Code.app/Contents
• Flutter extension version 3.94.0
[✓] Connected device (5 available)
• Pixel 8 Pro (mobile) • 3B101FDJG000F0 • android-arm64 • Android 15 (API 35)
• Hairon's iPhone (mobile) • 00008130-001260E82E04001C • ios • iOS 17.6.1 21G93
• macOS (desktop) • macos • darwin-arm64 • macOS 14.5 23F79 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.5 23F79 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 127.0.6533.120
[✓] Network resources
• All expected network resources are available.
```
</details>
| platform-android,framework,a: accessibility,has reproducible steps,P2,from: manual-qa,found in release: 3.24,team-accessibility,triaged-accessibility | low | Critical |
2,471,066,162 | langchain | OpenAI refusals for structured output not added to `AIMessageChunk.additional_kwargs` when a dict is passed as the schema to `ChatOpenAI.with_structured_output` | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from pprint import pprint as pp
from dotenv import load_dotenv
from getpass import getpass
from langchain_core.messages import BaseMessage
from langchain_core.runnables import Runnable
from langchain_openai import ChatOpenAI
from pydantic import BaseModel
load_dotenv()
if not os.getenv("OPENAI_API_KEY"):
print("Please enter your OpenAI API key")
os.environ["OPENAI_API_KEY"] = getpass()
class Step(BaseModel):
explanation: str
output: str
class Reasoning(BaseModel):
steps: list[Step]
final_answer: str
llm = ChatOpenAI(model="gpt-4o-2024-08-06", temperature=0.7)
model = ChatOpenAI(model="gpt-4o-2024-08-06")
chain_of_thought = model.with_structured_output(
Reasoning, method="json_schema", include_raw=True
)
messages = [
{
"role": "system",
"content": "Guide the user through the solution step by step. If something is unethical or illegal, refuse to answer.",
},
{
"role": "user",
"content": """How can I commit murder with only one toothbrush and a pencil sharpener in prison?""",
},
]
def stream_chunks(chain_of_thought: Runnable, messages: list[BaseMessage] | list[dict]):
try:
for chunk in chain_of_thought.stream(messages):
print(chunk)
except OpenAIRefusalError:
pass
# correctly adds `refusal` property to the response message
stream_chunks(chain_of_thought, messages)
```
```text
langchain_openai/chat_models/base.py:539: UserWarning: Streaming with Pydantic response_format not yet supported.
warnings.warn("Streaming with Pydantic response_format not yet supported.")
{'raw': AIMessageChunk(content='', additional_kwargs={'parsed': None, 'refusal': "I'm sorry, I cannot assist with that request."}, response_metadata={'finish_reason': 'stop', 'logprobs': None}, id='run-04d78dd1-9b2a-4d6d-8042-7a3a63097e8f', usage_metadata={'input_tokens': 53, 'output_tokens': 11, 'total_tokens': 64})}
{'parsing_error': None}
```
```python
resp_format_as_dict = {
"name": "Reasoning",
"description": "Reason through steps to explain a solution.",
"parameters": {
"type": "object",
"properties": {
"steps": {
"items": {"$ref": "#/$defs/Step"},
"title": "Steps",
"type": "array",
},
"final_answer": {"title": "Final Answer", "type": "string"},
},
"required": ["steps", "final_answer"],
"$defs": {
"Step": {
"properties": {
"explanation": {"title": "Explanation", "type": "string"},
"output": {"title": "Output", "type": "string"},
},
"required": ["explanation", "output"],
"title": "Step",
"type": "object",
"additionalProperties": False,
}
},
"title": "Reasoning",
"additionalProperties": False,
},
"strict": True,
}
chain_of_thought = model.with_structured_output(
resp_format_as_dict, method="json_schema", include_raw=True
)
# doesn't add the `refusal` property to the response message
stream_chunks(chain_of_thought, messages)
```
```text
{'raw': AIMessageChunk(content='', response_metadata={'finish_reason': 'stop', 'model_name': 'gpt-4o-2024-08-06', 'system_fingerprint': 'fp_2a322c9ffc'}, id='run-1d23dbeb-b72b-42ac-8bbf-e437bc17fe0f')}
{'parsing_error': None}
```
```python
messages[-1]["content"] = "What is 1 + 17 ^2?"
# correctly adds `refusal` property (None) to the response message
stream_chunks(chain_of_thought, messages)
```
```text
langchain_openai/chat_models/base.py:539: UserWarning: Streaming with Pydantic response_format not yet supported.
warnings.warn("Streaming with Pydantic response_format not yet supported.")
{'raw': AIMessageChunk(content='{"steps":[{"explanation":"First, calculate the exponentiation. Raise 17 to the power of 2.","output":"17 ^ 2 = 289"},{"explanation":"Next, add 1 to the result obtained from the exponentiation.","output":"1 + 289"}],"final_answer":"290"}', additional_kwargs={'parsed': Reasoning(steps=[Step(explanation='First, calculate the exponentiation. Raise 17 to the power of 2.', output='17 ^ 2 = 289'), Step(explanation='Next, add 1 to the result obtained from the exponentiation.', output='1 + 289')], final_answer='290'), 'refusal': None}, response_metadata={'finish_reason': 'stop', 'logprobs': None}, id='run-6f2f2c55-46bd-4a05-8830-4ef84ddfe402', usage_metadata={'input_tokens': 46, 'output_tokens': 64, 'total_tokens': 110})}
{'parsed': Reasoning(steps=[Step(explanation='First, calculate the exponentiation. Raise 17 to the power of 2.', output='17 ^ 2 = 289'), Step(explanation='Next, add 1 to the result obtained from the exponentiation.', output='1 + 289')], final_answer='290')}
{'parsing_error': None}
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
- When calling `ChatOpenAI.with_structured_output` to make a request to the OpenAI `/chat/completions` endpoint with the structured output (`json_schema`) `response_format`, if we pass a dict as the `schema`, then no `refusal` property is added to the response message's `additional_kwargs`.
- The `refusal` is only added if we pass a Pydantic model as the schema.
- Passing a dict returns the correct output in typical cases where no refusal is generated by the LLM.
- This appears to be related to how `_oai_structured_outputs_parser` is only used when the schema is a Pydantic model, and a `JsonOutputParser` is otherwise used.
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:14:38 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6020
> Python Version: 3.11.4 (main, Jul 27 2023, 23:35:36) [Clang 14.0.3 (clang-1403.0.22.14.1)]
Package Information
-------------------
> langchain_core: 0.2.32
> langchain: 0.2.14
> langchain_community: 0.2.12
> langsmith: 0.1.93
> langchain_openai: 0.1.21
> langchain_text_splitters: 0.2.2
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.9.5
> async-timeout: Installed. No version info available.
> dataclasses-json: 0.6.7
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.40.6
> orjson: 3.10.6
> packaging: 23.2
> pydantic: 2.8.2
> PyYAML: 6.0.1
> requests: 2.32.3
> SQLAlchemy: 2.0.31
> tenacity: 8.5.0
> tiktoken: 0.7.0
> typing-extensions: 4.12.2 | 🤖:bug,investigate,🔌: openai | low | Critical |
2,471,092,527 | godot | Vulkan ignores VSync setting and leaks memory on Windows. | ### Tested versions
v4.3.stable.steam [77dcf97d8]
(These don't have the D3D12 renderer, but the Vulkan renderer still has the same behaviour.)
v4.2.2.stable.official [15073afe3]
v4.1.4.stable.official [fe0e8e557]
v4.0.4.stable.official [fc0b241c9]
v4.0.stable.official [92bee43ad]
### System information
Godot v4.3.stable (77dcf97d8) - Windows 10.0.22631 (Mini11 AIO 23H2 v1 Neptune) - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3050 Laptop GPU (NVIDIA; 32.0.15.6081) - AMD Ryzen 7 5800H with Radeon Graphics (16 Threads)
### Issue description
With the Vulkan renderer, even an empty project ignores the VSync setting and starts leaking memory.

When using D3D12, this doesn't happen.

### Steps to reproduce
- Go to the NVIDIA Control Panel.
- Set "Use my preference emphasising" to "Performance".
- Make a new empty project.
- Add an empty 2D Scene and run the project.
### Minimal reproduction project (MRP)
N/A | bug,platform:windows,topic:core,topic:rendering,performance | low | Major |
2,471,096,683 | flutter | `flutter drive` should emit a specific exit code (at minimum) if an emulator or device goes offline | In https://github.com/flutter/flutter/issues/153445 we found that Android 35 (the upcoming release) is too unstable on emulators to be used on pre- or post-submit infrastructure (see our [CI changes here](https://github.com/flutter/flutter/issues/153445)). In an informal conversation between @zanderso @reidbaker @johnmccutchan and @matanlurey we talked about our inability to determine root causes.
@johnmccutchan wrote:
> I guess I'm wondering why we can't have finer grained signals of test failures?
>
> Like if an emulator is crashing I'd like that to be flagged as something distinct than a flakey test. Because it wasn't the test that flaked actually, it was our environment.
@zanderso wrote:
> Yeah, I was thinking like `flutter drive` would have different error exit codes for a test failure vs. failures for other reasons like the emulator/device disappearing.
So, this issue is tracking _doing this_.
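One possible shape for such a convention, sketched in Python for brevity (every name and value below is invented for illustration, not a proposed interface):

```python
# Hypothetical exit-code convention for `flutter drive`; all names and
# values here are assumptions for illustration only.
from enum import IntEnum

class DriveExitCode(IntEnum):
    SUCCESS = 0
    TEST_FAILURE = 1    # the test itself failed
    TOOL_ERROR = 64     # tool misconfiguration, bad arguments
    DEVICE_LOST = 70    # emulator/device went offline mid-run

def classify(outcome: str) -> DriveExitCode:
    """Map a run outcome to a distinct exit code so CI can tell failure modes apart."""
    mapping = {
        "passed": DriveExitCode.SUCCESS,
        "failed": DriveExitCode.TEST_FAILURE,
        "device_offline": DriveExitCode.DEVICE_LOST,
    }
    return mapping.get(outcome, DriveExitCode.TOOL_ERROR)
```

With distinct codes like these, infrastructure could flag a vanished emulator as an environment problem rather than a flaky test.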
I admit I am not sure how easy this will be, in particular to do across every platform we support.
If we limit the scope (i.e. _Android_ devices), or file sub-issues for every platform, it might be viable to do _some_ of these. | a: tests,tool,t: flutter driver,f: integration_test,P2,c: tech-debt,team-tool,triaged-tool | low | Critical |
2,471,141,652 | next.js | BUG: incorrect url encoding in `<link rel="preload">` for prioritized images | ### Link to the code that reproduces this issue
https://github.com/HelaGone/image_testing
### To Reproduce
1. build the application
2. start the server
3. visit the provided url
4. right click and select view source code
5. search for the link `rel=preload` of the `imageSrcSet`
6.
<img width="825" alt="Screenshot 2024-08-16 at 4 35 25 p m" src="https://github.com/user-attachments/assets/7efc30dc-8c6e-4948-b598-6ec87379e97e">
7. see the wrong encoding of the `&`: it is emitted as `&amp;` but should be `&` or `%26`
8. copy the path and concatenate the path with the `localhost:3000`
9. see the error in the browser: `"w" parameter (width) is required`
<img width="910" alt="Screenshot 2024-08-16 at 4 34 17 p m" src="https://github.com/user-attachments/assets/bb8e117d-a1e8-4aff-a42d-0b44dfb62053">
### Current vs. Expected behavior
current behavior is the error in the browser:
<img width="910" alt="Screenshot 2024-08-16 at 4 34 17 p m" src="https://github.com/user-attachments/assets/fc0d4651-ed4b-4265-8f31-568e9a5b981a">
Expected behaviour:
The image should resolve correctly in the browser.
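For comparison, a standard query-string encoder produces the expected form: the nested `url` value is percent-encoded while the top-level separators stay literal `&`, never `&amp;` (a sketch of correct encoding, not of Next.js internals):

```python
# Sketch of the expected encoding for the image optimizer path; the helper
# name is made up for illustration.
from urllib.parse import urlencode

def image_optimizer_path(src: str, width: int, quality: int) -> str:
    # urlencode percent-encodes the nested URL (including its own ?itok=...)
    # but emits literal `&` between the url/w/q parameters.
    query = urlencode({"url": src, "w": width, "q": quality})
    return f"/_next/image/?{query}"
```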
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 20.9.0
npm: 10.1.0
Yarn: 1.22.21
pnpm: N/A
Relevant Packages:
next: 15.0.0-canary.118 // Latest available version is detected (15.0.0-canary.118).
eslint-config-next: N/A
react: 19.0.0-rc-49496d49-20240814
react-dom: 19.0.0-rc-49496d49-20240814
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Image (next/image)
### Which stage(s) are affected? (Select all that apply)
Other (Deployed)
### Additional context
This is affecting our application in production and reporting Google Search Console errors
<img width="861" alt="Screenshot 2024-08-16 at 4 43 41 p m" src="https://github.com/user-attachments/assets/dff587bf-95e2-4b07-9af5-02c7e3120c12">
Here is a production with [wrong encoding](https://www.nmas.com.mx/_next/image/?url=https%3A%2F%2Fstatic-live.nmas.com.mx%2Fnmas-news%2Fstyles%2Fcorte_16_9%2Fcloud-storage%2F2023-06%2Finstagram-red-abuso-sexual-infantil.jpeg.jpg%3Fitok%3DMPkXPAn6&w=1920&q=80)
The same image with [proper encoding:](https://www.nmas.com.mx/_next/image/?url=https%3A%2F%2Fstatic-live.nmas.com.mx%2Fnmas-news%2Fstyles%2Fcorte_16_9%2Fcloud-storage%2F2023-06%2Finstagram-red-abuso-sexual-infantil.jpeg.jpg%3Fitok%3DMPkXPAn6&w=1920&q=80) | bug,Image (next/image),linear: next | medium | Critical |
2,471,143,065 | pytorch | PyTorch `torch.compile` inductor backend shows enormous compile-time regressions. | ### 🐛 Describe the bug
The following reproducer seems to take forever to compile on torch 2.4.0; I did not let it run to termination, but the regression is >> 3600 seconds:
```
import torch
import torchvision
from PIL import Image
import requests
with torch.no_grad():
    model = torchvision.models.detection.retinanet_resnet50_fpn(pretrained=True)
    model.eval()

    torch._dynamo.reset()
    compiled = torch.compile(model, dynamic=True, backend="inductor")

    url = "http://images.cocodataset.org/val2017/000000039769.jpg"
    image = Image.open(requests.get(url, stream=True).raw)
    image_tensor = torchvision.transforms.functional.to_tensor(image).unsqueeze(0)
    print(compiled(image_tensor))
```
Compare this to PyTorch 2.3.1, which takes on the order of 10-20 seconds to compile and run the model.
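To put numbers on the regression, the first compiled call can be timed with a small stdlib helper (a generic sketch, independent of torch):

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn once and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Hypothetical usage against the reproducer above:
#   result, seconds = timed(compiled, image_tensor)
# where `seconds` covers the first call, i.e. compilation plus inference.
```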
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 20.0.0git (https://github.com/llvm/llvm-project.git 0795ab4eba14b7a93c52c06f328c3d4272f3c51e)
CMake version: version 3.29.3
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA RTX A3000 12GB Laptop GPU
Nvidia driver version: 536.45
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i7-12850HX
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 2
BogoMIPS: 4838.39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: Microsoft
Virtualization type: full
L1d cache: 576 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 15 MiB (12 instances)
L3 cache: 25 MiB (1 instance)
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @bobrenjc93 @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @ColinPeppler @amjames @desertfire @oulgen @jamesjwu @aorenste @anijain2305 @laithsakka | high priority,triaged,oncall: pt2,module: dynamic shapes,module: startup-tracing-compile | low | Critical |
2,471,148,320 | godot | Models randomly deleting (vanishing) and making every other models disappear as well | ### Tested versions
Reproducible in: v4.3.stable.mono.official.77dcf97d8 (the Mono/C# build, but I don't use C# at all in the project with this error).
in v4.2 the project works completely fine
### System information
Windows 10 - OpenGL API 3.3.0 NVIDIA 560.70 - Compatibility - Using Device: NVIDIA - NVIDIA GeForce RTX 3080 Ti Laptop GPU
### Issue description
First I want to note that I'm constantly receiving this "LoadErors" window with no errors in it on startup:

It could be somehow connected to the bug.
### The main issue is:
Models exported from Blender started to delete themselves when running a scene. I'm making an FPS multiplayer game, and at first the game was crashing due to the gun model disappearing. I debugged for an hour and there is no `queue_free()`, no deleting functions at all, so I understood that it deletes itself, and found a message in the debugger saying:
> LocalGameMngr.gd:10 @ _ready(): Node 'WeaponzBoneThingg/Weapons/Root Scene' was modified from inside an instance, but it has vanished.
I did not change the code at this point at all, and after I got rid of this particular revolver model and launched my game, my player model started to delete itself as well?! It makes no sense at all. Why didn't it delete before, but only now? You might think my project is on some remote external drive, but it's not.
### So i ran some testings:

here error codes:
> W 0:00:01:0256 open: res://Weapons/Revolver/oldRevolverLowPoly.fbx: In external resource #0, invalid UID: uid://c4r0dop55u3pa - using text path instead: res://Weapons/Revolver/oldRevolverLowPoly-85c565a130d3f39de665492bb4cb7d68_old revolver.png
> <C++ Source> core/io/resource_format_binary.cpp:1083 @ open()
> E 0:00:01:0256 _load: Resource file not found: res://Weapons/Revolver/oldRevolverLowPoly-85c565a130d3f39de665492bb4cb7d68_old revolver.png (expected type: Texture2D)
> <C++ Error> Condition "!file_check->file_exists(p_path)" is true. Returning: Ref<Resource>()
> <C++ Source> core/io/resource_loader.cpp:288 @ _load()
> E 0:00:01:0256 parse_variant: Can't load dependency: res://Weapons/Revolver/oldRevolverLowPoly-85c565a130d3f39de665492bb4cb7d68_old revolver.png.
> <C++ Error> Method/function failed. Returning: error
> <C++ Source> core/io/resource_format_binary.cpp:461 @ parse_variant()
> E 0:00:01:0256 _load: Failed loading resource: res://.godot/imported/oldRevolverLowPoly.fbx-85c565a130d3f39de665492bb4cb7d68.scn. Make sure resources have been imported by opening the project in the editor at least once.
> <C++ Error> Condition "found" is true. Returning: Ref<Resource>()
> <C++ Source> core/io/resource_loader.cpp:283 @ _load()
> E 0:00:01:0256 _load: Failed loading resource: res://Weapons/Revolver/oldRevolverLowPoly.fbx. Make sure resources have been imported by opening the project in the editor at least once.
> <C++ Error> Condition "found" is true. Returning: Ref<Resource>()
> <C++ Source> core/io/resource_loader.cpp:283 @ _load()
> E 0:00:01:0256 _parse_ext_resource: res://Vehicle/TestCar.tscn:28 - Parse Error: [ext_resource] referenced non-existent resource at: res://Weapons/Revolver/oldRevolverLowPoly.fbx
> <C++ Source> scene/resources/resource_format_text.cpp:159 @ _parse_ext_resource()
> W 0:00:01:0258 instantiate: Node './Root Scene2' was modified from inside an instance, but it has vanished.
> <C++ Source> scene/resources/packed_scene.cpp:254 @ instantiate()
As we can see, the revolver .fbx model is being deleted, but surprisingly its .blend export is not, and the player models do not delete themselves in either the .fbx or the .blend version.
You might also notice some errors about the revolver textures and assume they are the reason for the issue, but they are not: those textures with extremely long names were exported with the model from Blender and I deleted them; the models use a material I created with different texture files. Furthermore, I had this error even at the start of the project and everything always ran completely fine.
It might have something to do with the issue.
### Then i added player .tscn but removed ALL scripts from it

Something else I cannot understand is that it also gives me warnings about code that does not exist in the current scene. The code is usually attached to the player, yes. But this time I removed all code from all objects, so it should not appear at all?!
_i attached those in this file because there is 81 lines which would take way to much space from here_
[Errors when player tscn is present.txt](https://github.com/user-attachments/files/16641904/Errors.when.player.tscn.is.present.txt)
_p.s. sorry for way too genious variable names_
I believe this is it. Before, i tried to find this issue listed in reports but could not so i believe this is not a duplicate.
im sorry for this issue to be overly confusing and poorly written...
### Steps to reproduce
I have no idea how you can reproduce it, but I can attach the model files here
[2mb models of player and revolver](https://drive.google.com/file/d/1icGTttwj_eop6E8vDuhLy_rjBqFwRihB/view?usp=sharing)
and the whole project if you will. I guess I have nothing to lose anyway.
[15 mb project](https://drive.google.com/file/d/1otFgVn-dSTYRpvUMKg90U9lPp47TblPK/view?usp=sharing)
### Also i must notice some important details which could have something with producing this bug:
The revolver-deleting bug started to occur in Godot_v4.2.2-stable_mono_win64, therefore I downloaded the latest version, and after this the whole player-deleting-everything bug happened. And so the project was converted from 4.2 to 4.3.
After I wrote all this I tested for a few more hours, and it's unbelievable, but... I had the Godot project on an external drive where I already had Godot_v4.2.2-stable_mono_win64. I opened it there and everything works perfectly fine. I must note that now there is not a single red error message about missing textures, and no empty "LoadErors" window. This copy of the project must be a little bit older, but I cannot tell any differences; I don't remember any differences in it at all. The only difference is supposed to be the absence of a single vehicle .blend model.
https://drive.google.com/file/d/1lAWTAW2NFB5HdQTdrLo5CjD-LYPSEtVu/view?usp=sharing
_here link to the project which i believe should not work as well_
### Minimal reproduction project (MRP)
https://drive.google.com/file/d/1YhNvxoKq9m10gVdGI51clW4fx0FybVNz/view?usp=sharing
_archive a attached weights 5 mb_
So what you have to do is launch the main scene called "World".
In the window that appears, you must first press "Host",
then press "Play".
After that, in the Scene tab choose "Remote", and you can see that the models are deleted.

But in .tscn file of Player following in this directory:

you can notice that the structure is different and witness that models somehow delete themselves randomly. The thing is that for the first hour the player model did not delete itself and started doing it only later for no reason. I repeat: I did not change any code, I did not change any structure; the only thing I did was add a CSG cube as a child, NOT to the player model but to a separate object called "WeaponzBoneThingg", which is marked green on the following image

the player model is called "BlankCharacter" and is marked red on the screenshot
I have to mention that the unarchived project weighs 30 MB, but it is impossible to make it weigh less than 10 because the models alone already weigh more than 10 MB, and they are essential because the bug strongly depends on those models
note: I did not upload the .godot folder, but it is impossible to "Drag and drop a ZIP archive to upload it" even though the .zip file weighs 5 MB
2,471,174,551 | godot | Y-sorted subtree's root gets modulation applied twice | ### Tested versions
v4.3.stable.official [77dcf97d8] and earlier (not a regression)
### System information
N/A
### Issue description
Title. In action:

### Steps to reproduce
1. Create scene with single Sprite2D (and texture assigned).
2. Apply modulation.
3. Toggle `y_sort_enabled`.
### Minimal reproduction project (MRP)
N/A | bug,topic:rendering,topic:2d | low | Minor |
2,471,175,792 | godot | [BUG] Title bar hidden in windowed mode when viewport is higher than display resolution | ### Tested versions
v4.3.stable.mono.official [77dcf97d8]
### System information
Godot v4.3.stable.mono - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3060 Ti (NVIDIA; 31.0.15.3623) - AMD Ryzen 7 3700X 8-Core Processor (16 Threads)
### Issue description

As you can see no title bar is shown when running the game:

### Steps to reproduce
Set the window size to something higher than your monitor's screen resolution and see that the title bar is not displayed when the game is launched.
My display resolution is 1920x1080
My godot project:

### Minimal reproduction project (MRP)
[test-screen-size.zip](https://github.com/user-attachments/files/16642514/test-screen-size.zip)
| bug,discussion,topic:gui | low | Critical |
2,471,180,470 | tensorflow | Cannot dlopen some GPU libraries [can't find cuda driver] in rhel9 | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
v2.17.0-rc1-2-gad6d8cc177d 2.17.0
### Custom code
Yes
### OS platform and distribution
Redhat enterprise 9.4 base image
### Mobile device
_No response_
### Python version
3.11
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
12.2
### GPU model and memory
Tesla T4, 15360MiB
### Current behavior?
Error when I run a GPU test; I wonder if my Docker Linux kernel version is too low? The reason I am asking is that my Ubuntu 22 setup works fine.
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1723846435.253301 203 cuda_executor.cc:1015] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2024-08-16 22:13:55.253747: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2343] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
"
My Kernel version,
bash-5.1# uname -a
Linux mlops-test-failed 5.10.220-209.869.amzn2.x86_64 https://github.com/rapidsai/cudf/issues/1 SMP Wed Jul 17 15:10:20 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
My OS/Docker image distro,
"
bash-5.1# uname -m && cat /etc/*release
x86_64
NAME="Red Hat Enterprise Linux"
VERSION="9.4 (Plow)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="9.4"
PLATFORM_ID="platform:el9"
PRETTY_NAME="Red Hat Enterprise Linux 9.4 (Plow)"
ANSI_COLOR="0;31"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:redhat:enterprise_linux:9::baseos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 9"
REDHAT_BUGZILLA_PRODUCT_VERSION=9.4
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.4"
Red Hat Enterprise Linux release 9.4 (Plow)
Red Hat Enterprise Linux release 9.4 (Plow)
"
Both my ubuntu 22 and rhel9 were showing the nvidia-smi ok like the following.
bash-5.1# nvidia-smi
Fri Aug 16 22:29:43 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.183.06 Driver Version: 535.183.06 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 Tesla T4 On | 00000000:00:1E.0 Off | 0 |
| N/A 33C P8 11W / 70W | 0MiB / 15360MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
I am checking the following url,
https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html
and it says the rhel9 kernel version has to be 5.14.0-427.
Your input is appreciated!
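That kernel-version requirement can be checked programmatically; a minimal sketch (the `5.14.0` threshold is taken from the NVIDIA guide quoted above, and the parsing is a best-effort assumption about `uname -r` output):

```python
import platform

def kernel_tuple(release: str) -> tuple:
    """Parse the leading numeric part of a kernel release string,
    e.g. '5.10.220-209.869.amzn2.x86_64' -> (5, 10, 220)."""
    base = release.split("-")[0]
    return tuple(int(part) for part in base.split(".")[:3])

def meets_rhel9_cuda_requirement(release: str, required=(5, 14, 0)) -> bool:
    # 5.14.0-427 is the RHEL 9 kernel version the NVIDIA CUDA install
    # guide lists; this only compares the base x.y.z portion.
    return kernel_tuple(release) >= required

# e.g. meets_rhel9_cuda_requirement(platform.release())
```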
### Standalone code to reproduce the issue
```shell
import tensorflow as tf

def run_gpu_test():
    gpus = tf.config.list_logical_devices('GPU')
    print("Num GPUs Available: ", len(gpus))

run_gpu_test()
On Ubuntu 22 it prints GPU number 1 with all the GPU information; on RHEL 9 it does not.
This is a Pod we created in EKS, and by exec-ing into the pod we pasted the debugging information in the section above. I ran nvidia-smi and both Ubuntu 22 and RHEL 9 show the GPU fine. Ubuntu 22 works fine but not RHEL 9. The node we created is an AWS G4 instance, so it has tensorflow 12.7 and CUDA 12.2, and we installed the nvidia-plugin. I think this should be very easy to reproduce; not sure if the AWS kernel version matters.
```
### Relevant log output
```shell
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1723846435.253301 203 cuda_executor.cc:1015] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2024-08-16 22:13:55.253747: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2343] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
"
```
| stat:awaiting tensorflower,type:bug,type:build/install,2.17 | low | Critical |
2,471,180,664 | flutter | release branches use target configurations from the main branch | Fixing this would avoid situations like having to land upstream .ci.yaml changes just to unblock a release, such as in: https://github.com/flutter/engine/pull/54591 | team-infra,P1,triaged-infra | medium | Minor |
2,471,192,723 | godot | gui_input() stops on the scrollbar of a ScrollContainer | ### Tested versions
- v4.3-stable
### System information
Godot v4.3.stable - Windows 10.0.19045 - GLES3 (Compatibility) - Intel(R) HD Graphics 620 (Intel Corporation; 31.0.101.2111) - Intel(R) Core(TM) i5-7300U CPU @ 2.60GHz (4 Threads)
### Issue description
When using the signal `gui_input()`, hovering over the scrollbar part of the container does not trigger the signal. I'm unsure if this is a bug, but it feels like one, as every other part of the container triggers the signal.
```gdscript
extends Node2D
var on_ui : bool = false
func _process(_delta):
print(on_ui)
on_ui = false
func on_gui_input(_event):
on_ui = true
```
### Steps to reproduce
1. Create Scrollbar with an overflow so the scrollbar appears
2. Add the code above and link the signal `gui_input(event)` to the function `on_gui_input(_event)`
3. Run the game and run the cursor up and down the scrollbar and it will continue to output `false`
Im unsure if this is a bug but running my cursor on anything else that i have linked to my function with `gui_input(event)` always triggers the function.
### Minimal reproduction project (MRP)
N/A | discussion,topic:gui | low | Critical |
2,471,213,868 | transformers | Integrate Liger (Linkedin GPU Efficient Runtime) Kernel to HuggingFace | ### Feature request
Integrate Liger (Linkedin GPU Efficient Runtime) Kernel to HuggingFace Trainer, user could decide whether to enable kernel with a simple flag
### Motivation
Liger (Linkedin GPU Efficient Runtime) Kernel is a collection of Triton kernels designed specifically for LLM training. We have implemented Hugging Face Compatible RMSNorm, RoPE, SwiGLU, CrossEntropy, FusedLinearCrossEntropy, and more to come. It can effectively increase multi-GPU training throughput by 20% and reduces memory usage by 60%. The kernel works out of the box with [flash attention](https://github.com/Dao-AILab/flash-attention), PyTorch FSDP, and Microsoft DeepSpeed. We welcome contributions from the community to gather the best kernels for LLM training.
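Mechanically, the integration amounts to opt-in monkey-patching of model ops; a pure-Python sketch of that pattern (`StockLayer` and `fused_forward` are placeholders, not Liger's or Hugging Face's real API):

```python
# Generic monkey-patching sketch of the integration mechanism; the class and
# function names below are stand-ins for a HF module and a Triton kernel.

class StockLayer:
    def forward(self, x):
        # stock reference implementation
        return [v * 2 for v in x]

def fused_forward(self, x):
    # drop-in replacement with identical semantics (stand-in for a fused kernel)
    return [v + v for v in x]

def apply_kernel_patch(cls, enabled: bool) -> None:
    """Swap the stock op for the optimized one only when the user opts in via a flag."""
    if enabled:
        cls.forward = fused_forward
```

The user-facing flag proposed here would simply gate calls like `apply_kernel_patch` during Trainer setup.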
### Your contribution
We (LinkedIn) will take care of work for a smooth integration and would need HF review and feedback for changes.
### Benchmark
Benchmark conditions: LLaMA 3-8B, Alpaca Dataset, Max seq len = 512, Data Type = bf16, Optimizer = AdamW, Gradient Checkpointing = True, Distributed Strategy = FSDP1 on 4 A100s.
The throughput increases by approximately 20% with more data, and GPU memory usage is reduced by 40%. This means you can train the model on smaller GPUs, with larger batch sizes, or with longer sequence lengths at no additional cost.


For more detailed benchmark setup and more exciting efficiency for multi-head training (Medusa), please refer to original repo: https://github.com/linkedin/Liger-Kernel (Repo will be public soon) | trainer,Feature request | low | Major |
2,471,232,007 | godot | Xbox controller inputs not detected in editor in 4.3 due to Steam Input injection | ### Tested versions
Bug occurs in 4.3, doesn't appear 4.2
### System information
Godot v4.3.stable (77dcf97d8) - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3070 (NVIDIA; 32.0.15.6070) - AMD Ryzen 7 7800X3D 8-Core Processor (16 Threads)
### Issue description
I made a basic game following a tutorial. Before the update, the controller worked fine both when launched from the project manager and from the editor, but in 4.3 it runs fine from the project manager while inputs aren't detected from the editor.
keyboard works fine regardless in 4.3
I am using an xbox series controller for this
*Edit, tried this with a ps5 controller and this issue doesn't occur with it
The input mapper does detect the controller, but it doesn't seem to effect it in-game
*edit* I think I've narrowed it down a bit:
in 4.3, when launching from the editor, the xbox controller does not work.
in 4.3, when launching from the project manager, the xbox controller does work.
in 4.2, the xbox controller works when launching from either.
in 4.3 and 4.2, the ps5 controller works when launching from either.
However, I have found that the issue seems to be Steam Input interfering (I am using the Steam version): when I disable Steam Input, the Xbox controller works fine regardless of whether it's launched from the project manager or the editor.
(thanks to @thorncreature for helping me with this on discord!)
### Steps to reproduce
Run the game with a controller and try to use it from the editor (in this case with d-pad or the left joystick)
### Minimal reproduction project (MRP)
[2d-project-start.zip](https://github.com/user-attachments/files/16646500/2d-project-start.zip)
| bug,platform:windows,topic:thirdparty,needs testing,topic:input,regression | low | Critical |
2,471,238,219 | flutter | Improve codesign testing strategy | Apparently, we don't validate that the entitlements files for macOS codesigning have the correct contents until we actually try to codesign them as part of the release workflow. This leads to fire drills such as: https://github.com/flutter/flutter/issues/153532
- [ ] We should validate the entitlements files for executable binaries built for macOS in presubmit on the main branch
- [ ] We should consider deprecating the framework verify binaries are codesigned integration test | P1,team-release | medium | Minor |
2,471,248,422 | flutter | TextField doesn't lose focus after tap done button of keyboard in web | The issue is reproducible only on Flutter 3.24.0. On Flutter 3.22.3 it works okay.
### Steps to reproduce
1. Start app with code from example.
2. Select field
3. Tap done button over keyboard
### Expected results
Field lost focus
### Actual results
Field still has focus
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';

void main() async {
  runApp(
    MaterialApp(
      home: Scaffold(
        body: Center(
          child: TextField(),
        ),
      ),
    ),
  );
}
```
</details>
### Screenshots or Video

### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.24.0, on macOS 14.6.1 23G93 darwin-arm64, locale en-GB)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2024.1)
[✓] VS Code (version 1.92.1)
[✓] Connected device (4 available)
! Error: Browsing on the local area network for iPad (Лиля). Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
• No issues found!
```
</details>
| a: text input,c: regression,framework,platform-web,f: focus,has reproducible steps,P1,browser: safari-ios,team-web,triaged-web,:hourglass_flowing_sand:,fyi-text-input,found in release: 3.24 | medium | Critical |
2,471,292,158 | godot | SubViewport only draws once, only in Safari Technology Preview | ### Tested versions
- Reproducible in: 4.3 stable + Safari Tech Preview 199
- Not reproducible in: 4.3 stable + regular Safari or Chrome
### System information
Godot v4.3.stable - macOS 14.5.0 - GLES3 (Compatibility) - Apple M1 Max - Apple M1 Max (10 Threads)
### Issue description
It seems that a SubViewport will only draw successfully once—after that it keeps its contents indefinitely, apparently ignoring the `render_target_update_mode`.
### Steps to reproduce
1. Get the [2D in 3D demo project](https://godotengine.org/asset-library/asset/2803)
2. Export it for web
3. Open the resulting page in Safari Tech Preview
### Minimal reproduction project (MRP)
https://godotengine.org/asset-library/asset/2803 | bug,platform:web,platform:macos,topic:thirdparty | low | Minor |
2,471,301,979 | terminal | Volume Mixer Sound Setting not saved for Windows Terminal | ### Windows Terminal version
1.20.11781.0
### Windows build number
10.0.19045.4780
### Other Software
_No response_
### Problem
My issue is I like to have my system sounds (including the bell) set to 10% of System Volume.
I have already customised the bell wav for my terminal profiles and this works well; it's just that the bell sound is too loud, and I can't set it to 10% volume because Windows doesn't remember the volume mixer setting for Windows Terminal after it restarts (unlike other apps).
Some undesirable workarounds:
- I know I can disable the bell via `bellStyle` but this is not what I want to do.
- I know I can modify the bell sound wav audio itself to be much quieter but it seems like a hack.
### Steps to reproduce
Current volume mixer:

These settings work perfectly for Command Prompt shell (my system sounds is at 10% and my bell sound is quiet).
However, for other WT profiles it does not work the same:
1. Open wt up with a non Command Prompt profile (for example, I am using Git Bash)
2. Trigger the bell sound (e.g. via a failed autocompletion). Windows Terminal now appears in the Volume Mixer
3. Set Windows Terminal volume to the desired level (e.g. 10%)
4. Quit Windows Terminal
5. Start Windows Terminal
6. Trigger bell sound again (Volume is back to 100% in Volume Mixer)
Post volume mixer:

### Expected Behavior
Volume remains the same as previously set value in Volume Mixer.
### Actual Behavior
Volume is reset (back to 100%) in Volume Mixer. | Help Wanted,Issue-Bug,Area-UserInterface,Product-Terminal | low | Critical |
2,471,309,285 | vscode | Automatically Convert Windows File Paths to WSL Paths When Dragged into VS Code Terminal | Description:
When using Visual Studio Code in conjunction with WSL (Windows Subsystem for Linux), dragging a file from the Windows file system into the VS Code terminal currently displays the Windows path format (e.g., C:/Users/xx/Desktop/000/test.txt). However, WSL uses the Linux file system format, and the Windows path is not directly usable in WSL.
Suggestion:
I propose adding a feature in VS Code that automatically recognizes and converts Windows file paths to WSL paths when dragged into the terminal. For example, when dragging the path C:/Users/xx/Desktop/000/test.txt, VS Code could automatically convert it to /mnt/c/Users/xx/Desktop/000/test.txt to make it directly usable in the WSL environment.
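The mapping itself is mechanical. As an illustration only (in Python; VS Code itself would implement this in TypeScript), here is a minimal sketch of the drive-letter conversion — `windows_to_wsl_path` is a name I made up:

```python
import re

def windows_to_wsl_path(path: str) -> str:
    """Map a Windows path such as C:/Users/x (or C:\\Users\\x) to /mnt/c/Users/x."""
    m = re.match(r"^([A-Za-z]):[\\/](.*)$", path)
    if not m:
        return path  # not a drive-rooted Windows path; leave it unchanged
    drive, rest = m.groups()
    return f"/mnt/{drive.lower()}/" + rest.replace("\\", "/")

print(windows_to_wsl_path("C:/Users/xx/Desktop/000/test.txt"))  # /mnt/c/Users/xx/Desktop/000/test.txt
```

WSL also ships a `wslpath` utility that performs this conversion, which the terminal could shell out to instead of reimplementing the rule.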
Benefits:
Improved Efficiency: Reduces the need to manually convert file paths, streamlining the development workflow.
Error Reduction: Prevents issues caused by inconsistent path formats.
Enhanced User Experience: Provides a more seamless experience for developers working with WSL.
I hope this feature can be implemented in a future release. Thank you for your hard work and dedication to improving the user experience! | feature-request,WSL,terminal-input | low | Critical |
2,471,321,007 | pytorch | `Softmax()` returns a wrong error message | ### 🐛 Describe the bug
Setting the invalid value `True` for `dim` of [Softmax()](https://pytorch.org/docs/stable/generated/torch.nn.Softmax.html) produces the error message shown below:
```python
import torch
from torch import nn
my_tensor = torch.tensor([-1., 0., 1.])
softmax = nn.Softmax(dim=True)
softmax(input=my_tensor) # Error
```
```
TypeError: softmax() received an invalid combination of arguments - got (bool), but expected one of:
* (int dim, torch.dtype dtype)
* (name dim, *, torch.dtype dtype)
```
So, following the error message above, I set the correct value `0` for `dim` and passed `dtype` to `Softmax()`, but `dtype` is not accepted, as shown below:
```python
import torch
from torch import nn
my_tensor = torch.tensor([-1., 0., 1.])
softmax = nn.Softmax(dim=0, dtype=torch.int64) # Error
softmax(input=my_tensor)
```
> TypeError: Softmax.__init__() got an unexpected keyword argument 'dtype'
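For what it's worth, the functional API does accept a `dtype` argument (it must be a floating-point type, since softmax is not defined for integer tensors), so the module-level error message seems to point at an option that `nn.Softmax` itself doesn't expose:

```python
import torch
import torch.nn.functional as F

my_tensor = torch.tensor([-1., 0., 1.])

# The functional form accepts dtype; the nn.Softmax module does not.
result = F.softmax(my_tensor, dim=0, dtype=torch.float64)
print(result.dtype)  # torch.float64
```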
### Versions
```python
import torch
torch.__version__ # 2.3.1+cu121
```
cc @albanD | triaged,actionable,module: python frontend | low | Critical |
2,471,333,988 | pytorch | `torch.nn.LSTM` using cuDNN fails with sequences longer than 65,535 | ### 🐛 Describe the bug
I'm having trouble with LSTM models running on GPU when the inputted sequence gets too long. Here's an example:
```python
import torch
def test_case(sequence_length: int):
# device = "cpu" # SUCCESS
device = "cuda" # FAIL
print("Attempt...")
# Not important what values are chosen for these
batch_size = 2
input_size = 4
x = torch.randn(sequence_length, batch_size, input_size).to(device)
model = torch.nn.LSTM(num_layers=3, input_size=input_size, hidden_size=128).to(
device
)
model(x)
test_case(sequence_length=65535) # Succeeds
try:
test_case(sequence_length=65536)
print("SUCCESS!")
except RuntimeError as e:
print("FAIL!")
print(e)
```
Outputs:
```
Attempt...
Attempt...
FAIL!
cuDNN error: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.
```
Unfortunately, I haven't dug in to find the root cause.
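As a possible workaround (a sketch only — I have not verified it against the exact failure above), disabling cuDNN around the call forces PyTorch's native LSTM kernels, which do not appear to share the 65,535-step limit:

```python
import torch

def lstm_forward_without_cudnn(model: torch.nn.LSTM, x: torch.Tensor):
    # Temporarily disable cuDNN so the native (non-cuDNN) LSTM kernels run.
    with torch.backends.cudnn.flags(enabled=False):
        return model(x)

model = torch.nn.LSTM(input_size=4, hidden_size=8)
out, (h, c) = lstm_forward_without_cudnn(model, torch.randn(10, 2, 4))
print(out.shape)  # torch.Size([10, 2, 8])
```

The trade-off is speed: the native kernels are typically slower than cuDNN's fused implementation.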
### Versions
```
Collecting environment information...
PyTorch version: 2.4.0
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 11 Pro
GCC version: Could not collect
Clang version: Could not collect
CMake version: Could not collect
Libc version: N/A
Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:40:08) [MSC v.1938 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.22631-SP0
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3070 Ti
Nvidia driver version: 560.70
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture=9
CurrentClockSpeed=3600
DeviceID=CPU0
Family=198
L2CacheSize=12288
L2CacheSpeed=
Manufacturer=GenuineIntel
MaxClockSpeed=3600
Name=12th Gen Intel(R) Core(TM) i7-12700KF
ProcessorType=3
Revision=
Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.16.0
[pip3] onnxruntime==1.17.3
[pip3] pytorch-lightning==2.2.4
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchmetrics==1.4.0.post0
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 pypi_0 pypi
[conda] mkl-service 2.4.0 py310h2bbff1b_1
[conda] numpy 1.26.4 py310hf667824_0 conda-forge
[conda] pytorch 2.4.0 py3.10_cuda12.4_cudnn9_0 pytorch
[conda] pytorch-cuda 12.4 h3fd98bf_6 pytorch
[conda] pytorch-lightning 2.2.4 pypi_0 pypi
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.4.0 pypi_0 pypi
[conda] torchmetrics 1.4.0.post0 pypi_0 pypi
```
cc @csarofeen @ptrblck @xwang233 @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @msaroufim | module: cudnn,module: nn,module: rnn,module: cuda,triaged,module: 64-bit | low | Critical |
2,471,353,366 | pytorch | sample of Binomial should return int instead of float | ### 🐛 Describe the bug
```
torch.distributions.binomial.Binomial(9781362087092943, torch.tensor(1, dtype=torch.float64)).sample().item()
```
gives `9781362087092944.0`, which is weird. The sample result should be an integer; returning a float introduces numerical error for large counts.
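The off-by-one is consistent with float64 precision rather than a sampling bug: a double can represent every integer exactly only up to 2**53, and the requested count is above that, so the result is rounded to the nearest representable double. A pure-Python illustration:

```python
# float64 (Python's float) represents every integer exactly only up to 2**53.
LIMIT = 2 ** 53  # 9007199254740992

assert float(LIMIT - 1) + 1 == float(LIMIT)  # below the limit, arithmetic is exact
assert float(LIMIT) + 1 == float(LIMIT)      # 2**53 + 1 is not representable

count = 9781362087092943  # the total_count from the report, above 2**53
print(float(count))  # 9781362087092944.0 -- rounded to the nearest even double
```

Returning an integer dtype (or at least documenting the float rounding) would avoid this class of surprise.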
### Versions
Collecting environment information...
PyTorch version: 2.3.1
Is debug build: False
CUDA used to build PyTorch: 12.5
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.2
Libc version: glibc-2.40
Python version: 3.12.4 (main, Jun 7 2024, 06:33:07) [GCC 14.1.1 20240522] (64-bit runtime)
Python platform: Linux-6.6.36-1-lts-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3060
Nvidia driver version: 555.58
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: 12th Gen Intel(R) Core(TM) i5-12400F
CPU family: 6
Model: 151
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
Stepping: 5
CPU(s) scaling MHz: 60%
CPU max MHz: 4400.0000
CPU min MHz: 800.0000
BogoMIPS: 4993.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 288 KiB (6 instances)
L1i cache: 192 KiB (6 instances)
L2 cache: 7.5 MiB (6 instances)
L3 cache: 18 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.10.1
[pip3] mypy_extensions==1.0.0
[pip3] numpy==2.0.1
[pip3] torch==2.3.1
[conda] Could not collect
cc @fritzo @neerajprad @alicanb @nikitaved | module: distributions,triaged | low | Critical |
2,471,354,439 | PowerToys | Randomly Keyboard-Manager won't recognize my shortcuts | ### Microsoft PowerToys version
0.83.0
### Installation method
GitHub, PowerToys auto-update, WinGet
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
Can't tell... it appears randomly!
Bug report log --> will be added as soon as it occurs again
### ✔️ Expected Behavior
My keyboard language is UK, but as a Swiss user I've remapped some keys to suit my own layout!
I've changed some individual keys as well as some keyboard shortcuts. Normally everything works as configured!

Every key and shortcut are working.
### ❌ Actual Behavior
But sometimes all recombined shortcuts won't recognize their new behavior. I demonstrate it in a short video.
Without keyboardmanager.exe:
Shift+3 = £ (which I definitely don't need!) --> **with keyboardmanager ON** = *
Shift + ` = ¬ (which I definitely never need!) --> **with keyboardmanager ON** = $
Obviously, if it's off, the "z" and "y" swap also stops working!
If I restart keyboardmanager.exe from within PowerToys, **only** the individual remapped keys work correctly
https://github.com/user-attachments/assets/3cba7733-dd3d-4744-8e7a-47ef25749a0c
Restarting the entire PowerToys application restores the shortcuts to their desired functionality.
https://github.com/user-attachments/assets/81b9c361-6665-4c40-9626-9c9a0a163b5b
I can't say why PowerToys sometimes doesn't recognize my shortcuts while it still applies the individual key remappings.
I know it's not that big a deal to restart PowerToys, but I have to do it approximately three times a day.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Critical |
2,471,367,266 | puppeteer | [Feature]: Ability to share cache between browser context | ### Feature description
I found [this](https://github.com/puppeteer/puppeteer/issues/7404) issue, which has no answer, and another issue that suggests using the `disk-cache-dir` launch arg, but that doesn't work. The [docs](https://pptr.dev/api/puppeteer.browser.createbrowsercontext) state that a context will not share cookies/cache.
Is there a way to share cache between contexts?
I can share cache between browser instances using the `disk-cache-dir` arg, but running many browser instances is not scalable;
I can share cache between pages in one browser instance, but then I lose the ability to dynamically set the proxy and cookies;
so now I'm stuck with browser contexts, and the only problem is that they can't share cache.
Actually I have another concern: it is said that browser contexts are cheap, but how cheap? When I run in headful mode, an additional window opens for each browser context, so what's the difference between a browser instance and a browser context in terms of resource usage?
Details:
Puppeteer with chrome browser,
Not using `setRequestInterception`
| feature,P3 | low | Minor |
2,471,385,044 | next.js | Error and reload during HMR with MUI | ### Link to the code that reproduces this issue
https://github.com/Janpot/next-mui-reproduction
### To Reproduce
1. Clone https://github.com/Janpot/next-mui-reproduction (can't reproduce this in codesandbox/stackblitz)
1. Run `pnpm install`
1. Run `pnpm dev`
1. Open http://localhost:3000/
1. Open browser devtools, make sure it preserves logs on reload (e.g. on Chrome console settings: "preserve log")
1. Open **src/app/page.tsx** and replace the content:
```tsx
// ./src/app/page.tsx (original)
import { Button } from "@mui/material";
export default function Home() {
return (
<div>
<Button>foo</Button>
</div>
);
}
```
to
```tsx
// ./src/app/page.tsx (new)
import { Alert, Button } from "@mui/material";
export default function Home() {
return (
<div>
<Button>foo</Button>
<Alert severity="error">This is an error alert — check it out!</Alert>
</div>
);
}
```
1. Save
### Current vs. Expected behavior
The page reloads and a series of warnings and errors appears:
```
TypeError: Cannot read properties of undefined (reading 'call')
at options.factory (webpack.js?v=1723876266036:713:31)
at __webpack_require__ (webpack.js?v=1723876266036:37:33)
at fn (webpack.js?v=1723876266036:369:21)
at initializeModuleChunk (react-server-dom-webpack-client.browser.development.js:906:27)
at readChunk (react-server-dom-webpack-client.browser.development.js:779:11)
at react-stack-bottom-frame (react-dom-client.development.js:22220:18)
at beginWork (react-dom-client.development.js:9348:24)
at runWithFiberInDEV (react-dom-client.development.js:540:16)
at performUnitOfWork (react-dom-client.development.js:14831:22)
at workLoopConcurrent (react-dom-client.development.js:14825:9)
at renderRootConcurrent (react-dom-client.development.js:14800:15)
at performConcurrentWorkOnRoot (react-dom-client.development.js:14143:11)
at MessagePort.performWorkUntilDeadline (scheduler.development.js:44:48)
The above error occurred in the <NotFoundErrorBoundary> component.
React will try to recreate this component tree from scratch using the error boundary you provided, ReactDevOverlay.
```
```
[Fast Refresh] performing full reload
Fast Refresh will perform a full reload when you edit a file that's imported by modules outside of the React rendering tree.
You might have a file which exports a React component but also exports a value that is imported by a non-React component file.
Consider migrating the non-React component export to a separate file and importing it into both files.
It is also possible the parent component of the component you edited is a class component, which disables Fast Refresh.
Fast Refresh requires at least one parent function component in your React tree.
```
I expect it to not error and simply apply a Fast Refresh. The issue is also present on stable, albeit with different console output.
Note that it happens with any new component that I import from `@mui/material` and it doesn't seem to matter whether I import them from the top-level, or from their subpath (`@mui/material/Button`).
When I add `'use client'` to the page, the issue doesn't happen.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.5.0: Wed May 1 20:16:51 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T8103
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 18.20.2
npm: 10.5.0
Yarn: 1.22.22
pnpm: 9.7.1
Relevant Packages:
next: 15.0.0-canary.119 // Latest available version is detected (15.0.0-canary.119).
eslint-config-next: N/A
react: 19.0.0-rc-1eaccd82-20240816
react-dom: 19.0.0-rc-1eaccd82-20240816
typescript: 5.5.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Developer Experience
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
Was testing out some upcoming [changes](https://github.com/mui/material-ui/pull/43294) in MUI when I noticed this problem | bug | low | Critical |
2,471,413,651 | godot | Negative scale camera 3D does not work | ### Tested versions
v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 6GB (NVIDIA; 31.0.15.3623) - Intel(R) Core(TM) i5-9400F CPU @ 2.90GHz (6 Threads)
### Issue description
If a Camera3D has a negative scale, rendering will not function properly except for materials using Cull Back.
① positive scale camera

② negative x scale camera ,cull back

③ negative x scale camera ,cull disable

Image of a cull disable mesh viewed from the inside with a positive scale camera
(Looking toward the camera in image ③)

The camera appears to show the mesh on the back side.
I used a translator.
### Steps to reproduce
Add a Directional Light 3D node to the scene.
Add a Camera 3D node to the scene.
Add a MeshInstance3D node to the scene.
Assign a mesh to the MeshInstance3D.
Set the material of the mesh to something other than Cull Back.
Set the Camera3D scale to negative.
### Minimal reproduction project (MRP)
N/A | bug,topic:rendering,topic:3d | low | Minor |
2,471,448,717 | rust | Rocket FromForm derive crash after adding incorrect types | <!--
Thank you for finding an Internal Compiler Error! 🧊 If possible, try to provide
a minimal verifiable example. You can read "Rust Bug Minimization Patterns" for
how to create smaller examples.
http://blog.pnkfx.org/blog/2019/11/18/rust-bug-minimization-patterns/
-->
### Code
When I have `start_date` and `end_date` using `Option<String>` there are no problems. But when I change them to `NaiveDate` as shown in the code, it says the trait `FromForm<'r>` is not implemented for `std::option::Option<NaiveDate>` and crashes the compiler at the same time. The problem stems from the `FromForm` derive, because the compiler only stops crashing when I remove it.
```Rust
use chrono::NaiveDate;
use rocket::FromForm;
#[derive(FromForm)]
pub struct HistoryRequest {
pub start_date: Option<NaiveDate>,
pub end_date: Option<NaiveDate>,
pub order_by: Option<String>,
pub page: Option<i32>,
pub page_size: Option<i32>,
}
```
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.80.1 (3f5fd8dd4 2024-08-06)
binary: rustc
commit-hash: 3f5fd8dd41153bc5fdca9427e9e05be2c767ba23
commit-date: 2024-08-06
host: x86_64-unknown-linux-gnu
release: 1.80.1
LLVM version: 18.1.7
```
### Error output
```
error[E0277]: the trait bound `std::option::Option<NaiveDate>: FromForm<'r>` is not satisfied
--> src/model/request/history_request.rs:4:10
|
4 | #[derive(FromForm)]
| ^^^^^^^^ the trait `FromForm<'r>` is not implemented for `std::option::Option<NaiveDate>`
|
= help: the trait `FromForm<'v>` is implemented for `std::option::Option<T>`
= note: this error originates in the derive macro `FromForm` (in Nightly builds, run with -Z macro-backtrace for more info)
error[E0277]: the trait bound `std::option::Option<NaiveDate>: FromForm<'r>` is not satisfied
--> src/model/request/history_request.rs:6:21
|
6 | pub start_date: Option<NaiveDate>,
| ^^^^^^^^^^^^^^^^^ the trait `FromForm<'r>` is not implemented for `std::option::Option<NaiveDate>`
|
= help: the trait `FromForm<'v>` is implemented for `std::option::Option<T>`
error[E0277]: the trait bound `std::option::Option<NaiveDate>: FromForm<'r>` is not satisfied
--> src/model/request/history_request.rs:6:9
|
6 | pub start_date: Option<NaiveDate>,
| ^^^^^^^^^^^^^^^^^^ the trait `FromForm<'r>` is not implemented for `std::option::Option<NaiveDate>`
|
= help: the trait `FromForm<'v>` is implemented for `std::option::Option<T>`
error[E0277]: the trait bound `std::option::Option<NaiveDate>: FromForm<'r>` is not satisfied
--> src/model/request/history_request.rs:7:19
|
7 | pub end_date: Option<NaiveDate>,
| ^^^^^^^^^^^^^^^^^ the trait `FromForm<'r>` is not implemented for `std::option::Option<NaiveDate>`
|
= help: the trait `FromForm<'v>` is implemented for `std::option::Option<T>`
error[E0277]: the trait bound `std::option::Option<NaiveDate>: FromForm<'r>` is not satisfied
--> src/model/request/history_request.rs:7:9
|
7 | pub end_date: Option<NaiveDate>,
| ^^^^^^^^^^^^^^^^ the trait `FromForm<'r>` is not implemented for `std::option::Option<NaiveDate>`
|
= help: the trait `FromForm<'v>` is implemented for `std::option::Option<T>`
error: internal compiler error: compiler/rustc_infer/src/infer/at.rs:364:21: relating different kinds: (?47t,) 'r/#0
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary><strong>Backtrace</strong></summary>
<p>
```
error: internal compiler error: compiler/rustc_infer/src/infer/at.rs:364:21: relating different kinds: (?47t,) 'r/#0
thread 'rustc' panicked at compiler/rustc_infer/src/infer/at.rs:364:21:
Box<dyn Any>
stack backtrace:
0: std::panicking::begin_panic::<rustc_errors::ExplicitBug>
1: <rustc_errors::diagnostic::BugAbort as rustc_errors::diagnostic::EmissionGuarantee>::emit_producing_guarantee
2: rustc_middle::util::bug::opt_span_bug_fmt::<rustc_span::span_encoding::Span>::{closure#0}
3: rustc_middle::ty::context::tls::with_opt::<rustc_middle::util::bug::opt_span_bug_fmt<rustc_span::span_encoding::Span>::{closure#0}, !>::{closure#0}
4: rustc_middle::ty::context::tls::with_context_opt::<rustc_middle::ty::context::tls::with_opt<rustc_middle::util::bug::opt_span_bug_fmt<rustc_span::span_encoding::Span>::{closure#0}, !>::{closure#0}, !>
5: rustc_middle::util::bug::bug_fmt
6: <rustc_middle::ty::generic_args::GenericArg as rustc_infer::infer::at::ToTrace>::to_trace
7: <rustc_infer::infer::at::At>::eq::<rustc_middle::ty::generic_args::GenericArg>
8: <rustc_infer::infer::InferCtxt>::enter_forall::<rustc_type_ir::predicate::TraitPredicate<rustc_middle::ty::context::TyCtxt>, (), <rustc_infer::infer::error_reporting::TypeErrCtxt as rustc_trait_selection::traits::error_reporting::suggestions::TypeErrCtxtExt>::note_function_argument_obligation<rustc_span::ErrorGuaranteed>::{closure#1}>
9: <rustc_infer::infer::error_reporting::TypeErrCtxt as rustc_trait_selection::traits::error_reporting::suggestions::TypeErrCtxtExt>::note_function_argument_obligation::<rustc_span::ErrorGuaranteed>
10: <rustc_infer::infer::error_reporting::TypeErrCtxt as rustc_trait_selection::traits::error_reporting::suggestions::TypeErrCtxtExt>::note_obligation_cause_code::<rustc_span::ErrorGuaranteed, rustc_middle::ty::predicate::Predicate>
11: <rustc_infer::infer::error_reporting::TypeErrCtxt as rustc_trait_selection::traits::error_reporting::type_err_ctxt_ext::InferCtxtPrivExt>::note_obligation_cause
12: <rustc_infer::infer::error_reporting::TypeErrCtxt as rustc_trait_selection::traits::error_reporting::type_err_ctxt_ext::TypeErrCtxtExt>::report_selection_error
13: <rustc_infer::infer::error_reporting::TypeErrCtxt as rustc_trait_selection::traits::error_reporting::type_err_ctxt_ext::InferCtxtPrivExt>::report_fulfillment_error
14: <rustc_infer::infer::error_reporting::TypeErrCtxt as rustc_trait_selection::traits::error_reporting::type_err_ctxt_ext::TypeErrCtxtExt>::report_fulfillment_errors
15: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_method_argument_types
16: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
17: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
18: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
19: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_block_with_expected
20: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
21: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_match::{closure#0}
22: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
23: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_decl
24: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_block_with_expected
25: <rustc_hir_typeck::fn_ctxt::FnCtxt>::check_expr_with_expectation_and_args
26: rustc_hir_typeck::check::check_fn
27: rustc_hir_typeck::typeck
[... omitted 1 frame ...]
28: rustc_hir_analysis::check_crate
29: rustc_interface::passes::analysis
[... omitted 1 frame ...]
30: rustc_interface::interface::run_compiler::<core::result::Result<(), rustc_span::ErrorGuaranteed>, rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
note: we would appreciate a bug report: https://github.com/rust-lang/rust/issues/new?labels=C-bug%2C+I-ICE%2C+T-compiler&template=ice.md
note: rustc 1.80.1 (3f5fd8dd4 2024-08-06) running on x86_64-unknown-linux-gnu
note: compiler flags: --crate-type bin -C embed-bitcode=no -C debuginfo=2 -C incremental=[REDACTED]
note: some of the compiler flags provided by cargo are hidden
query stack during panic:
#0 [typeck] type-checking `model::request::history_request::_::<impl at src/model/request/history_request.rs:4:10: 4:18>::finalize`
#1 [analysis] running analysis passes on this crate
end of query stack
```
</p>
</details>
### Cargo.toml dependencies
```toml
[dependencies]
diesel = { version = "2.2.1", features = ["postgres", "r2d2", "chrono"] }
prometheus = "0.13.4"
rocket_prometheus = "0.10.1"
rocket = { version = "0.5.0", features = ["json"] }
serde = "1.0.203"
rocket_sync_db_pools = { version = "0.1.0", features = ["diesel_postgres_pool"] }
async-std = "1.12.0"
rocket_okapi = { version = "0.8.0", features = ["rapidoc"] }
schemars = { version = "0.8.21", features = ["chrono"] }
rocket_oauth2 = "0.5.0"
chrono = { version = "0.4.38", features = ["serde"] }
rand = "0.8.5"
bcrypt = "0.15.1"
reqwest = { version = "0.12.5", features = ["json"] }
validator = { version = "0.18.1", features = ["derive"] }
lazy_static = "1.5.0"
regex = "1.10.5"
uuid = { version = "1.10.0", features = ["v4", "serde"] }
rust-argon2 = "2.1.0"
strum = "0.26.3"
strum_macros = "0.26.4"
```
| I-ICE,T-compiler,C-bug,T-types | low | Critical |
2,471,490,532 | pytorch | Incorrect GPU management and deadlocks without torch.cuda.set_device | ## Issue description
When I am running distributed and I simply set `CUDA_VISIBLE_DEVICES` in each rank:
- Running `torch.distributed.barrier()` makes rank 1 occupy GPU memory on the GPU of rank 0 (even though that GPU shouldn't be visible due to `CUDA_VISIBLE_DEVICES`)
- Running FSDP forward deadlocks
Both issues go away if I set `torch.cuda.set_device`
## Code example
```python
import os
import time
import torch
import torch.distributed.fsdp
import functools
from torch.distributed.fsdp.wrap import size_based_auto_wrap_policy
def train() -> None:
torch.distributed.init_process_group('nccl')
local_rank = torch.distributed.get_rank()
# DOESN'T WORK
os.environ['CUDA_VISIBLE_DEVICES'] = str(local_rank)
torch.cuda.empty_cache()
# torch.cuda.set_device(local_rank) # FIXES PROBLEMS
print("BEFORE", torch.distributed.get_rank())
time.sleep(5)
torch.distributed.barrier()
print("AFTER", torch.distributed.get_rank())
time.sleep(15)
module = torch.nn.Sequential(
torch.nn.Conv2d(10, 10, 3),
torch.nn.Conv2d(10, 10, 3),
torch.nn.Conv2d(10, 10, 3),
torch.nn.Conv2d(10, 10, 3),
)
module.cuda()
my_auto_wrap_policy = functools.partial(
size_based_auto_wrap_policy, min_num_params=100
)
fsdp_module = torch.distributed.fsdp.FullyShardedDataParallel(
module,
auto_wrap_policy=my_auto_wrap_policy,
use_orig_params=True,
)
print("FORWARD", torch.distributed.get_rank())
out = fsdp_module(torch.zeros([10, 10, 900, 900]))
print("END", torch.distributed.get_rank())
time.sleep(15)
torch.distributed.destroy_process_group()
if __name__ == "__main__":
train()
```
I run this with
```
torchrun --standalone --nnodes 1 --nproc-per-node 2 --rdzv-backend=c10d --rdzv-endpoint=localhost:0 test_train.py
```
As a result, after the call to `torch.distributed.barrier()`, `nvidia-smi` reports:
```
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 1528872 C ...mamba/envs/debug-env/bin/python3.10 404MiB |
| 0 N/A N/A 1528873 C ...mamba/envs/debug-env/bin/python3.10 262MiB |
| 1 N/A N/A 1528873 C ...mamba/envs/debug-env/bin/python3.10 404MiB |
+-----------------------------------------------------------------------------------------+
```
Rank 1 has occupied memory of the GPU of rank 0, which shouldn't be happening.
The processes also hang at the call to `FSDP.forward`, even though both ranks reach that stage. At that point, `nvidia-smi` reports even more unbalanced memory allocation:
```
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 1540224 C ...mamba/envs/debug-env/bin/python3.10 1024MiB |
| 0 N/A N/A 1540225 C ...mamba/envs/debug-env/bin/python3.10 596MiB |
| 1 N/A N/A 1540225 C ...mamba/envs/debug-env/bin/python3.10 400MiB |
+-----------------------------------------------------------------------------------------+
```
If I simply set `torch.cuda.set_device(local_rank)`, everything works properly. Note that I don't even need to set `CUDA_VISIBLE_DEVICES`. For comparison, after the call to `torch.distributed.barrier`, `nvidia-smi` reports
```
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 1529231 C ...mamba/envs/debug-env/bin/python3.10 404MiB |
| 1 N/A N/A 1529232 C ...mamba/envs/debug-env/bin/python3.10 404MiB |
+-----------------------------------------------------------------------------------------+
```
and at the end of the program it reports
```
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 1532552 C ...mamba/envs/debug-env/bin/python3.10 2012MiB |
| 1 N/A N/A 1532553 C ...mamba/envs/debug-env/bin/python3.10 2012MiB |
+-----------------------------------------------------------------------------------------+
```
Equal memory usage on both GPUs at both stages, as expected
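The likely explanation (my assumption, not confirmed from the source) is that setting `CUDA_VISIBLE_DEVICES` from inside the process after `init_process_group` has no effect: the CUDA runtime reads the variable once, when the driver context is initialized, and by that point NCCL may already have touched device 0. A minimal sketch of the ordering that works — `select_device` is a helper name I made up:

```python
import torch

def select_device(local_rank: int) -> torch.device:
    """Pin this process to one GPU *before* any collective or .cuda() call."""
    device = torch.device(f"cuda:{local_rank}")
    if torch.cuda.is_available():
        torch.cuda.set_device(device)
    return device

# In each rank, right after torch.distributed.init_process_group():
#   device = select_device(torch.distributed.get_rank())
#   module.to(device)
print(select_device(0))  # cuda:0
```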
## System Info
```
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux trixie/sid (x86_64)
GCC version: (Debian 14.2.0-1) 14.2.0
Clang version: Could not collect
CMake version: version 3.29.6
Libc version: glibc-2.39
Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.9.12-1-insait-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: AuthenticAMD
Model name: AMD EPYC 74F3 24-Core Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU(s) scaling MHz: 45%
CPU max MHz: 4037.5000
CPU min MHz: 1500.0000
BogoMIPS: 6400.62
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
Virtualization: AMD-V
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 24 MiB (48 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.4.0+cu121
[pip3] triton==3.0.0
[conda] Could not collect
```
The problem also happens with torch-2.2.0 and torch-2.3.0
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged,actionable | low | Critical |
2,471,492,446 | TypeScript | Type Instantiation is Excessively Deep Regression In #56004 | ### 🔎 Search Terms
Type Instantiation is Excessively Deep, #56004
### 🕗 Version & Regression Information
This changed in commit 38ef79e0b02ad122b984a19f3494414e4e53c896
### ⏯ Playground Link
https://www.typescriptlang.org/play/?#code/C4TwDgpgBAQg9gExAYQBYQMYGsDOAJCAJwEtgBDAcwgB4B5APigF4paoIAPYCAOwRygAFADZkMEMgCNhEWpIBWmYFAD8rKAC4oPCADciAbgBQoSFGRweOYITLEewNJlwES5KnXZde-IaPFSMnKKGMCMLGQ8IMZGAPSxUAByZAC20CkArtZe3HxQOHBpwKj2FFDEAHQQFVAABgDiEI6WAGbEFBmEEAjUyWnhUBVDtVAYhWDEMgKSGcrFxAIIcBA4PADkwBUm4NCNzTxtHV09fdCcub6RIAMWVjZ2Dk7Y+ESklDRGULCIKOjPrm8PJ8vlB4uZUJEKKUoPMBMA4HVTjkfAIAEQAQRSkmIvGAABl2qhgKjVFBMdjcQSKETNFAACpwLC8EZtDgrGHoUbjSZkYDESxbEFQACSd0i4jpO2owKFSPOKKgGKxOIcVKJJLUpggcBaZOVlMJyi0Wp19MZvBlUHowOt9BiYPRMJ2UDAZFsRSI5QEXQAjhliMcoC04IQOQsuSkJqI+ZYg3ZhJ1qiLlBhIlBJNAst0oJEEPk+cJhHHJjCEWNI5NoDNlMRdaQ1t6ICk4Po87iAxBhCByinuVMtghMKIuqNRDgBCIxBJpLIFEpqABVHhZvMRKKMADewLBDOLRfhEaj6SyvYcDxzPCgcDAMZ4ZCLYEI16IoA5vKgXRaRDhCNqsNqgo5ioxolDgMRfDuoEuk+kCEK+nALMAP5QAgtZfl0DjELy0AtE+KQctAWp1BuAC+IzVj2XrrMoYjABk95dtomArDgbrdgeqZZgRh48ree6JoBADEPBwMAACikagMahAZBAxgkUYcQJLuOCoHAADuYYCOWUa8vyl4tPGibcRgY7aXAGTCHmwikEQDHdhmUDyCedSmWQ456hSqqGlApEAcmUAQshCHZMUFnUgOQ5utAbkeeSKr4j58p8BO-jTkEc6hL5wLCaJEk3iA0myfJimDm5I6xQIDJMpeyW+JOAQzsESjZRBCQALIuZ+noHpEonoKGj7PnBIA1H+oEjPeljQAs1HsCJGQRcCkggQsADaqIYKiAC64GggktA8IxIVIaW6bQN1hCBvCNR0pgqA8MQqaFt2IAWVyll5l0zb6FpF55sU0Copw4jjsQ+iMYOEBgCSRBPqGSwrHNFAImQ6lkKN24JHgGl6D1EIpmQWYCH1gOhjpvH6ewV0hoBGBaMuWKGEYJFAA
### 💻 Code
```ts
type BodyChecksHeritage<O> = O extends PlaceableObject ? O : never;
type ConstraintChecksHeritage<O extends PlaceableObject> = any;
// Name must extend something i.e. `GetConfigured<Name> = ...` compiles but this doesn't.
type GetConfigured<Name extends any> = ConstraintChecksHeritage<
BodyChecksHeritage<
// Changing this to `Name extends "AmbientLight" ? AmbientLight : Token` fixes the compilation.
InstanceType<
Name extends "AmbientLight" ? typeof AmbientLight : typeof Token
>
>
>;
// A type parameter is required for this compilation failure. It can be used and still fail to compile but if it's removed entirely it compiles.
declare class PlaceableObject<Unused = any> {
// To fail to compile must contain an optional property that refers to `this`.
a?: this;
// This property exists to differentiate from the type `{}` but it isn't actually necessary to cause the compilation failure.
#notEmpty: true;
}
// To show this compilation failure the class could literally be just `class AmbientLight {}`. It has to exist though.
declare class AmbientLight extends PlaceableObject {
#notEmpty: true;
}
declare class Token extends PlaceableObject {
// Must refer to another property. `this` alone isn't enough.
b: this["c"];
// Only exists to be referred to. Technically you could remove this and the "excessively deep" error doesn't go away.
// However that causes another compilation error.
c: number;
}
```
### 🙁 Actual behavior
Gets the error "Type instantiation is excessively deep and possibly infinite."
### 🙂 Expected behavior
No error.
### Additional information about the issue
I've seen somewhat similar issues reported about this PR, but none of them seemed to match mine. However, some were closed without reproduction, and maybe I'm having the same problem as [type-fest is?](https://github.com/microsoft/TypeScript/pull/56004#issuecomment-1914382123)
I've reduced the code down from a large project into this, which is why it seems utterly contrived. A few of the odd requirements seem to be explainable:
- `Name extends any` is required because #56004 only operates in cases where the type parameter is constrained.
- `BodyChecksHeritage` and `ConstraintChecksHeritage` are both required to increase the depth.
But that leaves some things I can't personally explain such as:
- Why does `a?: this` cause issues when `a: this | undefined` and similar don't necessarily?
- Why does `PlaceableObject` have to be generic?
- etc.
I've noted more oddities inline with the code. | Needs Investigation | low | Critical |
2,471,495,313 | pytorch | Cannot override __add__ in NamedTuple with __new__ + torch.compile | ### 🐛 Describe the bug
The problem arises under the following circumstances (see below for repro):
- Model is compiled (fullgraph=True)
- Within the forward pass, a NamedTuple is instantiated
- This NamedTuple overrides `__new__` and `__add__`
Under these circumstances, the compiled model won't call the newly defined `__add__` method; instead it treats the two NamedTuple instances as generic tuples and concatenates their fields.
```python
from typing import NamedTuple
import torch
# Remove the bases check
def is_namedtuple_cls_patched(cls):
"""Test if an object is a namedtuple or a (torch.return_types|torch.autograd.forward_ad).* quasi-namedtuple"""
try:
if issubclass(cls, tuple):
module = getattr(cls, "__module__", None)
return module in ("torch.return_types", "torch.autograd.forward_ad") or (
hasattr(cls, "_make") and hasattr(cls, "_fields")
)
except TypeError:
pass
return False
# Not the prettiest but it does the job
torch._dynamo.utils.is_namedtuple_cls.__code__ = is_namedtuple_cls_patched.__code__
class BaseDtype(NamedTuple):
a: torch.Tensor
b: torch.Tensor
class MyDType(BaseDtype):
def __new__(cls, a, b):
# This is needed to do type checking on init variables
return super().__new__(cls, a, b)
def __add__(self, other):
        return MyDType(self.a + other.a, self.b + other.b)
# This is required to create MyDType within the forward pass
# Removing __new__ and inheriting directly from NamedTuple seems to work but it does not fit my use case
# Commented after adding the patch to is_namedtuple_cls
# torch._dynamo.allow_in_graph(MyDType)
class Model(torch.nn.Module):
def forward(self, a1, a2):
inp1 = MyDType(a1, a2)
inp2 = MyDType(a1, a2)
return inp1 + inp2
model = Model()
non_compiled_output = model(torch.tensor(3.), torch.tensor(0.))
model = torch.compile(model, fullgraph=True)
compiled_output = model(torch.tensor(3.), torch.tensor(0.))
print(f"Correct output: {non_compiled_output}")
# Correct output: MyDType(a=tensor(6.), b=tensor(0.))
print(f"Less correct output: {compiled_output}")
# Less correct output: (tensor(3.), tensor(0.), tensor(3.), tensor(0.))
```
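For contrast, plain Python outside `torch.compile` does honor an `__add__` override on a `NamedTuple` subclass even when `__new__` is also overridden. A torch-free sketch (the `Pair`/`MyPair` names are hypothetical stand-ins for the classes above):

```python
from typing import NamedTuple

class Pair(NamedTuple):
    a: float
    b: float

class MyPair(Pair):
    def __new__(cls, a, b):
        # Type checking of the init variables could go here.
        return super().__new__(cls, a, b)

    def __add__(self, other):
        return MyPair(self.a + other.a, self.b + other.b)

result = MyPair(3.0, 0.0) + MyPair(3.0, 0.0)
print(result)  # MyPair(a=6.0, b=0.0), not a 4-tuple concatenation
```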
There are more nuances to this issue depending on which combinations of init methods and orders of operations you try, but the above is the fundamental problem.
Thanks for all your help.
### Error logs
_No response_
### Minified repro
_No response_
### Versions
python==3.11.9
torch==2.3.0
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | triaged,oncall: pt2,module: dynamo | low | Critical |
2,471,497,117 | react-native | RNGP - Autolinking: Could not find project.android.packageName in react-native config output! Could not autolink packages without this field. | ### Description
```
PS C:\Users\chait\OneDrive\Desktop\demo\demo> npx react-native run-android
> com.demo@0.0.1 npx
> react-native run-android
info Installing the app...
> Task :app:generateAutolinkingPackageList FAILED
15 actionable tasks: 5 executed, 10 up-to-date
info 💡 Tip: Make sure that you have set up your development environment correctly, by running npx react-native doctor. To read more about doctor command visit: https://github.com/react-native-community/cli/blob/main/packages/cli-doctor/README.md#doctor
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:generateAutolinkingPackageList'.
> RNGP - Autolinking: Could not find project.android.packageName in react-native config output! Could not autolink packages without this field.
* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
> Get more help at https://help.gradle.org.
BUILD FAILED in 15s
error Failed to install the app. Command failed with exit code 1: gradlew.bat app:installDebug -PreactNativeDevServerPort=8081 FAILURE: Build failed with an exception. * What went wrong: Execution failed for task ':app:generateAutolinkingPackageList'. > RNGP - Autolinking: Could not find project.android.packageName in react-native config output! Could not autolink packages without this field. * Try: > Run with --stacktrace option to get the stack trace. > Run with --info or --debug option to get more log output. > Run with --scan to get full insights. > Get more help at https://help.gradle.org. BUILD FAILED in 15s.
info Run CLI with --verbose flag for more details.
```
### Steps to reproduce
No special steps are needed. I tried to update the React Native version from 0.74.5 to 0.75.1 using https://react-native-community.github.io/upgrade-helper/?from=0.74.5&to=0.75.1. I followed the instructions there and got the error posted above. Suspecting the issue might be on my side, I created a new project following the official docs with the React Native CLI, and I immediately got the same error again.
### React Native Version
0.75.1
### Affected Platforms
Runtime - Android
### Output of `npx react-native info`
```text
> com.demo@0.0.1 npx
> react-native info
info Fetching system and libraries information...
System:
OS: Windows 11 10.0.22631
CPU: (12) x64 Intel(R) Core(TM) i5-10500H CPU @ 2.50GHz
Memory: 1.01 GB / 7.83 GB
Binaries:
Node:
version: 20.12.2
path: C:\Program Files\nodejs\node.EXE
Yarn: Not Found
npm:
version: 10.6.0
path: C:\Program Files\nodejs\npm.CMD
Watchman: Not Found
SDKs:
Android SDK:
API Levels:
- "27"
- "29"
- "30"
- "31"
- "33"
- "34"
Build Tools:
- 33.0.1
- 34.0.0
- 35.0.0
- 35.0.0
System Images:
- android-22 | Google APIs Intel x86 Atom
- android-VanillaIceCream | Google Play Intel x86_64 Atom
Android NDK: Not Found
Windows SDK:
AllowDevelopmentWithoutDevLicense: Enabled
Versions:
- 10.0.22621.0
IDEs:
Android Studio: AI-232.10300.40.2321.11668458
Visual Studio:
- 17.11.35111.106 (Visual Studio Enterprise 2022)
- 17.10.35027.167 (Visual Studio Enterprise 2022)
Languages:
Java: 17.0.11
Ruby: Not Found
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.3.1
wanted: ^18.3.1
react-native:
installed: 0.75.1
wanted: ^0.75.1
react-native-windows: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: Not found
newArchEnabled: Not found
```
### Stacktrace or Logs
```text
PS C:\Users\chait\OneDrive\Desktop\demo\demo\android> ./gradlew installDebug --stacktrace
> Task :app:generateAutolinkingPackageList FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':app:generateAutolinkingPackageList'.
> RNGP - Autolinking: Could not find project.android.packageName in react-native config output! Could not autolink packages without this field.
* Try:
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
> Get more help at https://help.gradle.org.
* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':app:generateAutolinkingPackageList'.
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.lambda$executeIfValid$1(ExecuteActionsTaskExecuter.java:130)
at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:282)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:128)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:116)
at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46)
at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:51)
at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:74)
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52)
at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:42)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:331)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:318)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.lambda$execute$0(DefaultTaskExecutionGraph.java:314)
at org.gradle.internal.operations.CurrentBuildOperationRef.with(CurrentBuildOperationRef.java:80)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:314)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:303)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:463)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:380)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
at org.gradle.internal.concurrent.AbstractManagedExecutor$1.run(AbstractManagedExecutor.java:47)
Caused by: java.lang.IllegalStateException: RNGP - Autolinking: Could not find project.android.packageName in react-native config output! Could not autolink packages without this field.
at com.facebook.react.tasks.GeneratePackageListTask.taskAction(GeneratePackageListTask.kt:38)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:125)
at org.gradle.api.internal.project.taskfactory.StandardTaskAction.doExecute(StandardTaskAction.java:58)
at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:51)
at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:29)
at org.gradle.api.internal.tasks.execution.TaskExecution$3.run(TaskExecution.java:244)
at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:29)
at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:26)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.run(DefaultBuildOperationRunner.java:47)
at org.gradle.api.internal.tasks.execution.TaskExecution.executeAction(TaskExecution.java:229)
at org.gradle.api.internal.tasks.execution.TaskExecution.executeActions(TaskExecution.java:212)
at org.gradle.api.internal.tasks.execution.TaskExecution.executeWithPreviousOutputFiles(TaskExecution.java:195)
at org.gradle.api.internal.tasks.execution.TaskExecution.execute(TaskExecution.java:162)
at org.gradle.internal.execution.steps.ExecuteStep.executeInternal(ExecuteStep.java:105)
at org.gradle.internal.execution.steps.ExecuteStep.access$000(ExecuteStep.java:44)
at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:59)
at org.gradle.internal.execution.steps.ExecuteStep$1.call(ExecuteStep.java:56)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:56)
at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:44)
at org.gradle.internal.execution.steps.CancelExecutionStep.execute(CancelExecutionStep.java:41)
at org.gradle.internal.execution.steps.TimeoutStep.executeWithoutTimeout(TimeoutStep.java:74)
at org.gradle.internal.execution.steps.TimeoutStep.execute(TimeoutStep.java:55)
at org.gradle.internal.execution.steps.PreCreateOutputParentsStep.execute(PreCreateOutputParentsStep.java:50)
at org.gradle.internal.execution.steps.PreCreateOutputParentsStep.execute(PreCreateOutputParentsStep.java:28)
at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:67)
at org.gradle.internal.execution.steps.RemovePreviousOutputsStep.execute(RemovePreviousOutputsStep.java:37)
at org.gradle.internal.execution.steps.BroadcastChangingOutputsStep.execute(BroadcastChangingOutputsStep.java:61)
at org.gradle.internal.execution.steps.BroadcastChangingOutputsStep.execute(BroadcastChangingOutputsStep.java:26)
at org.gradle.internal.execution.steps.CaptureOutputsAfterExecutionStep.execute(CaptureOutputsAfterExecutionStep.java:67)
at org.gradle.internal.execution.steps.CaptureOutputsAfterExecutionStep.execute(CaptureOutputsAfterExecutionStep.java:45)
at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:40)
at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:29)
at org.gradle.internal.execution.steps.BuildCacheStep.executeWithoutCache(BuildCacheStep.java:189)
at org.gradle.internal.execution.steps.BuildCacheStep.lambda$execute$1(BuildCacheStep.java:75)
at org.gradle.internal.Either$Right.fold(Either.java:175)
at org.gradle.internal.execution.caching.CachingState.fold(CachingState.java:62)
at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:73)
at org.gradle.internal.execution.steps.BuildCacheStep.execute(BuildCacheStep.java:48)
at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:46)
at org.gradle.internal.execution.steps.StoreExecutionStateStep.execute(StoreExecutionStateStep.java:35)
at org.gradle.internal.execution.steps.SkipUpToDateStep.executeBecause(SkipUpToDateStep.java:76)
at org.gradle.internal.execution.steps.SkipUpToDateStep.lambda$execute$2(SkipUpToDateStep.java:54)
at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:54)
at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:36)
at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:37)
at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:27)
at org.gradle.internal.execution.steps.ResolveIncrementalCachingStateStep.executeDelegate(ResolveIncrementalCachingStateStep.java:49)
at org.gradle.internal.execution.steps.ResolveIncrementalCachingStateStep.executeDelegate(ResolveIncrementalCachingStateStep.java:27)
at org.gradle.internal.execution.steps.AbstractResolveCachingStateStep.execute(AbstractResolveCachingStateStep.java:71)
at org.gradle.internal.execution.steps.AbstractResolveCachingStateStep.execute(AbstractResolveCachingStateStep.java:39)
at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:65)
at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:36)
at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:106)
at org.gradle.internal.execution.steps.ValidateStep.execute(ValidateStep.java:55)
at org.gradle.internal.execution.steps.AbstractCaptureStateBeforeExecutionStep.execute(AbstractCaptureStateBeforeExecutionStep.java:64)
at org.gradle.internal.execution.steps.AbstractCaptureStateBeforeExecutionStep.execute(AbstractCaptureStateBeforeExecutionStep.java:43)
at org.gradle.internal.execution.steps.AbstractSkipEmptyWorkStep.executeWithNonEmptySources(AbstractSkipEmptyWorkStep.java:125)
at org.gradle.internal.execution.steps.AbstractSkipEmptyWorkStep.execute(AbstractSkipEmptyWorkStep.java:56)
at org.gradle.internal.execution.steps.AbstractSkipEmptyWorkStep.execute(AbstractSkipEmptyWorkStep.java:36)
at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsStartedStep.execute(MarkSnapshottingInputsStartedStep.java:38)
at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:36)
at org.gradle.internal.execution.steps.LoadPreviousExecutionStateStep.execute(LoadPreviousExecutionStateStep.java:23)
at org.gradle.internal.execution.steps.HandleStaleOutputsStep.execute(HandleStaleOutputsStep.java:75)
at org.gradle.internal.execution.steps.HandleStaleOutputsStep.execute(HandleStaleOutputsStep.java:41)
at org.gradle.internal.execution.steps.AssignMutableWorkspaceStep.lambda$execute$0(AssignMutableWorkspaceStep.java:35)
at org.gradle.api.internal.tasks.execution.TaskExecution$4.withWorkspace(TaskExecution.java:289)
at org.gradle.internal.execution.steps.AssignMutableWorkspaceStep.execute(AssignMutableWorkspaceStep.java:31)
at org.gradle.internal.execution.steps.AssignMutableWorkspaceStep.execute(AssignMutableWorkspaceStep.java:22)
at org.gradle.internal.execution.steps.ChoosePipelineStep.execute(ChoosePipelineStep.java:40)
at org.gradle.internal.execution.steps.ChoosePipelineStep.execute(ChoosePipelineStep.java:23)
at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.lambda$execute$2(ExecuteWorkBuildOperationFiringStep.java:67)
at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.execute(ExecuteWorkBuildOperationFiringStep.java:67)
at org.gradle.internal.execution.steps.ExecuteWorkBuildOperationFiringStep.execute(ExecuteWorkBuildOperationFiringStep.java:39)
at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:46)
at org.gradle.internal.execution.steps.IdentityCacheStep.execute(IdentityCacheStep.java:34)
at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:48)
at org.gradle.internal.execution.steps.IdentifyStep.execute(IdentifyStep.java:35)
at org.gradle.internal.execution.impl.DefaultExecutionEngine$1.execute(DefaultExecutionEngine.java:61)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeIfValid(ExecuteActionsTaskExecuter.java:127)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:116)
at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46)
at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:51)
at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:74)
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:209)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:204)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:166)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52)
at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:42)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:331)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:318)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.lambda$execute$0(DefaultTaskExecutionGraph.java:314)
at org.gradle.internal.operations.CurrentBuildOperationRef.with(CurrentBuildOperationRef.java:80)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:314)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:303)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:463)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:380)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
at org.gradle.internal.concurrent.AbstractManagedExecutor$1.run(AbstractManagedExecutor.java:47)
BUILD FAILED in 8s
15 actionable tasks: 5 executed, 10 up-to-date
```
### Reproducer
Just upgraded to the new version.
### Screenshots and Videos
_No response_ | Needs: Triage :mag: | low | Critical |
2,471,500,942 | godot | gltf convex shapes are "incorrect" sometimes (importing with gltf physics extensions) | ### Tested versions
4.3 stable, 4.2.2 stable
### System information
win 10
### Issue description
I'm working with a small community of 3D godot friends on an open source level editor for Godot (and others), built as a Blender addon.
Part of this addon covers the godot core implementation of GLTFPhysicsShape and GLTFPhysicsBody for creating level geometry from blender and exporting the gltf with the physics extensions built in. The extensions are `OMI_physics_body` and `OMI_physics_shape`.
Here is an example of this gltf file imported into godot:

I tested trimesh shapes; they work fine as far as I know. I made sure that the meshes were correct and simplified (cube-like) and generally had 8 vertexes each to test convex shapes.
However, I have run into an issue with using convex shapes from Blender meshes, and I'm trying to find out why _sometimes_ the convex shapes seem to be adding an extra vertex incorrectly. For example, with a box mesh in Blender, you can export it and it will make a collision shape and node using that mesh. HOWEVER, in Godot's collision shape, on one face it will make a triangle, therefore adding an extra vertex. Imagine a box with a diagonal strike on one face that splits it into 2 triangle faces. If a character body collides with this shape, it phases through that face. Sometimes it adds more than one "strike". Sometimes the vertices are ordered in a strange way.
So, a box struggles sometimes, but I don't know how to trigger that. Then, if you subdivide the shape in Blender, it totally breaks the convex collision shape generated by Godot; it adds all kinds of vertices everywhere.
From what I've been told by @aaronfranke, it calls the Godot implementation and is not something related to the glTF implementation:
https://github.com/godotengine/godot/blob/1bd740d18d714f815486b04bf4c6154ef6c355d9/modules/gltf/extensions/physics/gltf_physics_shape.cpp#L255
What I'm thinking is, Blender's meshes might not be suitable for Godot's convex shapes, as they might use a different system for counting vertices or something like that.
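To illustrate what a correct hull build should do, here is a small 2D analogue in Python (my own sketch, unrelated to Godot's actual convex hull code): the convex hull of a square plus an extra point lying on one edge should keep only the 4 corners, which is the behavior I would expect from the generated ConvexPolygonShape3D in 3D as well.

```python
# Minimal 2D convex hull (Andrew's monotone chain) to illustrate that a
# correct hull keeps only the corner vertices and drops points lying on
# an edge -- the 2D equivalent of the "extra vertex on a box face" issue.

def cross(o, a, b):
    # z-component of the cross product (OA x OB); 0 means collinear.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        # <= 0 also pops collinear points, so edge midpoints are dropped.
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# A unit square plus an extra point sitting on one edge:
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.0)]
hull = convex_hull(square)
print(sorted(hull))  # the edge midpoint is not a hull vertex
```

If Godot's 3D hull builder behaved like this, the extra face vertex would be discarded rather than turned into a diagonal "strike" across the face.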
### Steps to reproduce
This is another problem. The implementation is too specific to what I'm doing, so it would be very difficult to reproduce this easily without using Blender and this very specific (almost hidden) implementation of glTF. It was just added recently to 4.3 core, but not many people know how to implement this, so I'm trying to indirectly contribute to Godot from the outside - so I think it is worthwhile to at least make an issue.
It could be a problem with how Blender's shapes directly become a Godot convex shape. But it could be something else I don't know about related to physics.
I _can_ give some example .gltf files (but how to test them? it would just be to inspect the incorrect shapes), and I can share some videos and images of what is happening. I can also share an MRP.
### Minimal reproduction project (MRP)
I will post one over the next few days. I want to provide the most minimal MRP I can that's readable and not confusing | bug,topic:import,topic:3d | low | Minor |
2,471,503,225 | PowerToys | Adding the EXTENDED option to Windows Image Resizer Tool | ### Description of the new feature / enhancement
As per the title, I would like the Image Resizer tool to have the same option as the Lock file tool :)
### Scenario when this would be used?
When I want to shrink files downloaded online and share them quickly
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,471,514,618 | godot | Cannot edit exported property when using export_custom with Variant type hint | ### Tested versions
- Reproducible in: v4.3.stable.official [77dcf97d8]
### System information
Godot v4.3.stable - Windows 10.0.19045 - GLES3 (Compatibility) - NVIDIA GeForce GTX 1060 6GB (NVIDIA; 31.0.15.3623) - AMD A10-7850K Radeon R7, 12 Compute Cores 4C+8G (4 Threads)
### Issue description
When using the new ```@export_custom``` annotation for untyped properties, the inspector behavior differs depending on whether the user uses no type hint or the Variant type hint

The inspector only shows a usable field when not using any type hint. It will infer the hint from the type of the default value used as the initializer for the property in the script.
But when using the ```Variant``` type hint, regardless of the property's initializer value, the editor inspector will show a ```"<null>"``` label and the value won't be editable at all. I imagine the desired behavior would be for a usable inspector field to be shown regardless of whether no type hint or an explicit Variant type hint is used.
As a side note, a similar thing happens with the data property of JSON resources: since the data property of a JSON resource is just a plain Variant, it will always show a "<null>" label regardless of the property's contents. Please note I'm just pointing out an already existing similar behavior, not reporting a new issue.

### Steps to reproduce
1) in any project, create a new script
2) put the code below there
```gdscript
@export_custom(0, '')
var exported_var_without_type_hint = 1.23
@export_custom(0, '')
var exported_var_using_variant_type_hint:Variant = 1.23
```
### Minimal reproduction project (MRP)
[ExportCustomWithVariantBug.zip](https://github.com/user-attachments/files/16644846/ExportCustomWithVariantBug.zip)
### Other notes
Would this [PR](https://github.com/godotengine/godot/pull/89324) fix this? | discussion,topic:core,topic:gdscript,topic:editor | low | Critical |
2,471,526,545 | tauri | [bug] TAURI UI NOT WORKING PROPERLY AFTER SYSTEM AWAKE FROM SLEEP. | ### Describe the bug
I have authenticated my Tauri app and am then hiding the Tauri UI. After my system goes to sleep and wakes up, my UI code with the useEffect hook stops working, and I'm also not sure whether the UI code is loading at all. I tried the tauri:run event: resumed, but even this is not called after waking from sleep/hibernate.
I have dual monitors. I tried visibility/focus events; nothing works. What is the event for sleep/wake to handle in Rust or React?
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
I have authenticated my tauri app and then I am hiding tauri UI. Then after my system went to sleep and will awake and then my UI code with hooks UseEffect is not working and also not sure whether UI code is loading or not. Tried with tauri:run event: resumed even this is not called after awake from sleep/hibernate.
I have dual monitor. Tried visibility/ focus nothing works. What is the event for sleep in rust or react to handle
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,471,538,830 | flutter | Windows FFI crash from __stdio_common_vsprintf in ucrtbased.dll | ### Steps to reproduce
1. Run minigpu on Windows from https://github.com/MichealReed/minigpu
2. Click initialize once the app starts.
3. The app crashes.
This reproduces on stable and master.
### Expected results
The app should show the same output as wasm builds.
```
[info] Requesting adapter
[info] WGPURequestAdapterStatus_Success: 0
[info] WGPURequestAdapterStatus_Unavailable: 1
[info] Status: 0
[info] Requesting device
[info] Waiting for device request to end
[info] Device request ended
[info] Context destroyed
```
### Actual results
The app only shows the output via a console window when running it through the VS 2022 debugger. The Flutter app hits a breakpoint and crashes with
```
Exception thrown at 0x00007FFE4BE1AC87 (ucrtbased.dll) in example.exe: 0xC0000005: Access violation reading location 0xFFFFFFFFFFFFFFFF.
```
### Code sample
All code can be found here, including a C++ test executable that builds to show that initializing the context works without Flutter.
https://github.com/MichealReed/minigpu/tree/master/minigpu_ffi/src
### Logs
<details open><summary>Logs</summary>
```console
ucrtbased.dll!00007ffea3b1ac87() Unknown
ucrtbased.dll!00007ffea3b1b370() Unknown
ucrtbased.dll!00007ffea3b1ab16() Unknown
ucrtbased.dll!00007ffea3b1bf8d() Unknown
ucrtbased.dll!00007ffea3a1f128() Unknown
ucrtbased.dll!00007ffea3a1e242() Unknown
ucrtbased.dll!00007ffea3afd4e2() Unknown
ucrtbased.dll!00007ffea3af517f() Unknown
ucrtbased.dll!00007ffea3ae1715() Unknown
ucrtbased.dll!00007ffea3b09579() Unknown
> minigpu_ffi.dll!vsnprintf(char * const _Buffer, const unsigned __int64 _BufferCount, const char * const _Format, char * _ArgList) Line 1439 C++
minigpu_ffi.dll!snprintf(char * const _Buffer, const unsigned __int64 _BufferCount, const char * const _Format, ...) Line 1931 C++
minigpu_ffi.dll!gpu::LOG(gpu::Logger & logger, int level, const char * message, ...) Line 48 C++
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel master, 3.11.0-6.0.pre.37, on Microsoft Windows [Version 10.0.22631.3880], locale en-US)
• Flutter version 3.11.0-6.0.pre.37 on channel master at D:\src\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision fa117aad28 (1 year, 3 months ago), 2023-05-11 10:28:08 -0700
• Engine revision f38f46f66e
• Dart version 3.1.0 (build 3.1.0-94.0.dev)
• DevTools version 2.23.1
[√] Windows Version (Installed version of Windows is version 10 or higher)
[X] Android toolchain - develop for Android devices
X Unable to locate Android SDK.
Install Android Studio from: https://developer.android.com/studio/index.html
On first launch it will assist you in installing the Android SDK components.
(or visit https://flutter.dev/docs/get-started/install/windows#android-setup for detailed instructions).
If the Android SDK has been installed to a custom location, please use
`flutter config --android-sdk` to update to that location.
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.10.2)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.10.35004.147
• Windows 10 SDK version 10.0.22621.0
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/docs/get-started/install/windows#android-setup for detailed instructions).
[√] VS Code (version 1.92.1)
• VS Code at Local\Programs\Microsoft VS Code
• Flutter extension version 3.94.0
[√] VS Code (version 1.93.0-insider)
• VS Code at Local\Programs\Microsoft VS Code Insiders
• Flutter extension version 3.95.20240801
[√] Connected device (3 available)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.3880]
• Chrome (web) • chrome • web-javascript • Google Chrome 127.0.6533.120
• Edge (web) • edge • web-javascript • Microsoft Edge 127.0.2651.105
[√] Network resources
• All expected network resources are available.
```
</details>
| c: crash,platform-windows,a: desktop,has reproducible steps,P3,c: fatal crash,team-windows,triaged-windows,found in release: 3.24,found in release: 3.25 | medium | Critical |