| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,794,624,107
|
go
|
x/tools/gopls: gopls not properly interpreting `build.directoryFilters`
|
### gopls version
```
golang.org/x/tools/gopls v0.17.0
golang.org/x/tools/gopls@v0.17.0 h1:yiwvxZX6lAQzZtJyDhKbGUiCepoGOEVw7E/i31JUcLE=
github.com/BurntSushi/toml@v1.4.1-0.20240526193622-a339e1f7089c h1:pxW6RcqyfI9/kWtOwnv/G+AzdKuy2ZrqINhenH4HyNs=
github.com/google/go-cmp@v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
golang.org/x/exp/typeparams@v0.0.0-20231108232855-2478ac86f678 h1:1P7xPZEwZMoBoz0Yze5Nx2/4pxj6nw9ZqHWXqP0iRgQ=
golang.org/x/mod@v0.22.0 h1:D4nJWe9zXqHOmWqj4VMOJhvzj7bEZg4wEYa759z1pH4=
golang.org/x/sync@v0.9.0 h1:fEo0HyrW1GIgZdpbhCRO0PkJajUS5H9IFUztCgEo2jQ=
golang.org/x/telemetry@v0.0.0-20241106142447-58a1122356f5 h1:TCDqnvbBsFapViksHcHySl/sW4+rTGNIAoJJesHRuMM=
golang.org/x/text@v0.20.0 h1:gK/Kv2otX8gz+wn7Rmb3vT96ZwuoxnQlY+HlJVj7Qug=
golang.org/x/tools@v0.27.1-0.20241211153006-a83c4ee29a47 h1:dFDhAo0DFSbmpMYZcvCfIQK9q/wH3fMI8V18Gbcnm9E=
golang.org/x/vuln@v1.0.4 h1:SP0mPeg2PmGCu03V+61EcQiOjmpri2XijexKdzv8Z1I=
honnef.co/go/tools@v0.5.1 h1:4bH5o3b5ZULQ4UrBmP+63W9r7qIkqJClEA9ko5YKx+I=
mvdan.cc/gofumpt@v0.7.0 h1:bg91ttqXmi9y2xawvkuMXyvAA/1ZGJqYAEGjXuP0JXU=
mvdan.cc/xurls/v2@v2.5.0 h1:lyBNOm8Wo71UknhUs4QTFUNNMyxy2JEIaKKo0RWOh+8=
go: go1.23.4
```
### go env
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/darius/Library/Caches/go-build'
GOENV='/Users/darius/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/darius/go/pkg/mod'
GONOPROXY='github.com/[redacted]/*'
GONOSUMDB='github.com/[redacted]/*'
GOOS='darwin'
GOPATH='/Users/darius/go'
GOPRIVATE='github.com/[redacted]/*'
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/Users/darius/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.9.darwin-arm64'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/Users/darius/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.9.darwin-arm64/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.22.9'
GCCGO='gccgo'
AR='ar'
CC='clang'
CXX='clang++'
CGO_ENABLED='1'
GOMOD='/Users/darius/[redacted]/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/zt/m98ktd916s16_d4xl9nq34v40000gp/T/go-build2505097657=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
In VS Code, set
```
"gopls": {
    [...]
    "build.directoryFilters": ["-", "+go", "-go/example"]
    [...]
}
```
Then, in `go/example/example.go`, define a function with an unused parameter.
### What did you see happen?
gopls still reports an `unusedparams` diagnostic in `example.go`.
### What did you expect to see?
I expected to see no report of unusedparams errors in example.go.
Note: if `+go` is removed from the `build.directoryFilters` paths, the diagnostic no longer appears, but of course this disables gopls entirely on my codebase, which is not desirable.
Based on [docs](https://github.com/golang/tools/blob/master/gopls/doc/settings.md#directoryfilters-string), `build.directoryFilters` seems intended to allow excluding a subdirectory like this:
> Include only project_a, but not node_modules inside it: `-`, `+project_a`, `-project_a/node_modules`
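For reference, here is a minimal sketch of the semantics I expect from these filters, based on my own reading of the docs (not gopls's actual implementation): the most specific, i.e. longest, matching prefix decides whether a directory is included.

```python
def included(path: str, filters: list[str]) -> bool:
    """Sketch of the expected directoryFilters semantics (not gopls source):
    the longest matching filter prefix decides inclusion; "-" matches all."""
    best, verdict = -1, False
    for f in filters:
        sign, prefix = f[0], f[1:]
        # A bare "-" has an empty prefix and matches every path.
        matches = prefix == "" or path == prefix or path.startswith(prefix + "/")
        if matches and len(prefix) >= best:  # later filters win ties
            best, verdict = len(prefix), (sign == "+")
    return verdict

filters = ["-", "+go", "-go/example"]
# Expected: "go" and "go/other" included; "go/example" and "vendor" excluded.
```

Under this reading, `go/example` is excluded even though `+go` includes its parent, which is what I expected gopls to do here.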
### Editor and settings
_No response_
### Logs
_No response_
|
gopls,Tools,gopls/metadata,ToolProposal
|
low
|
Critical
|
2,794,629,608
|
flutter
|
Bug Report: CupertinoListTile Remains backgroundColorActivated After launchUrl(url)
|
### Steps to reproduce
1. Tap a `CupertinoListTile` to call `launchUrl(url)`.
2. Observe that the tile occasionally remains in the `backgroundColorActivated` state (gray background color) after `launchUrl` returns.
### Expected results
The `CupertinoListTile` should not remain in the `backgroundColorActivated` state after navigation. The background color should reset to the default state.
### Actual results
The `CupertinoListTile` occasionally remains in the `backgroundColorActivated` state (gray background color) after navigation.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/cupertino.dart';
import 'package:flutter/material.dart';
import 'package:google_fonts/google_fonts.dart';
import 'package:provider/provider.dart';
import 'package:url_launcher/url_launcher.dart';
class SettingsMainPage extends StatelessWidget {
const SettingsMainPage({super.key});
Future<void> _launchPrivacyPolicy() async {
final Uri url = Uri.parse(
'https://www.apple.com/legal/internet-services/itunes/dev/stdeula/');
if (!await launchUrl(url)) {
throw Exception('Could not launch $url');
}
}
Future<void> _launchTermsOfUse() async {
final Uri url = Uri.parse(
'https://www.apple.com/legal/internet-services/itunes/dev/stdeula/');
if (!await launchUrl(url)) {
throw Exception('Could not launch $url');
}
}
@override
Widget build(BuildContext context) {
return Consumer<AppStateModel>(
builder: (context, model, child) {
return CupertinoPageScaffold(
child: CustomScrollView(
semanticChildCount: 2,
slivers: <Widget>[
CupertinoSliverNavigationBar(
largeTitle: Text('Settings'),
),
SliverSafeArea(
top: false,
minimum: EdgeInsets.only(top: 10),
sliver: SliverList.separated(
itemCount: 2,
itemBuilder: (BuildContext context, int index) {
if (index == 0) {
return CupertinoListTile(
title: Text('Privacy Policy',
style: GoogleFonts.poppins(
fontSize: 17,
fontWeight: FontWeight.w400,
)),
onTap: _launchPrivacyPolicy,
);
} else if (index == 1) {
return CupertinoListTile(
title: Text('Terms of Use',
style: GoogleFonts.poppins(
fontSize: 17,
fontWeight: FontWeight.w400,
)),
onTap: _launchTermsOfUse,
);
}
return null;
},
separatorBuilder: (BuildContext context, int index) =>
Padding(
padding: EdgeInsets.symmetric(horizontal: 20.0),
child: Divider(
height: 1,
thickness: 1,
color: Colors.grey[300],
),
),
),
)
],
),
);
},
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.1, on macOS 15.2 24C101 darwin-x64, locale zh-Hans-CN)
• Flutter version 3.27.1 on channel stable at /Users/liyue/Developer/SDK/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (4 weeks ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/liyue/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.2)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16C5032a
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2023.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
[✓] VS Code (version 1.96.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (4 available)
• iPhone 16 Pro (mobile) • 17B20026-583C-4CD7-BDD3-5595615B4A60 • ios • com.apple.CoreSimulator.SimRuntime.iOS-18-2 (simulator)
• macOS (desktop) • macos • darwin-x64 • macOS 15.2 24C101 darwin-x64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.266
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
|
framework,f: cupertino,f: focus,has reproducible steps,P2,team-design,triaged-design,found in release: 3.27,found in release: 3.28
|
low
|
Critical
|
2,794,634,179
|
ollama
|
FR: Meaningful names of models in models/blobs dir
|
Please give models meaningful filenames (like `user/modelname-quantization.gguf`) in the `models/blobs` directory, so they can be used more easily with other model-inference software.
Currently they all have similar opaque names like `sha256-2bada8a7450677000f678be90653b85d364de7db25eb5ea54136ada5f3933730`.
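As a user-side stopgap, the digest-to-name mapping can be recovered from the manifest files. A rough sketch follows; the manifest layout assumed here (a `layers` list with `mediaType` and `digest` fields) is based on inspecting a local install, not on any documented, stable format:

```python
import json

def friendly_names(manifest_json: str, model: str, tag: str) -> dict:
    """Map model-weight blob digests to readable filenames, given the JSON
    text of one manifest file. The layers/mediaType/digest layout is an
    assumption about the on-disk format, not a documented API."""
    names = {}
    for layer in json.loads(manifest_json).get("layers", []):
        if layer.get("mediaType", "").endswith(".model"):
            # blobs on disk are named "sha256-<hex>" rather than "sha256:<hex>"
            names[layer["digest"].replace(":", "-")] = f"{model}-{tag}.gguf"
    return names

manifest = ('{"layers": [{"mediaType": "application/vnd.ollama.image.model",'
            ' "digest": "sha256:2bada8a74506"}]}')
# friendly_names(manifest, "llama3", "8b")
# -> {"sha256-2bada8a74506": "llama3-8b.gguf"}
```

Symlinking the blob files under names produced this way would let other inference software load the weights without copying them.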
|
feature request
|
low
|
Minor
|
2,794,634,685
|
go
|
cmd/internal/bootstrap_test: TestRepeatBootstrap failures
|
```
#!watchflakes
default <- pkg == "cmd/internal/bootstrap_test" && test == "TestRepeatBootstrap"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8725538157882983345)):
=== RUN TestRepeatBootstrap
reboot_test.go:59: GOROOT overlay set up in 8.441881598s
Building Go cmd/dist using /Users/swarming/.swarming/w/ir/x/w/goroot. (go1.23.4-devel_bb8230f80535945648e8b56739ad450cf433eba9 darwin/amd64)
Building Go toolchain1 using /Users/swarming/.swarming/w/ir/x/w/goroot.
Building Go bootstrap cmd/go (go_bootstrap) using Go toolchain1.
Building Go toolchain2 using go_bootstrap and Go toolchain1.
Building Go toolchain3 using go_bootstrap and Go toolchain2.
# internal/bisect
runtime: newstack sp=0xc000a3cffe stack=[0xc000a44000, 0xc000a4c000]
morebuf={pc:0x0 sp:0xc000a3d00e lr:0x0}
...
fatal error: runtime: split stack overflow
runtime stack:
runtime.throw({0xadd9f61?, 0x70000b496e60?})
runtime/panic.go:1067 +0x48 fp=0x70000b496e18 sp=0x70000b496de8 pc=0xa5482a8
runtime.newstack()
runtime/stack.go:1061 +0x74d fp=0x70000b496f58 sp=0x70000b496e18 pc=0xa52b34d
runtime.morestack()
runtime/asm_amd64.s:621 +0x7a fp=0x70000b496f60 sp=0x70000b496f58 pc=0xa54e83a
...
cmd/compile/internal/gc/compile.go:188 +0x38 fp=0xc002e79fb0 sp=0xc002e79f70 pc=0xa385d98
cmd/compile/internal/gc.compileFunctions.func3.1()
cmd/compile/internal/gc/compile.go:170 +0x30 fp=0xc002e79fe0 sp=0xc002e79fb0 pc=0xa386190
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc002e79fe8 sp=0xc002e79fe0 pc=0x9b46581
created by cmd/compile/internal/gc.compileFunctions.func3 in goroutine 3
cmd/compile/internal/gc/compile.go:169 +0x247
go tool dist: FAILED: /Users/swarming/.swarming/w/ir/x/t/TestRepeatBootstrap2135723931/001/goroot/pkg/tool/darwin_amd64/go_bootstrap install -a cmd/asm cmd/cgo cmd/compile cmd/link cmd/preprofile: exit status 1
reboot_test.go:82: exit status 2
--- FAIL: TestRepeatBootstrap (412.72s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
|
NeedsInvestigation
|
low
|
Critical
|
2,794,634,719
|
go
|
cmd/internal/obj/x86: TestVexEvexPCrelative failures
|
```
#!watchflakes
default <- pkg == "cmd/internal/obj/x86" && test == "TestVexEvexPCrelative"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8725529494471363313)):
=== RUN TestVexEvexPCrelative
pcrelative_test.go:68: error exit status 1 output # runtime
runtime: s.allocCount= 167 s.nelems= 168
fatal error: s.allocCount != s.nelems && freeIndex == s.nelems
goroutine 2033 gp=0xc006678000 m=3 mp=0xc000080008 [running]:
runtime.throw({0x3fcd7f7?, 0x50?})
runtime/panic.go:1099 +0x48 fp=0xc0086e36f0 sp=0xc0086e36c0 pc=0x3676188
runtime.(*mcache).nextFree(0x4a80a78, 0xa)
runtime/malloc.go:962 +0x4ca fp=0xc0086e3748 sp=0xc0086e36f0 pc=0x360c14a
...
runtime/chan.go:283
runtime.chansend1(0xc003f13ab0, 0xc002b34a10)
runtime/chan.go:161 +0x35a fp=0xc0081f5fb0 sp=0xc0081f5f40 pc=0x360361a
cmd/compile/internal/gc.compileFunctions.func3.1()
cmd/compile/internal/gc/compile.go:172 +0x3f fp=0xc0081f5fe0 sp=0xc0081f5fb0 pc=0x3f53b3f
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc0081f5fe8 sp=0xc0081f5fe0 pc=0x367d7c1
created by cmd/compile/internal/gc.compileFunctions.func3 in goroutine 202
cmd/compile/internal/gc/compile.go:170 +0x247
--- FAIL: TestVexEvexPCrelative (4.29s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
|
NeedsInvestigation
|
low
|
Critical
|
2,794,665,775
|
pytorch
|
Make flex_attention work if `score_mod`'s output doesn't require gradients at all
|
### 🚀 The feature, motivation and pitch
See https://github.com/pytorch/pytorch/issues/139548#issuecomment-2597509430
```
import warnings
import numpy as np
import torch
from torch.nn.attention.flex_attention import flex_attention, create_mask, create_block_mask
# import astropy_healpix as hp
hlc = 4
num_healpix_cells = 12 * 4**hlc
print( f'seq_length : {num_healpix_cells}')
# with warnings.catch_warnings(action="ignore"):
# nbours= hp.neighbours( np.arange(num_healpix_cells), 2**hlc, order='nested').transpose()
# build adjacency matrix (smarter ways to do it ...)
nbours_mat = torch.zeros( (num_healpix_cells,num_healpix_cells), dtype=torch.bool, device='cuda')
# for i in range(num_healpix_cells) :
# for j in nbours[i] :
# nbours_mat[i,j] = True if j>=0 else False
hp_adjacency = nbours_mat
# tc_tokens = torch.from_numpy( np.load( 'tc_tokens.npy')).to(torch.float16).to('cuda')
tc_tokens = torch.ones( [204458, 256], dtype=torch.float16, device='cuda', requires_grad=True)
# tcs_lens = torch.from_numpy( np.load( './tcs_lens.npy')).to(torch.int32).to('cuda')
# tcs_lens = torch.ra
# print( f'tc_tokens = {tc_tokens.shape}')
# print( f'tcs_lens = {tcs_lens.shape}')
tc_tokens_cell_idx = torch.zeros(204458, dtype=torch.int, device='cuda')
def sparsity_mask( score, b, h, q_idx, kv_idx):
return hp_adjacency[ tc_tokens_cell_idx[q_idx], tc_tokens_cell_idx[kv_idx] ]
compiled_flex_attention = torch.compile(flex_attention, dynamic=False)
toks = tc_tokens[:,:64].unsqueeze(0).unsqueeze(0)
out = compiled_flex_attention( toks, toks, toks, score_mod=sparsity_mask)
t = torch.zeros_like( out)
mse = torch.nn.MSELoss()
loss = mse( t, out)
loss.backward()
```
### Alternatives
_No response_
### Additional context
_No response_
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @drisspg @yanboliang @BoyuanFeng
|
triaged,oncall: pt2,module: higher order operators,module: pt2-dispatcher,module: flex attention
|
low
|
Minor
|
2,794,695,668
|
deno
|
compat: deno install `file:` protocol
|
Version: Deno 2.1.6
When running `deno install` in an existing Node project (using its `package.json`), it should be able to install/link local files and directories as dependencies.
> see https://docs.npmjs.com/cli/v11/commands/npm-install#install-links
```json
{
"dependencies": {
"foobar": "file:./foobar"
}
}
```
|
node compat
|
low
|
Minor
|
2,794,706,432
|
pytorch
|
DISABLED test_nested_optimize_decorator (__main__.MiscTests)
|
Platforms: linux, mac, macos
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_nested_optimize_decorator&suite=MiscTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35754430609).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_nested_optimize_decorator`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/dynamo/test_misc.py", line 4045, in test_nested_optimize_decorator
self.assertEqual(cnts3.op_count, 4)
File "/Users/ec2-user/runner/_work/_temp/conda_environment_12820448148/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 4 but got 3.
Absolute difference: 1
Relative difference: 0.25
To execute this test, run the following from the base repo dir:
python test/dynamo/test_misc.py MiscTests.test_nested_optimize_decorator
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `dynamo/test_misc.py`
cc @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames
|
triaged,module: flaky-tests,skipped,oncall: pt2,module: dynamo
|
low
|
Critical
|
2,794,717,220
|
PowerToys
|
FancyZones is unable to differentiate multiple web apps that run on Edge.
|
### Microsoft PowerToys version
0.87.1
### Installation method
Microsoft Store, PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
FancyZones
### Steps to reproduce
Run multiple instances of Edge, whether a full browser window or a web app that runs on Edge behind the scenes (not WebView2), then snap any of them to one of the zones.
### ✔️ Expected Behavior
FancyZones should be able to differentiate web apps or _tabs_ that run in their own windows but share the same executable (such as Edge).
### ❌ Actual Behavior
All instances that rely on Edge snap to that specific zone when new instances are launched.
### Other Software
Chromium Edge
|
Issue-Bug,Needs-Triage
|
low
|
Minor
|
2,794,737,325
|
three.js
|
Slow rendering on: webgpu_compute_particles
|
### Description
On an iPhone 12 Pro Max, the "[webgpu_compute_particles](https://threejs.org/examples/?q=webgpu%20particle#webgpu_compute_particles)" example seems very slow. I originally thought it was the computation, but it's actually the particle rendering:
- Webgpu safari, top angle : 60fps
- Webgpu safari, bottom angle : 20fps
I'm a bit surprised, because I thought rendering 1M particles would be a piece of cake in WebGPU.
For the moment I can do a PR to:
- set up 256k particles on mobile by default
- add a GUI to select the particle count and regenerate the particle system
- pause/play the physics

This might help with monitoring performance on various mobile devices.
### Reproduction steps
1. open the example on mobile
2. play with the camera angle
### Code
X
### Live example
X
### Screenshots
X
### Version
r172
### Device
Mobile
### Browser
Safari
### OS
iOS
|
Suggestion
|
low
|
Major
|
2,794,767,221
|
ant-design
|
Add accessibility enhancements to the CheckableTag component
|
### What problem does this feature solve?
Allow users to operate the component with a keyboard.
### What does the proposed API look like?
Implement it with `<button/>` instead of `<span/>`.
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
|
⌨️ Accessibility
|
low
|
Minor
|
2,794,771,848
|
next.js
|
`new URL("...", import.meta.url)` to get the path, but the file contents are unintentionally loaded at build time.
|
### Link to the code that reproduces this issue
https://github.com/yskszk63/nextjs-unexpected-js-eval
### To Reproduce
1. `npm run build`
### Current vs. Expected behavior
## Current
```
$ npm run build
> build
> next build
▲ Next.js 15.2.0-canary.13
Creating an optimized production build ...
✓ Compiled successfully
✓ Linting and checking validity of types
✓ Collecting page data
Error occurred prerendering page "/". Read more: https://nextjs.org/docs/messages/prerender-error
Error: 🙈
at Object.<anonymous> (/home/yskszk63/work/nextjs-unexpected-js-eval/node_modules/throw-error-on-load/throw.js:1:7)
at Module._compile (node:internal/modules/cjs/loader:1566:14)
at Object..js (node:internal/modules/cjs/loader:1718:10)
at Module.load (node:internal/modules/cjs/loader:1305:32)
at Function._load (node:internal/modules/cjs/loader:1119:12)
at TracingChannel.traceSync (node:diagnostics_channel:322:14)
at wrapModuleLoad (node:internal/modules/cjs/loader:220:24)
at Module.<anonymous> (node:internal/modules/cjs/loader:1327:12)
at mod.require (/home/yskszk63/work/nextjs-unexpected-js-eval/node_modules/next/dist/server/require-hook.js:65:28)
at require (node:internal/modules/helpers:136:16)
Export encountered an error on /page: /, exiting the build.
⨯ Static worker exited with code: 1 and signal: null
```
## Expected behavior
The build completes successfully.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Fri, 10 Jan 2025 00:39:41 +0000
Available memory (MB): 30844
Available CPU cores: 16
Binaries:
Node: 23.4.0
npm: 11.0.0
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.2.0-canary.13 // Latest available version is detected (15.2.0-canary.13).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Module Resolution
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local)
### Additional context
I want to get the path of an asset after the build, but the specified file is unintentionally loaded.
Removing `serverExternalPackages` from `next.config.ts` makes it work as expected.
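For what it's worth, the intent here is a pattern other runtimes express directly: compute an asset's path relative to the current module without executing the asset. A Python analogy of what I expect `new URL("...", import.meta.url)` to do (an illustration only, not Next.js behavior):

```python
from pathlib import Path

def asset_path(name: str, base: str = __file__) -> str:
    """Analogy of `new URL(name, import.meta.url)`: resolve an asset's path
    relative to the current module WITHOUT importing or executing the file."""
    return str(Path(base).resolve().parent / name)

# The file is only referenced, never loaded, so a module that throws on
# import (like `throw-error-on-load` in the repro) would stay untouched.
```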
|
Module Resolution
|
low
|
Critical
|
2,794,803,554
|
go
|
unique: unrecognized failures
|
```
#!watchflakes
default <- pkg == "unique" && test == ""
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8725520602403648273)):
FAIL unique [build failed]
# unique [unique.test]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x5e48fd2]
goroutine 1167 gp=0xc003b0ce00 m=3 mp=0xc000080008 [running]:
panic({0x6275460?, 0x6883b40?})
runtime/panic.go:806 +0x168 fp=0xc0004149a8 sp=0xc0004148f8 pc=0x575ce08
runtime.panicmem(...)
runtime/panic.go:262
...
runtime.chansend(...)
runtime/chan.go:283
runtime.chansend1(0xc003f28700, 0xc00465b570)
runtime/chan.go:161 +0x35a fp=0xc000e69fb0 sp=0xc000e69f40 pc=0x56ea61a
cmd/compile/internal/gc.compileFunctions.func3.1()
cmd/compile/internal/gc/compile.go:172 +0x3f fp=0xc000e69fe0 sp=0xc000e69fb0 pc=0x603a83f
runtime.goexit({})
runtime/asm_amd64.s:1700 +0x1 fp=0xc000e69fe8 sp=0xc000e69fe0 pc=0x57647c1
created by cmd/compile/internal/gc.compileFunctions.func3 in goroutine 5
cmd/compile/internal/gc/compile.go:170 +0x247
— [watchflakes](https://go.dev/wiki/Watchflakes)
|
NeedsInvestigation
|
low
|
Critical
|
2,794,806,571
|
flutter
|
google_sign_in Error PlatformException(sign_in_failed, O3.a: 10: , null, null)
|
Flutter Doctor:
```
[✓] Flutter (Channel stable, 3.27.1, on macOS 15.2 24C101 darwin-arm64, locale en-US)
• Flutter version 3.27.1 on channel stable at /opt/homebrew/Caskroom/flutter/3.24.4/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 17025dd882 (5 weeks ago), 2024-12-17 03:23:09 +0900
• Engine revision cb4b5fff73
• Dart version 3.6.0
• DevTools version 2.40.2
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/mrcse/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16B40
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-79915917-b509.11)
[✓] VS Code (version 1.96.3)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.102.0
[✓] Connected device (5 available)
• TECNO KD7 (mobile) • 192.168.1.19:5555 • android-arm64 • Android 10 (API 29)
• 23129RAA4G (mobile) • 192.168.1.8:40957 • android-arm64 • Android 15 (API 35)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.2 24C101 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.2 24C101 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.265
! Error: Browsing on the local area network for Jamshid’s iPhone. Ensure the device is unlocked and attached with a cable or associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
• All expected network resources are available.
```
Code:
````dart
Future<String?> signinWithGoogle() async {
try {
print("Google Sign in Start");
final GoogleSignIn googleSignIn = GoogleSignIn(
clientId: Platform.isAndroid
? dotenv.env['ANDROID_CLIENT_ID']
: dotenv.env['IOS_CLIENT_ID'],
scopes: [
'https://www.googleapis.com/auth/userinfo.email',
'https://www.googleapis.com/auth/userinfo.profile',
],
);
final GoogleSignInAccount? googleUser = await googleSignIn.signIn();
if (googleUser == null) {
_logger.e("Google Sign-In canceled by the user.");
return null; // User canceled the login
}
final GoogleSignInAuthentication googleAuth =
await googleUser.authentication;
// Return the idToken to send to your backend
return googleAuth.idToken;
} catch (e) {
_logger.e("Error during Google Sign-In: $e");
print(e);
return null;
}
}
````
<img width="917" alt="Image" src="https://github.com/user-attachments/assets/cd03af2f-1ce8-48af-aef1-2b612561e53b" />
I am not using Firebase, so I only added the SHA-1 key for the OAuth 2.0 client ID.
|
waiting for customer response,in triage
|
low
|
Critical
|
2,794,831,368
|
vscode
|
Copilot stuck after website authentication
|
Type: <b>Bug</b>
I click on "Sign in to GitHub.com", sign in on the website, and get redirected to VS Code, but then nothing happens.
The prompt "Ask Copilot Sign in with GitHub to use GitHub Copilot, your AI pair programmer." is still there, the same display as before the login, and I am asked to click "Sign in to GitHub.com" again.
This is probably an error due to my using VS Code via Remote SSH on an HPC (high-performance computing) system.
I replaced my user name with \<user\> and the real path to my home directory with \~/
Remote Machine HPC
lsb_release -a
LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID: RedHatEnterprise
Description: Red Hat Enterprise Linux release 8.8 (Ootpa)
Release: 8.8
Codename: Ootpa
##########################################
Output GitHub Copilot:
2025-01-17 07:37:44.497 [warning] [certificates] Failed to parse certificate # DWD Prod CA 2
Error: error:0480006C:PEM routines::no start line
at new X509Certificate (node:internal/crypto/x509:119:21)
at \~/.vscode-server/extensions/github.copilot-1.257.0/lib/src/network/certificateReaders.ts:79:36
at Array.filter (<anonymous>)
at nse.removeExpiredCertificates (\~/.vscode-server/extensions/github.copilot-1.257.0/lib/src/network/certificateReaders.ts:77:32)
at nse.getAllRootCAs (\~/.vscode-server/extensions/github.copilot-1.257.0/lib/src/network/certificateReaders.ts:68:38)
at Tge.createSecureContext (\~/.vscode-server/extensions/github.copilot-1.257.0/lib/src/network/certificates.ts:47:23) {
opensslErrorStack: [
'error:0688010A:asn1 encoding routines::nested asn1 error',
'error:06800066:asn1 encoding routines::bad object header',
'error:0680009B:asn1 encoding routines::too long'
],
library: 'PEM routines',
reason: 'no start line',
code: 'ERR_OSSL_PEM_NO_START_LINE'
}
2025-01-17 07:37:44.497 [info] [certificates] Removed 4 expired certificates
2025-01-17 07:37:45.116 [info] [fetcher] Using Helix fetcher, Electron fetcher is not available.
2025-01-17 07:37:45.605 [info] [fetcher] Using Helix fetcher, Electron fetcher is not available.
2025-01-17 07:37:45.606 [info] [code-referencing] Public code references are enabled.
2025-01-17 07:54:46.260 [info] [fetcher] Using Helix fetcher, Electron fetcher is not available.
2025-01-17 07:54:46.260 [info] [code-referencing] Public code references are enabled.
2025-01-17 07:54:46.288 [info] [fetcher] Using Helix fetcher, Electron fetcher is not available.
2025-01-17 07:54:46.288 [info] [code-referencing] Public code references are enabled.
2025-01-17 07:54:46.323 [info] [fetcher] Using Helix fetcher, Electron fetcher is not available.
2025-01-17 07:54:46.323 [info] [code-referencing] Public code references are enabled.
2025-01-17 07:56:55.499 [info] [fetcher] Using Helix fetcher, Electron fetcher is not available.
2025-01-17 07:56:55.499 [info] [code-referencing] Public code references are enabled.
2025-01-17 07:56:55.520 [info] [fetcher] Using Helix fetcher, Electron fetcher is not available.
2025-01-17 07:56:55.520 [info] [code-referencing] Public code references are enabled.
2025-01-17 07:56:55.526 [info] [fetcher] Using Helix fetcher, Electron fetcher is not available.
2025-01-17 07:56:55.526 [info] [code-referencing] Public code references are enabled.
########################
Output GitHub Copilot Chat
2025-01-17 07:37:42.700 [info] Can't use the Electron fetcher in this environment.
2025-01-17 07:37:42.700 [info] Using the Node fetch fetcher.
2025-01-17 07:37:42.700 [info] Initializing Git extension service.
2025-01-17 07:37:42.700 [info] Successfully activated the vscode.git extension.
2025-01-17 07:37:42.700 [info] Enablement state of the vscode.git extension: true.
2025-01-17 07:37:42.700 [info] Successfully registered Git commit message provider.
2025-01-17 07:37:44.937 [info] Logged in as <user>
2025-01-17 07:37:45.029 [error] TypeError: fetch failed
at node:internal/deps/undici/undici:13392:13
at ek._fetch (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:763:5811)
at B0.fetchCopilotToken (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:334:23966)
at B0.authFromGitHubToken (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:334:21634)
at B0._auth (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:446:3023): Failed to get copilot token
2025-01-17 07:37:45.029 [error] GitHub Copilot could not connect to server. Extension activation failed: "fetch failed"
2025-01-17 07:37:45.029 [warning] [LanguageModelAccess] LanguageModel/Embeddings are not available without auth token
2025-01-17 07:37:45.029 [error] TypeError: fetch failed
at node:internal/deps/undici/undici:13392:13
at ek._fetch (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:763:5811)
at B0.fetchCopilotToken (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:334:23966)
at B0.authFromGitHubToken (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:334:21634)
at B0._auth (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:446:3023)
2025-01-17 07:37:45.029 [warning] [LanguageModelAccess] LanguageModel/Embeddings are not available without auth token
2025-01-17 07:37:45.029 [error] TypeError: fetch failed
at node:internal/deps/undici/undici:13392:13
at ek._fetch (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:763:5811)
at B0.fetchCopilotToken (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:334:23966)
at B0.authFromGitHubToken (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:334:21634)
at B0._auth (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:446:3023)
2025-01-17 07:37:45.029 [info] activationBlocker from 'languageModelAccess' took for 2758ms
2025-01-17 07:52:45.233 [info] Logged in as <user>
2025-01-17 07:52:45.250 [warning] [LanguageModelAccess] LanguageModel/Embeddings are not available without auth token
2025-01-17 07:52:45.250 [error] TypeError: fetch failed
at node:internal/deps/undici/undici:13392:13
at ek._fetch (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:763:5811)
at B0.fetchCopilotToken (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:334:23966)
at B0.authFromGitHubToken (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:334:21634)
at B0._auth (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:446:3023)
at B0._authShowWarnings (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:446:3203)
at B0.getCopilotToken (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:446:2438)
at dF.getCopilotToken (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:334:26267)
at Y0._getAuthSession (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:488:2825)
at Object.A [as task] (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:488:640)
at xC._processQueue (~/.vscode-server/extensions/github.copilot-chat-0.23.2/dist/extension.js:487:1101)
2025-01-17 07:54:46.175 [info] Logged in as <user>
2025-01-17 07:56:55.503 [info] Logged in as <user>
VS Code version: Code 1.96.4 (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Windows_NT x64 10.0.19045
Modes:
Remote OS version: Linux x64 4.18.0-477.15.1.el8_8.x86_64
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i5-5200U CPU @ 2.20GHz (4 x 2195)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|7.92GB (2.10GB free)|
|Process Argv|--crash-reporter-id 0946425e-424a-4684-b202-1aab6b06ca43|
|Screen Reader|no|
|VM|0%|
|Item|Value|
|---|---|
|Remote|SSH: rcl|
|OS|Linux x64 4.18.0-477.15.1.el8_8.x86_64|
|CPUs|AMD EPYC 7502 32-Core Processor (64 x 3348)|
|Memory (System)|250.44GB (149.05GB free)|
|VM|0%|
</details><details><summary>Extensions (15)</summary>
Extension|Author (truncated)|Version
---|---|---
jupyter-keymap|ms-|1.1.2
remote-ssh|ms-|0.116.1
remote-ssh-edit|ms-|0.87.0
remote-wsl|ms-|0.88.5
remote-explorer|ms-|0.4.3
pdf|tom|1.2.2
linter-gfortran|for|3.4.2024101621
chatgpt-vscode|gen|0.0.13
copilot|Git|1.257.0
copilot-chat|Git|0.23.2
bash-ide-vscode|mad|1.43.0
debugpy|ms-|2024.14.0
python|ms-|2024.22.2
vscode-pylance|ms-|2024.12.1
cpptools|ms-|1.22.11
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
2f103344:31071589
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1cf:31157160
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
390bf810:31215807
6074i472:31201624
dwoutputs:31217127
```
</details>
<!-- generated by issue reporter -->
|
info-needed,network
|
low
|
Critical
|
2,794,850,960
|
go
|
runtime: increased memory usage in 1.23 with AzCopy
|
### Go version
go 1.23.1
### Output of `go env` in your module/workspace:
```shell
go env
set GO111MODULE=
set GOARCH=amd64
set GOBIN=
set GOCACHE=C:\Users\dphulkar\AppData\Local\go-build
set GOENV=C:\Users\dphulkar\AppData\Roaming\go\env
set GOEXE=.exe
set GOEXPERIMENT=
set GOFLAGS=
set GOHOSTARCH=amd64
set GOHOSTOS=windows
set GOINSECURE=
set GOMODCACHE=C:\Users\dphulkar\go\pkg\mod
set GONOPROXY=
set GONOSUMDB=
set GOOS=windows
set GOPATH=C:\Users\dphulkar\go
set GOPRIVATE=
set GOPROXY=https://proxy.golang.org,direct
set GOROOT=C:\Users\dphulkar\go\pkg\mod\golang.org\toolchain@v0.0.1-go1.23.1.windows-amd64
set GOSUMDB=sum.golang.org
set GOTMPDIR=
set GOTOOLCHAIN=auto
set GOTOOLDIR=C:\Users\dphulkar\go\pkg\mod\golang.org\toolchain@v0.0.1-go1.23.1.windows-amd64\pkg\tool\windows_amd64
set GOVCS=
set GOVERSION=go1.23.1
set GODEBUG=
set GOTELEMETRY=local
set GOTELEMETRYDIR=C:\Users\dphulkar\AppData\Roaming\go\telemetry
set GCCGO=gccgo
set GOAMD64=v1
set AR=ar
set CC=gcc
set CXX=g++
set CGO_ENABLED=0
set GOMOD=C:\Users\dphulkar\azure-storage-azcopy\go.mod
set GOWORK=
set CGO_CFLAGS=-O2 -g
set CGO_CPPFLAGS=
set CGO_CXXFLAGS=-O2 -g
set CGO_FFLAGS=-O2 -g
set CGO_LDFLAGS=-O2 -g
set PKG_CONFIG=pkg-config
set GOGCCFLAGS=-m64 -fno-caret-diagnostics -Qunused-arguments -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=C:\Users\dphulkar\AppData\Local\Temp\go-build1895105759=/tmp/go-build -gno-record-gcc-switches
```
### What did you do?
- Used AzCopy version 10.27.0, built with Go version 1.23.1, to copy a dataset with the following characteristics:
1. Total Data Size: 567 GB
2. Total Files: 1,415,230 files
- Observed memory usage behavior during the operation.
- Customers also reported similar memory issues when using AzCopy versions built with **Go 1.23.1**: [Issue #2901](https://github.com/Azure/azure-storage-azcopy/issues/2901)
- To identify the root cause, multiple experiments were conducted:
1. AzCopy 10.26.0 (built with Go 1.22.x) showed stable memory usage (~54% RAM).
2. AzCopy 10.27.0 (built with Go 1.23.1) exhibited a gradual memory increase, peaking at 96%.
3. Downgrading the Go runtime in AzCopy 10.27.x from 1.23.1 to 1.22.5/1.22.7 mitigated the issue, with memory usage stabilizing (~54%).
4. Applying a workaround by setting `runtime.MemProfileRate = 0` in AzCopy 10.27.0 (built with Go 1.23.1) failed to resolve the problem; memory usage still peaked at 99%.
5. Tried setting the environment variable `GODEBUG=profstackdepth=32` to limit the profiling stack depth, but memory usage remained high, peaking at 97% RAM.
6. Also experimented with lowering the `profstackdepth` value further, but it did not resolve the issue.
### What did you see happen?
- Memory Usage: Gradual increase, peaking at 96% RAM.
- CPU Usage: Fluctuating between 51% and 100%.
- Memory Profile: Did not stabilize, leading to high resource consumption over time.
**Observations:**
1. Significant memory usage difference compared to prior versions (e.g., AzCopy 10.26).
2. Downgrading the Go runtime from version 1.23.1 to 1.22.5 or 1.22.7 mitigated the issue, with memory usage stabilizing around 54%.
3. The workaround of setting `runtime.MemProfileRate = 0` did not resolve the issue, as memory usage still reached 99% RAM.
### What did you expect to see?
- Consistent memory usage profile, similar to AzCopy 10.26 or when using Go 1.22.5/1.22.7.
- Stabilized memory usage without a gradual increase over time.
|
Performance,WaitingForInfo,NeedsInvestigation,BugReport
|
low
|
Critical
|
2,794,868,896
|
godot
|
Manually activating the game window requires pressing "Input" or other tabs in the embedded game window
|
### Tested versions
v4.4.beta1.official [d33da79d3]
### System information
Windows 11
### Issue description
When the game is running and you switch between the "Input," "2D," or "3D" tabs, the game window loses focus.
Example: you select "2D" and then switch to "Input" to control the game. At this point, keyboard input no longer goes directly to the game window but to the editor; after selecting "Input," you need to click on the embedded game window again to redirect keyboard input to the game.
This behavior is somewhat inconvenient. Could an option be added to automatically activate the game window when "Input" is selected?
As shown in the video, after activating "2D" and then "Input," I attempt to control the character, but the keyboard input is still directed to the editor. To regain control of the character, I have to click on the embedded game window again.
https://github.com/user-attachments/assets/b85e6bcb-1711-421e-ba01-21239b29bbe1
### Steps to reproduce
See the video
### Minimal reproduction project (MRP)
...
|
bug,topic:editor,usability
|
low
|
Minor
|
2,794,879,853
|
transformers
|
`pipeline` AttributeError with `torch.nn.DataParallel`
|
### System Info
- `transformers` version: 4.48.0
- Platform: Linux-6.5.0-35-generic-x86_64-with-glibc2.35
- Python version: 3.11.5
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.4.3
- Accelerate version: 0.33.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: yes
- Using GPU in script?: yes
- GPU type: NVIDIA RTX A6000
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Hello,
I am finetuning a `BertForSequenceClassification` after which point I would like to test it using `pipelines`.
However, since I have multiple GPUs, I use `torch.nn.DataParallel` to wrap it in the following way:
```python
self.model = torch.nn.DataParallel(
module=BertForSequenceClassification.from_pretrained(
pretrained_model_name_or_path=self.config.embedding_model_file.model_name,
cache_dir=Path(self.config.embedding_model_file.cache_dir),
num_labels=len(self.datasets.train.unique_classes),
id2label={
idx: label
for idx, label in enumerate(self.datasets.train.unique_classes)
},
label2id={
label: idx
for idx, label in enumerate(self.datasets.train.unique_classes)
},
torch_dtype=self.config.training_params.torch_dtype,
).to(self.device)
)
```
and then try to use it for inference via:
```python
pipeline(
task="text-classification",
model=self.model,
tokenizer=self.datasets.test.tokenizer,
device=self.device,
top_k=self.config.training_params.top_k,
torch_dtype=self.config.training_params.torch_dtype,
)
```
This worked when I simply had the `BertForSequenceClassification` instance but now with the `DataParallel` wrapping over it I get:
```
File "/home/xx/miniconda3/envs/xxx/lib/python3.11/site-packages/transformers/pipelines/__init__.py", line 950, in pipeline
model_config = model.config
^^^^^^^^^^^^
File "/home/xx/miniconda3/envs/xxx/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1695, in __getattr__
raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'DataParallel' object has no attribute 'config'
```
What is the recommended way to handle this case? Do I have to unwrap the model from the `DataParallel` wrapper for inference?
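A common general PyTorch idiom (not an official `transformers` recommendation) is to unwrap the `DataParallel` wrapper before constructing the pipeline, since the wrapper hides attributes like `.config`. `unwrap_model` below is a hypothetical helper:

```python
def unwrap_model(model):
    """Return the underlying module if model is a DataParallel/DDP
    wrapper (both expose the wrapped model as `.module`), otherwise
    return the model unchanged."""
    return model.module if hasattr(model, "module") else model


# Usage sketch with the names from this report:
# pipeline(
#     task="text-classification",
#     model=unwrap_model(self.model),
#     tokenizer=self.datasets.test.tokenizer,
#     ...
# )
```

The unwrapped model still holds the fine-tuned weights, so nothing is lost by dropping the wrapper for single-stream pipeline inference.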
### Expected behavior
Expected behavior is for the `pipeline` call to not throw an Exception.
|
bug
|
low
|
Critical
|
2,794,884,700
|
flutter
|
[Shared Preferences] Kotlin version error in latest shared_preferences_android version (2.4.1)
|
### Steps to reproduce
- Create a new project including Android platform.
- Install shared preferences plugin
- Be sure the latest version of the plugin "shared_preferences_android' is set to "2.4.1" (check on the pubspec.lock file)
- Launch an integration test.
### Expected results
Launch will be successful
### Actual results
Error in console and app won't launch at all:
```
Running Gradle task 'assembleDebug'...
C:\Users\dimit\AppData\Local\Pub\Cache\hosted\pub.dev\shared_preferences_android-2.4.1\android\src\main\java\io\flutter\plugins\sharedpreferences\LegacySharedPreferencesPlugin.java:200: error: cannot find symbol
new StringListObjectInputStream(new ByteArrayInputStream(Base64.decode(listString, 0)));
^
symbol: class StringListObjectInputStream
location: class ListEncoder
1 error
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':shared_preferences_android:compileDebugJavaWithJavac'.
> Compilation failed; see the compiler error output for details.
* Try:
> Run with --info option to get more log output.
> Run with --scan to get full insights.
BUILD FAILED in 2m 24s
```
### Code sample
Just launch a very simple integration test.
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.27.2, on Microsoft Windows [version 10.0.26100.2605], locale fr-FR)
• Flutter version 3.27.2 on channel stable at C:\Users\dimit\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 68415ad1d9 (4 days ago), 2025-01-13 10:22:03 -0800
• Engine revision e672b006cb
• Dart version 3.6.1
• DevTools version 2.40.2
[✓] Windows Version (Installed version of Windows is version 10 or higher)
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at C:\Users\dimit\AppData\Local\Android\sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: C:\Program Files\Java\jdk-19\bin\java
• Java version Java(TM) SE Runtime Environment (build 19.0.2+7-44)
• All Android licenses accepted.
[✗] Chrome - develop for the web (Cannot find Chrome executable at .\Google\Chrome\Application\chrome.exe)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[✗] Visual Studio - develop Windows apps
✗ Visual Studio not installed; this is necessary to develop Windows apps.
Download at https://visualstudio.microsoft.com/downloads/.
Please install the "Desktop development with C++" workload, including all of its default components
[✓] Android Studio (version 2024.2)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 21.0.3+-12282718-b509.11)
[✓] VS Code (version 1.96.4)
• VS Code at C:\Users\dimit\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.102.0
[✓] Connected device (3 available)
• SM G950F (mobile) • ce051715db73da0601 • android-arm64 • Android 9 (API 28)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [version 10.0.26100.2605]
• Edge (web) • edge • web-javascript • Microsoft Edge 131.0.2903.112
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 2 categories.
```
</details>
### Temp workaround
Downgrade the android plugin to 2.4.0.
Add in your `pubspec.yaml` file:
```
dependency_overrides:
shared_preferences_android: '2.4.0'
```
|
waiting for customer response,in triage
|
low
|
Critical
|
2,794,888,056
|
rust
|
ICE: "Missing value for constant, but no error reported?" with unresolvable const due to trivial bounds
|
### Code
[playground](https://play.rust-lang.org/?version=nightly&mode=debug&edition=2021&gist=a7a73b9b5a81b53a17feb7f8ff506a8e)
```Rust
#![feature(trivial_bounds)]
trait Project {
const ASSOC: usize;
}
fn foo()
where
(): Project,
{
[(); <() as Project>::ASSOC];
}
```
### Rust Version
```Shell
`rustc +nightly --version --verbose`:
rustc 1.86.0-nightly (99db2737c 2025-01-16)
binary: rustc
commit-hash: 99db2737c91d1e4b36b2ffc17dcda5878bcae625
commit-date: 2025-01-16
host: x86_64-unknown-linux-gnu
release: 1.86.0-nightly
LLVM version: 19.1.7
```
### Current error output
```Shell
error: internal compiler error: Missing value for constant, but no error reported?
--> tests/ui/layout/uneval-const-rigid.rs:11:5
|
11 | [(); <() as Project>::ASSOC];
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
note: delayed at compiler/rustc_trait_selection/src/traits/const_evaluatable.rs:132:44 - disabled backtrace
--> tests/ui/layout/uneval-const-rigid.rs:11:5
|
11 | [(); <() as Project>::ASSOC];
|
```
### Backtrace
```Shell
error: internal compiler error: Missing value for constant, but no error reported?
--> tmp.rs:11:5
|
11 | [(); <() as Project>::ASSOC];
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
note: delayed at compiler/rustc_trait_selection/src/traits/const_evaluatable.rs:132:44
0: <rustc_errors::DiagCtxtInner>::emit_diagnostic
1: <rustc_errors::DiagCtxtHandle>::emit_diagnostic
2: <rustc_span::ErrorGuaranteed as rustc_errors::diagnostic::EmissionGuarantee>::emit_producing_guarantee
3: <rustc_errors::DiagCtxtHandle>::span_delayed_bug::<rustc_span::span_encoding::Span, &str>
4: rustc_trait_selection::traits::const_evaluatable::is_const_evaluatable.cold
5: <rustc_trait_selection::traits::fulfill::FulfillProcessor as rustc_data_structures::obligation_forest::ObligationProcessor>::process_obligation
6: <rustc_data_structures::obligation_forest::ObligationForest<rustc_trait_selection::traits::fulfill::PendingPredicateObligation>>::process_obligations::<rustc_trait_selection::traits::fulfill::FulfillProcessor>
7: rustc_hir_typeck::typeck_with_fallback::{closure#0}
8: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::typeck::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 8]>>
9: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_data_structures::vec_cache::VecCache<rustc_span::def_id::LocalDefId, rustc_middle::query::erase::Erased<[u8; 8]>, rustc_query_system::dep_graph::graph::DepNodeIndex>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, false>
10: rustc_query_impl::query_impl::typeck::get_query_non_incr::__rust_end_short_backtrace
11: <rustc_middle::hir::map::Map>::par_body_owners::<rustc_hir_analysis::check_crate::{closure#4}>::{closure#0}
12: rustc_hir_analysis::check_crate
13: rustc_interface::passes::run_required_analyses
14: rustc_interface::passes::analysis
15: rustc_query_impl::plumbing::__rust_begin_short_backtrace::<rustc_query_impl::query_impl::analysis::dynamic_query::{closure#2}::{closure#0}, rustc_middle::query::erase::Erased<[u8; 0]>>
16: rustc_query_system::query::plumbing::try_execute_query::<rustc_query_impl::DynamicConfig<rustc_query_system::query::caches::SingleCache<rustc_middle::query::erase::Erased<[u8; 0]>>, false, false, false>, rustc_query_impl::plumbing::QueryCtxt, false>
17: rustc_query_impl::query_impl::analysis::get_query_non_incr::__rust_end_short_backtrace
18: rustc_interface::passes::create_and_enter_global_ctxt::<core::option::Option<rustc_interface::queries::Linker>, rustc_driver_impl::run_compiler::{closure#0}::{closure#2}>::{closure#2}::{closure#0}
19: rustc_interface::interface::run_compiler::<(), rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}
20: std::sys::backtrace::__rust_begin_short_backtrace::<rustc_interface::util::run_in_thread_with_globals<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<(), rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>
21: <<std::thread::Builder>::spawn_unchecked_<rustc_interface::util::run_in_thread_with_globals<rustc_interface::util::run_in_thread_pool_with_globals<rustc_interface::interface::run_compiler<(), rustc_driver_impl::run_compiler::{closure#0}>::{closure#1}, ()>::{closure#0}, ()>::{closure#0}::{closure#0}, ()>::{closure#1} as core::ops::function::FnOnce<()>>::call_once::{shim:vtable#0}
22: std::sys::pal::unix::thread::Thread::new::thread_start
23: <unknown>
24: <unknown>
```
### Anything else?
This error was found during this [PR](https://github.com/rust-lang/rust/pull/135158), in the addition of `TooGeneric` as `LayoutError`.
Updates on this issue will be added later if found.
cc: @lukas-code
|
I-ICE,T-compiler,C-bug,A-const-eval,F-trivial_bounds
|
low
|
Critical
|
2,794,920,055
|
PowerToys
|
Mouse Utilities >> Find My Mouse
|
### Description of the new feature / enhancement
Under Mouse Utilities >> Find My Mouse, is it possible to have a rectangular highlighter? This would add an option of changing "Spotlight Radius (px)" to something like "Spotlight Dimension (Len * Wid)" for a rectangular region.
This is particularly useful for creating educational video tutorials where we need to highlight a section with a rectangle instead of a circle.
Thank you.
### Scenario when this would be used?
This is particularly useful for creating educational video tutorials where we need to highlight a section with a rectangle instead of a circle.
### Supporting information
_No response_
|
Needs-Triage
|
low
|
Minor
|
2,794,931,762
|
godot
|
Parse error at runtime even though everything is fine at design time (OpenXR app on Meta Quest 3)
|
### Tested versions
Reproducible in 4.4 dev 3 on Meta Quest 3
### System information
Meta Quest 3, V71.0.0
### Issue description
An OpenXR application for Meta Quest 3 aborts with the error "Could not find type "OpenXRFbSpatialEntity" in the current scope", although everything is fine in the editor and "OpenXRFbSpatialEntity" is found via autocomplete.
Even minimal test projects (see appendix) show this error.
Has anyone had similar experiences?
### Steps to reproduce
Load the project into Meta Quest 3, for example via USB into the "Documents" folder.
Start Godot in Meta Quest 3 and import the project.
Load the plugin "Godot OpenXR Vendors plugin for Godot 4.3" from the assets. (It's too big to paste it here.)
Start the project.
### Minimal reproduction project (MRP)
The project requires the "Godot OpenXR Vendors plugin for Godot 4.3", which would make the file too large here.
The actual project is extremely small and consists of only two very small scenes.
[test.zip](https://github.com/user-attachments/files/18451570/test.zip)
Here are the messages from the Godot log file:
SCRIPT ERROR: Parse Error: Could not find type "OpenXRFbSpatialEntity" in the current scope.
at: GDScript::reload (res://main/DefaultScene.gd:7)
SCRIPT ERROR: Parse Error: Identifier "OpenXRFbSpatialEntity" not declared in the current scope.
at: GDScript::reload (res://main/DefaultScene.gd:11)
SCRIPT ERROR: Parse Error: Identifier "OpenXRFbSpatialEntity" not declared in the current scope.
at: GDScript::reload (res://main/DefaultScene.gd:26)
ERROR: Failed to load script "res://main/DefaultScene.gd" with error "Parse error".
at: load (modules/gdscript/gdscript.cpp:3005)
ERROR: Cannot get class 'OpenXRFbSceneManager'.
at: _instantiate_internal (core/object/class_db.cpp:550)
WARNING: Node OpenXRFbSceneManager of type OpenXRFbSceneManager cannot be created. A placeholder will be created instead.
at: instantiate (scene/resources/packed_scene.cpp:277)
|
bug,needs testing,topic:xr
|
low
|
Critical
|
2,794,937,870
|
rust
|
Inconsistent lifetime inference with return `impl Future`/`BoxFuture` and higher ranked lifetimes
|
<!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
Compiling base line version: https://github.com/weiznich/diesel_async/blob/buggy_lifetimes/src/run_query_dsl/mod.rs#L783-L817
I encountered several inconsistent errors while working on the linked code:
Variant 1: (Remove the `async move` block and box the future directly)
```rust
impl<'b, Changes, Output, Tab, V> UpdateAndFetchResults<Changes, Output>
for crate::AsyncPgConnection
where
Output: Send + 'static,
Changes:
Copy + AsChangeset<Target = Tab> + Send + diesel::associations::Identifiable<Table = Tab>,
Tab: diesel::Table + diesel::query_dsl::methods::FindDsl<Changes::Id> + 'b,
diesel::dsl::Find<Tab, Changes::Id>: IntoUpdateTarget<Table = Tab, WhereClause = V>,
diesel::query_builder::UpdateStatement<Tab, V, Changes::Changeset>:
diesel::query_builder::AsQuery,
diesel::dsl::Update<Changes, Changes>: methods::LoadQuery<'b, Self, Output>,
V: Send + 'b,
Changes::Changeset: Send + 'b,
Tab::FromClause: Send,
{
fn update_and_fetch<'life0, 'async_trait>(
&'life0 mut self,
changeset: Changes,
) -> BoxFuture<'async_trait, QueryResult<Output>>
where
Changes: 'async_trait,
Changes::Changeset: 'async_trait,
'life0: 'async_trait,
Self: 'async_trait,
{
diesel::update(changeset)
.set(changeset)
.get_result(self)
.boxed()
}
}
```
Results in the following compilation error:
```
error: lifetime may not live long enough
--> src/run_query_dsl/mod.rs:809:9
|
784 | impl<'b, Changes, Output, Tab, V> UpdateAndFetchResults<Changes, Output>
| -- lifetime `'b` defined here
...
799 | fn update_and_fetch<'life0, 'async_trait>(
| ------------ lifetime `'async_trait` defined here
...
809 | / diesel::update(changeset)
810 | | .set(changeset)
811 | | .get_result(self)
812 | | .boxed()
| |____________________^ method was supposed to return data with lifetime `'async_trait` but it is returning data with lifetime `'b`
|
= help: consider adding the following bound: `'b: 'async_trait`
```
Variant 2: Use `impl Future` for the return type
```rust
impl<'b, Changes, Output, Tab, V> UpdateAndFetchResults<Changes, Output>
for crate::AsyncPgConnection
where
Output: Send + 'static,
Changes:
Copy + AsChangeset<Target = Tab> + Send + diesel::associations::Identifiable<Table = Tab>,
Tab: diesel::Table + diesel::query_dsl::methods::FindDsl<Changes::Id> + 'b,
diesel::dsl::Find<Tab, Changes::Id>: IntoUpdateTarget<Table = Tab, WhereClause = V>,
diesel::query_builder::UpdateStatement<Tab, V, Changes::Changeset>:
diesel::query_builder::AsQuery,
diesel::dsl::Update<Changes, Changes>: methods::LoadQuery<'b, Self, Output>,
V: Send + 'b,
Changes::Changeset: Send + 'b,
Tab::FromClause: Send,
{
fn update_and_fetch<'life0, 'async_trait>(
&'life0 mut self,
changeset: Changes,
) -> impl Future<Output = QueryResult<Output>> + Send + 'async_trait
where
Changes: 'async_trait,
Changes::Changeset: 'async_trait,
'life0: 'async_trait,
Self: 'async_trait,
{
async move {
diesel::update(changeset)
.set(changeset)
.get_result(self)
.await
}
.boxed()
}
}
```
Error:
```
error[E0207]: the lifetime parameter `'b` is not constrained by the impl trait, self type, or predicates
--> src/run_query_dsl/mod.rs:784:6
|
784 | impl<'b, Changes, Output, Tab, V> UpdateAndFetchResults<Changes, Output>
| ^^ unconstrained lifetime parameter
```
I would expect all three code variations to be the "same" in terms of involved lifetimes, but two of them do not compile with rather surprising errors.
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.86.0-nightly (99db2737c 2025-01-16)
binary: rustc
commit-hash: 99db2737c91d1e4b36b2ffc17dcda5878bcae625
commit-date: 2025-01-16
host: x86_64-unknown-linux-gnu
release: 1.86.0-nightly
LLVM version: 19.1.7
```
<!--
Include a backtrace in the code block by setting `RUST_BACKTRACE=1` in your
environment. E.g. `RUST_BACKTRACE=1 cargo build`.
-->
<details><summary>Backtrace</summary>
<p>
```
<backtrace>
```
</p>
</details>
|
A-impl-trait,C-bug,needs-triage
|
low
|
Critical
|
2,794,945,172
|
flutter
|
[pigeon] Support for default values in data classes
|
### Steps to reproduce
1. Use pigeon 22.7.2
2. Create pigeon file `pigeons/messages.dart ` containing:
```dart
@ConfigurePigeon(
PigeonOptions(
dartOut: 'lib/src/api.g.dart',
kotlinOut:
'android/src/main/kotlin/com/example/api/api/api.g.kt',
swiftOut: 'ios/Classes/api.g.swift',
kotlinOptions: KotlinOptions(package: 'com.example.api.api'),
),
)
class Something {
final int number;
Something({this.number = 5});
}
```
3. run `dart run pigeon --input pigeons/messages.dart `
According to closed [issue](https://github.com/flutter/flutter/issues/98448) and merged [pull-request](https://github.com/flutter/packages/pull/5355) (Adds default values for class constructors and host API methods.) it should work.
### Expected results
Dart output:
```dart
class Something {
Something({
this.number = 5,
});
int number;
Object encode() {
return <Object?>[
number,
];
}
static Something decode(Object result) {
result as List<Object?>;
return Something(
number: result[0]! as int,
);
}
}
```
Swift output:
```swift
struct Something {
var number: Int64 = 5
// swift-format-ignore: AlwaysUseLowerCamelCase
static func fromList(_ pigeonVar_list: [Any?]) -> Something? {
let number = pigeonVar_list[0] as! Int64
return Something(
number: number
)
}
func toList() -> [Any?] {
return [
number
]
}
}
```
Kotlin output:
```kotlin
data class Something(
val number: Long = 5
) {
companion object {
fun fromList(pigeonVar_list: List<Any?>): Something {
val number = pigeonVar_list[0] as Long
return Something(number)
}
}
fun toList(): List<Any?> {
return listOf(
number,
)
}
}
```
### Actual results
Dart output:
```dart
class Something {
Something({
required this.number,
});
int number;
Object encode() {
return <Object?>[
number,
];
}
static Something decode(Object result) {
result as List<Object?>;
return Something(
number: result[0]! as int,
);
}
}
```
Swift output:
```swift
struct Something {
var number: Int64
// swift-format-ignore: AlwaysUseLowerCamelCase
static func fromList(_ pigeonVar_list: [Any?]) -> Something? {
let number = pigeonVar_list[0] as! Int64
return Something(
number: number
)
}
func toList() -> [Any?] {
return [
number
]
}
}
```
Kotlin output:
```kotlin
data class Something (
val number: Long
)
{
companion object {
fun fromList(pigeonVar_list: List<Any?>): Something {
val number = pigeonVar_list[0] as Long
return Something(number)
}
}
fun toList(): List<Any?> {
return listOf(
number,
)
}
}
```
### Code sample
<details open><summary>Code sample</summary>
```dart
@ConfigurePigeon(
PigeonOptions(
dartOut: 'lib/src/api.g.dart',
kotlinOut:
'android/src/main/kotlin/com/example/api/api/api.g.kt',
swiftOut: 'ios/Classes/api.g.swift',
kotlinOptions: KotlinOptions(package: 'com.example.api.api'),
),
)
class Something {
final int number;
Something({this.number = 5});
}
```
</details>
### Screenshots or Video
_No response_
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[!] Flutter (Channel master, 3.26.0-1.0.pre.168, on macOS 14.5 23F79 darwin-arm64, locale en-RU)
! Warning: `dart` on your path resolves to /opt/homebrew/Cellar/dart/3.5.2/libexec/bin/dart, which is not inside your current Flutter SDK checkout at /Users/feduke-nukem/Desktop/git-projects/flutter/flutter. Consider adding
/Users/feduke-nukem/Desktop/git-projects/flutter/flutter/bin to the front of your path.
! Upstream repository https://github.com/feduke-nukem/flutter is not a standard remote.
Set environment variable "FLUTTER_GIT_URL" to https://github.com/feduke-nukem/flutter to dismiss this error.
[!] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
! Some Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2023.3)
[✓] VS Code (version 1.96.2)
[✓] Connected device (5 available)
[✓] Network resources
! Doctor found issues in 2 categories.
```
</details>
|
c: new feature,package,team-ecosystem,p: pigeon,P2,triaged-ecosystem
|
low
|
Critical
|
2,794,988,123
|
flutter
|
[CP] Fix mouse scrolling issue on desktop and web
|
### Issue Link
https://github.com/flutter/flutter/issues/160953#issuecomment-2597744826
### Target
stable
### Cherry pick PR Link
https://github.com/flutter/flutter/pull/156190
### Changelog Description
Fix mouse scrolling problem on desktop and web in TwoDimensionalScrollView
### Impacted Users
end-customers on desktop and web
### Impact Description
The scrollbar hangs on the first scroll with the mouse pointer after a short drag distance. The problem occurs again if the user switches from the vertical to the horizontal scrollbar.
(This problem is new since around summer 2024; it worked in earlier versions without problems.)
### Workaround
- Grab the scroll handle a second time
- Use the mouse wheel to scroll
### Risk
low
### Test Coverage
no
### Validation Steps
Note: I'm not a Flutter developer, so I can't say anything about Risk and Test Coverage.
Switching to latest master fixes this problem.
|
cp: review
|
low
|
Minor
|
2,795,010,298
|
node
|
Chrome devtools console method autocompletion for node is not working
|
### Version
v22.13.0
### Platform
```text
Microsoft Windows NT 10.0.19045.0 x64
```
### Subsystem
_No response_
### What steps will reproduce the bug?
1. open Chrome 132.0.6834.84 and load chrome://inspect
2. open dedicated devtools for node
3. run "node --inspect"
4. wait for node to connect to devtools
5. enter "global." in the devtools console
6. the autocompletion method list does not pop up
### How often does it reproduce? Is there a required condition?
always
### What is the expected behavior? Why is that the expected behavior?
The autocompletion method list is expected to pop up as it does for browser objects and as it used to for node:

### What do you see instead?
No autocomplete method popup.
### Additional information
Since this was working in Chrome 113, it appeared to be a Chrome issue and was reported on the Chrome bug tracker, but it was closed with the following response:
https://issues.chromium.org/u/2/issues/390205856#comment5
> This is a Node.js issue. I can reproduce this down to Node 14, but with Node 12 autocompletion works as expected.
>
> The way autocompletion of object properties works is that we evaluate the input, but abort if this evaluation may cause a side effect. Without side effect, we get the object properties and offer these properties as autocomplete options.
>
> From Node 14, side effect check fails. You can easily check this by running Node with node --trace-side-effect-free-evaluate --inspect.
>
> I would suggest you file an issue on the Node issue tracker.
|
inspector
|
low
|
Critical
|
2,795,019,900
|
kubernetes
|
Reduce relist operations in client-go
|
### What would you like to be added?
Reduce the relist operations performed by the informer when encountering InternalError
### Why is this needed?
Currently, parameter `MaxInternalErrorRetryDuration` exists in the reflector and is only used in the kube-apiserver. It was introduced in this [PR](https://github.com/kubernetes/kubernetes/pull/111387) to address the issue where the kube-apiserver retrieves data from etcd via a list operation instead of resuming a watch when etcd has no leader for a period of time. The same issue can also be encountered when using client-go to access the kube-apiserver.
I encountered a live issue, and the logs show the following messages:
```shell
pkg/mod/k8s.io/client-go@v0.32.0/tools/cache/reflector.go:251: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: got short buffer with n=0, base=4092, cap=163840") has prevented the request from succeeding
```
Currently, when the reflector encounters this error, it triggers a relist operation, which is quite costly. From the conversation https://github.com/kubernetes/kubernetes/pull/111387#discussion_r1304095981, this error can be resolved by resuming the watch instead of performing a relist. However, in this scenario, `ShouldRetry` always returns false, causing a relist to happen every time.
https://github.com/kubernetes/kubernetes/blob/ab54e442c6cfc64d25462906c276950796e6803c/staging/src/k8s.io/client-go/tools/cache/reflector.go#L530-L534
At the same time, I have a question: InternalError is quite vague. Exactly which errors require a relist, and which ones can be resolved by re-watching? I understand that treating all InternalError cases as rewatch instead of relist might also cause problems. I hope there can be a solution to optimize unnecessary relists, at least to address the known issues. Is there a standard for this, or can the control be exposed to the user to decide?
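The question above boils down to an error-classification policy: which watch failures are cheap to retry in place, and which need a full relist. A minimal pure-Python sketch of such a policy is below; it is illustrative only, not the client-go API, and all names (`should_rewatch`, `RESUMABLE_MESSAGES`, the retry cap) are hypothetical.

```python
# Illustrative sketch (not the client-go API): classify watch-stream errors
# into "resume the watch" vs. "fall back to a full relist". All names here
# are hypothetical.

RESUMABLE_MESSAGES = (
    # decode errors like the short-buffer one in the log above
    "unable to decode an event from the watch stream",
)

def should_rewatch(error_message: str, retries: int, max_retries: int = 3) -> bool:
    """Return True when re-establishing the watch is likely enough;
    False when a costly relist is the safer fallback."""
    if retries >= max_retries:
        # give up on cheap retries; relist for a consistent snapshot
        return False
    return any(m in error_message for m in RESUMABLE_MESSAGES)

msg = ('watch of *v1.Pod ended with: an error on the server '
       '("unable to decode an event from the watch stream: got short buffer") '
       'has prevented the request from succeeding')
print(should_rewatch(msg, retries=0))  # known-resumable error, first attempt
print(should_rewatch(msg, retries=3))  # retries exhausted -> relist
```

A retry cap like this mirrors the spirit of `MaxInternalErrorRetryDuration`: bounded optimism before falling back to the expensive path.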
|
kind/bug,sig/api-machinery,kind/feature,needs-triage
|
low
|
Critical
|
2,795,037,805
|
godot
|
Shader displaying artifacts depending on Object scale / curvature
|
### Tested versions
v4.3.stable.official [77dcf97d8]
v4.4.beta1.official [d33da79d3]
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2070 SUPER (NVIDIA; 32.0.15.6636) - Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz (16 Threads)
### Issue description
The following shader code snippet results in some weird artifacts that seem to be dependent on the object's scale / form:
```
void fragment() {
mat3 normal_to_view_rotation = mat3(TANGENT, BINORMAL, NORMAL);
mat3 view_to_normal_rotation = inverse(normal_to_view_rotation);
vec3 vertex_relative_to_node_position = VERTEX - NODE_POSITION_VIEW;
vec3 rotatedVertex = view_to_normal_rotation * vertex_relative_to_node_position;
ALBEDO = vec3(fract(rotatedVertex.x), 0.0, 0.0);
}
```
Result, all three objects use the same Material and Shader:

The middle sphere is at default scale and displays the artifacts (either red or black spots, no in-between).
The left sphere is a duplicate of the middle one, but I fiddled a bit with the scale, so it's not exactly spherical anymore and a bit bigger; it is NOT displaying artifacts.
The right cube is at default scale and is NOT displaying artifacts.
The artifacts also happen with default-scale torus and tube from my experimentation in another project.
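One property worth noting when debugging this: for a perfectly orthonormal TBN basis, `inverse(mat3(TANGENT, BINORMAL, NORMAL))` is just the transpose, which is exact; a general `inverse()` of an interpolated, not-quite-orthonormal basis is where per-fragment precision can degrade. A pure-Python (not GLSL) check of that identity, offered as a hypothesis rather than a confirmed diagnosis of this bug:

```python
# Numeric check (pure Python, not GLSL): for an orthonormal basis M,
# transpose(M) * M == identity, so transpose acts as an exact inverse.

def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# An orthonormal basis: a 90-degree rotation about Z.
tbn = [[0.0, -1.0, 0.0],
       [1.0,  0.0, 0.0],
       [0.0,  0.0, 1.0]]

product = matmul(transpose(tbn), tbn)
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
ok = all(abs(product[i][j] - identity[i][j]) < 1e-9
         for i in range(3) for j in range(3))
print(ok)
```

If the artifacts persist with `transpose(...)` in place of `inverse(...)` in the shader, the cause is likely elsewhere (e.g. the interpolated basis itself).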
### Steps to reproduce
* create MeshInstance3D
* add SphereMesh
* add shader with the following code:
```
shader_type spatial;
render_mode unshaded, fog_disabled;
void fragment() {
mat3 normal_to_view_rotation = mat3(TANGENT, BINORMAL, NORMAL);
mat3 view_to_normal_rotation = inverse(normal_to_view_rotation);
vec3 vertex_relative_to_node_position = VERTEX - NODE_POSITION_VIEW;
vec3 rotatedVertex = view_to_normal_rotation * vertex_relative_to_node_position;
ALBEDO = vec3(fract(rotatedVertex.x), 0.0, 0.0);
}
```
### Minimal reproduction project (MRP)
[minimal_reproduction_project.zip](https://github.com/user-attachments/files/18452504/minimal_reproduction_project.zip)
|
bug,topic:rendering,topic:3d
|
low
|
Minor
|
2,795,041,016
|
tensorflow
|
Unequal width and height of stride in tf.nn.depthwise_conv2d not supported?
|
Is that correct?
If yes, how can I convert pretrained weights trained with unequal strides to a TensorFlow depthwise conv using some other ops?
Thanks!
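For reference, the operation itself is well defined for unequal strides; only the API support is in question. A pure-Python sketch (deliberately not TensorFlow, and for a single channel only) of a depthwise convolution with strides `(sh=1, sw=2)` and VALID padding, to show the output-shape arithmetic:

```python
# Pure-Python sketch (not tf.nn.depthwise_conv2d): single-channel
# depthwise convolution with unequal strides, VALID padding.

def depthwise_conv_single(x, k, sh, sw):
    H, W = len(x), len(x[0])
    kh, kw = len(k), len(k[0])
    out_h = (H - kh) // sh + 1   # VALID output height
    out_w = (W - kw) // sw + 1   # VALID output width
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(x[i * sh + di][j * sw + dj] * k[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out

x = [[float(r * 6 + c) for c in range(6)] for r in range(4)]  # 4x6 input
k = [[1.0, 0.0], [0.0, 1.0]]                                  # 2x2 kernel
y = depthwise_conv_single(x, k, sh=1, sw=2)
print(len(y), len(y[0]))  # ((4-2)//1+1, (6-2)//2+1) = (3, 3)
```

One plausible conversion path, if the framework op rejects unequal strides, is running with stride 1 and subsampling the output along the wider-stride axis; whether that is acceptable depends on padding behavior, so treat it as an assumption to verify.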
|
stat:awaiting response,type:support
|
medium
|
Minor
|
2,795,061,190
|
flutter
|
GIFs stop playing on Flutter web after about 1min
|
### Steps to reproduce
Add fireball.gif to lib
Add "lib/fireball.gif" to yaml
Run the app with a GIF.
### Expected results
The fireball should keep playing the whole time.
### Actual results
The fireball stops playing after around 30s every time on Flutter web since 3.27.x, in all browsers I have (recent Safari, recent Chrome, and recent Firefox). It doesn't stop on macOS native or on Android, and I believe not on native Linux either. Probably related to the new image handlers...
### Code sample
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'GIF error Demo',
home: const MyHomePage(title: 'GIF error Demo'),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key, required this.title});
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text(widget.title),
),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
const Text(
'Here is the fireball:',
),
Image( width: 40, height: 40, fit: BoxFit.fill, image: AssetImage( "lib/fireball.gif") ),
],
),
),
);
}
}
```

### Screenshots or Video
On MacOS native and Android, the fireball keeps being a fire
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
flutter doctor -v
```
[!] Flutter (Channel stable, 3.27.2, on macOS 15.1 24B2082 darwin-arm64, locale de-DE)
• Flutter version 3.27.2 on channel stable at /Users/robert/flutter
! The dart binary is not on your path. Consider adding /Users/robert/flutter/bin to your path.
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 68415ad1d9 (4 days ago), 2025-01-13 10:22:03 -0800
• Engine revision e672b006cb
• Dart version 3.6.1
• DevTools version 2.40.2
• If those were intentional, you can disregard the above warnings; however it is recommended to
use "git" directly to perform update checks and upgrades.
```
|
engine,a: assets,platform-web,c: rendering,has reproducible steps,team-web,found in release: 3.27,found in release: 3.28
|
low
|
Critical
|
2,795,084,500
|
electron
|
Minidumps for child process crashes
|
I added some [additional e2e tests](https://github.com/getsentry/sentry-electron/pull/1050) to the Sentry Electron SDK to test for crashes in child processes.
Firstly, I was surprised that `child-process-gone` is only triggered for Electron utility process crashes. There are no events for child processes created using the `child_process` module.
My additional tests ran across a wide range of Electron versions, going back to Electron v15.
## Does `crashReporter` result in a minidump?
| | `process_type`|macOS | Windows | Linux |
|--------------------|-------|---------|-------|---|
| Electron `utilityProcess`|`utility`|✅|✅|✅ <sup>1</sup>|
| `child_process.fork` | `node`|✅ | ✅ | ❌ |
| `child_process.exec` | |✅ | ❌ | ❌ |
- <sup>1</sup> Not working with v29.4.6. Possibly an Electron bug?
Is this the expected behaviour on all platforms? Is there any way to improve platform coverage here?
### `child_process.fork` Test
`main.js`
```ts
const path = require('path');
const child_process = require('child_process');
const { app, crashReporter } = require('electron');
crashReporter.start({
companyName: '',
ignoreSystemCrashHandler: true,
productName: app.name || app.getName(),
submitURL: 'https://f.a.k/e',
uploadToServer: false,
compress: true,
});
app.on('ready', () => {
child_process.fork(path.join(__dirname, 'child.js'));
});
```
`child.js`
```ts
const { raiseSegfault } = require('sadness-generator')
setTimeout(() => {
raiseSegfault();
}, 1000);
```
### `child_process.exec` Test
`main.js`
```ts
const path = require('path');
const child_process = require('child_process');
const { getPath } = require('crashy-cli');
const { app, crashReporter } = require('electron');
crashReporter.start({
companyName: '',
ignoreSystemCrashHandler: true,
productName: app.name || app.getName(),
submitURL: 'https://f.a.k/e',
uploadToServer: false,
compress: true,
});
app.on('ready', () => {
try {
child_process.execSync(getPath());
} catch (_) { }
});
```
|
bug :beetle:,has-repro-comment,35-x-y
|
low
|
Critical
|
2,795,085,386
|
go
|
syscall: inconsistent error messages for syscall.ESTALE across architectures
|
### Go version
go 1.23.4
### Output of `go env` in your module/workspace:
```shell
go env
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/vipinydv_google_com/.cache/go-build'
GOENV='/home/vipinydv_google_com/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/vipinydv_google_com/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/vipinydv_google_com/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/home/vipinydv_google_com/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.23.4.linux-amd64'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/home/vipinydv_google_com/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.23.4.linux-amd64/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.4'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/vipinydv_google_com/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/home/vipinydv_google_com/gcsfuse/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build342134929=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
Steps to Reproduce:
1. Write a Go program that returns the `syscall.ESTALE` error.
2. Run the program on arm64 and amd64 machines and observe the error message.
FYI: this discrepancy is not limited to these two architectures and exists on other platforms as well.
### What did you see happen?
When returning the `syscall.ESTALE` error in a Go program, the error message displayed to the user varies depending on the machine's architecture. On arm64 machines, users see "stale file handle", while on amd64 machines, they see "stale NFS file handle". This inconsistency violates the principle of consistent error reporting and contradicts the [Linux Manual Page](https://man7.org/linux/man-pages/man3/errno.3.html#:~:text=ESTALE%20Stale%20file%20handle%20%28POSIX.1%2D2001%29.%0A%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20This%20error%20can%20occur%20for%20NFS%20and%20for%20other%20filesystems), which states that `ESTALE` applies to both NFS and other filesystems.
### What did you expect to see?
The error message for `syscall.ESTALE` should be "stale file handle" on all architectures.
If needed I can raise a PR for the same.
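For a quick cross-check of what the platform C library reports (as opposed to Go's generated per-architecture tables), Python's stdlib queries libc directly:

```python
# Cross-check via the platform C library (not Go's syscall tables):
# on modern Linux glibc, strerror(ESTALE) is "Stale file handle".
import errno
import os

msg = os.strerror(errno.ESTALE)
print(msg)
```

Modern glibc returns "Stale file handle"; the "stale NFS file handle" variant in Go's amd64 table predates that wording change, which is consistent with the report above.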
|
NeedsInvestigation,compiler/runtime,BugReport
|
low
|
Critical
|
2,795,117,764
|
transformers
|
Qwen2VL exhibits significant performance differences under different attention implementations.
|
### System Info
`transformers=4.47.1 `
`pytorh=2.3.0`
`flash-attn=2.7.2`
`python=3.10`
### Who can help?
@amyeroberts @qubvel @zucchini-nlp
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I'm using the lmms-eval framework to evaluate Qwen2VL models on various benchmarks.
Here is the script:
```
python3 -m accelerate.commands.launch \
--main_process_port=28175 \
--mixed_precision=bf16 \
--num_processes=2 \
-m lmms_eval \
--model qwen2_vl_with_kvcache \
--model_args pretrained=/share/home/models/Qwen2-VL-7B-Instruct,use_flash_attention_2=true\
--tasks chartqa \
--batch_size 1 \
--log_samples \
--log_samples_suffix chartqa \
--output_path ./logs/qwen2vl/chatqa/
```
### Expected behavior
Recently, I've been using Qwen2VL-7B for evaluation under the lmms-eval framework and discovered some confusing phenomena.
Taking the ChartQA task as an example, when both the vision and LLM utilize flash-attention2, I can achieve a score of 81.56. However, when both vision and LLM use eager attention, the score drops significantly to 72.64.
To explore further, I conducted additional experiments and found that regardless of which attention implementation the vision module uses, the score remains around 82.
However, when the vision module uses flash-attention2 while the LLM employs eager attention, the score drops to just 0.0008, and the model loses its generative ability, endlessly repeating one or two words.
| LLM Attention | Vision: Flash | Vision: Eager |
|---------------|---------------|---------------|
| **Flash** | 81.56 | 82.00 |
| **Eager** | **0.0008** | 72.64 |
the model's response under 0.0008 setting:
"The value of the the the the the the the the the the the the the"
"````````````````````````````````````````````````"
"A is a person assistant. A is a person assistant. A is a person"
"The following are the the the the the the the the the the the the the"
The above results are all based on BF16 precision.
I also conducted a check regarding precision. For all modules use eager attention, I converted QKV to float to ensure that attention calculations during the forward pass were in FP32. Unfortunately, the final result remained the same as BF16 (72.64).
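A generic way to localize this kind of divergence is to compare the two attention algorithms on identical inputs outside the model. The sketch below (pure Python, not the transformers API; both function names are hypothetical) checks that a reference softmax attention and a flash-style one-pass "online softmax" agree; a large gap on controlled inputs would point at the implementation rather than the weights.

```python
# Parity check: reference softmax attention vs. the one-pass online
# softmax that flash attention uses. Algebraically identical, so the
# outputs should agree to floating-point precision.
import math

def attention_reference(scores, values):
    m = max(scores)
    e = [math.exp(s - m) for s in scores]
    z = sum(e)
    return sum(ei / z * v for ei, v in zip(e, values))

def attention_online(scores, values):
    m, z, acc = float("-inf"), 0.0, 0.0
    for s, v in zip(scores, values):
        m_new = max(m, s)
        scale = math.exp(m - m_new) if m != float("-inf") else 0.0
        z = z * scale + math.exp(s - m_new)       # rescale running sum
        acc = acc * scale + math.exp(s - m_new) * v
        m = m_new
    return acc / z

scores = [0.5, -1.2, 3.3, 0.0]
values = [1.0, 2.0, 3.0, 4.0]
diff = abs(attention_reference(scores, values) - attention_online(scores, values))
print(diff < 1e-9)
```

Since the 0.0008 score only appears when the two modules mix implementations, comparing intermediate activations at the vision-to-LLM boundary with a check like this may be more revealing than end-task scores.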
|
bug
|
low
|
Major
|
2,795,253,523
|
rust
|
Tracking issue for release notes of #127154: Tracking Issue for anonymous pipe API
|
This issue tracks the release notes text for #127154.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Category (e.g. Language, Compiler, Libraries, Compatibility notes, ...)
- [Tracking Issue for anonymous pipe API](https://github.com/rust-lang/rust/issues/127154)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @NobodyXu -- origin issue/PR authors and assignees for starting to draft text
|
T-libs-api,relnotes,needs-triage,relnotes-tracking-issue,F-anonymous_pipe
|
low
|
Minor
|
2,795,259,369
|
pytorch
|
`_pdist_forward` causes segmentation fault for 3D tensor with last dimension of size 0
|
### 🐛 Describe the bug
When passing a 3D tensor where the last dimension has size 0 to `torch.ops.aten._pdist_forward`, a segmentation fault occurs.
```python
import torch
print(torch.__version__)
input = torch.rand((11, 15, 0))
torch.ops.aten._pdist_forward(input, p=2.0)
```
output:
```
2.7.0.dev20250116+cu124
fish: Job 2, 'python3 test.py' terminated by signal SIGSEGV (Address boundary error)
```
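For reference, the operation has a well-defined answer for an empty last dimension: every pairwise distance is 0.0, so the expected result is `n*(n-1)/2` zeros per batch rather than a crash. A pure-Python reference (not the aten kernel) demonstrating this:

```python
# Pure-Python reference for pdist over the last dimension. With a last
# dimension of size 0, every pairwise distance is 0.0, so the result is
# well defined: n*(n-1)/2 zeros per batch matrix.

def pdist_rows(rows, p=2.0):
    out = []
    n = len(rows)
    for i in range(n):
        for j in range(i + 1, n):
            out.append(sum(abs(a - b) ** p
                           for a, b in zip(rows[i], rows[j])) ** (1.0 / p))
    return out

batch = [[[] for _ in range(15)] for _ in range(11)]  # shape (11, 15, 0)
results = [pdist_rows(m) for m in batch]
print(len(results), len(results[0]))  # 11 batches, 15*14/2 = 105 distances each
print(all(d == 0.0 for r in results for d in r))
```

Either returning these zeros or raising a clean error would be acceptable behavior; a SIGSEGV is not.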
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.2
Libc version: glibc-2.40
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.47-1-MANJARO-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i5-13400F
CPU family: 6
Model: 191
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 25%
CPU max MHz: 4600.0000
CPU min MHz: 800.0000
BogoMIPS: 4993.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 416 KiB (10 instances)
L1i cache: 448 KiB (10 instances)
L2 cache: 9.5 MiB (7 instances)
L3 cache: 20 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250116+cu124
[pip3] torchaudio==2.6.0.dev20250116+cu124
[pip3] torchvision==0.22.0.dev20250116+cu124
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cu124 pypi_0 pypi
```
cc @malfet
|
module: crash,module: error checking,triaged,actionable,module: empty tensor,topic: fuzzer
|
low
|
Critical
|
2,795,264,041
|
storybook
|
[Bug]: Initial Next.JS template failed
|
### Describe the bug
Failed to initial Next.JS template.
```txt
╔ 🔎 Empty directory detected ═════════════════════════════════════════════════════════════════════╗
║ ║
║ Would you like to generate a new project from the following list? ║
║ ║
║ Note: ║
║ Storybook supports many more frameworks and bundlers than listed below. If you don't see ║
║ your ║
║ preferred setup, you can still generate a project then rerun this command to add Storybook. ║
║ ║
║ Press ^C at any time to quit. ║
║ ║
╚══════════════════════════════════════════════════════════════════════════════════════════════════╝
√ Choose a project template » Next.js (TS)
Creating a new "Next.js (TS)" project with pnpm...
SB_CLI_INIT_0003 (GenerateNewProjectOnInitError): There was an error while using pnpm to create a new nextjs-ts project.
Command failed with exit code 1: pnpm create next-app^14 . --typescript --use-pnpm --eslint --tailwind --no-app --import-alias="@/*" --src-dir
C:\Users\{masked}\AppData\Local\pnpm-cache\dlx\rkqwh2evvza7eolfvjaqkj7snq\194740212af-d100:
ERR_PNPM_FETCH_404 GET https://registry.npmjs.org/create-next-app14: Not Found - 404
This error happened while installing a direct dependency of C:\Users\{masked}\AppData\Local\pnpm-cache\dlx\rkqwh2evvza7eolfvjaqkj7snq\194740212af-d100
create-next-app14 is not in the npm registry, or you have no permission to fetch it.
An authorization header was used: Bearer npm_[hidden]
More info:
at scaffoldNewProject (C:\Users\{masked}\AppData\Local\npm-cache\_npx\f0725cbdc52d7264\node_modules\create-storybook\dist\bin\index.cjs:84:1241)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async doInitiate (C:\Users\{masked}\AppData\Local\npm-cache\_npx\f0725cbdc52d7264\node_modules\create-storybook\dist\bin\index.cjs:96:237)
at async withTelemetry (C:\Users\{masked}\AppData\Local\npm-cache\_npx\f0725cbdc52d7264\node_modules\@storybook\core\dist\core-server\index.cjs:35750:12)
at async initiate (C:\Users\{masked}\AppData\Local\npm-cache\_npx\f0725cbdc52d7264\node_modules\create-storybook\dist\bin\index.cjs:127:214) {
data: {
error: Error: Command failed with exit code 1: pnpm create next-app^14 . --typescript --use-pnpm --eslint --tailwind --no-app --import-alias="@/*" --src-dir
C:\Users\{masked}\AppData\Local\pnpm-cache\dlx\rkqwh2evvza7eolfvjaqkj7snq\194740212af-d100:
ERR_PNPM_FETCH_404 GET https://registry.npmjs.org/create-next-app14: Not Found - 404
This error happened while installing a direct dependency of C:\Users\{masked}\AppData\Local\pnpm-cache\dlx\rkqwh2evvza7eolfvjaqkj7snq\194740212af-d100
create-next-app14 is not in the npm registry, or you have no permission to fetch it.
An authorization header was used: Bearer npm_[hidden]
at makeError (C:\Users\{masked}\AppData\Local\npm-cache\_npx\f0725cbdc52d7264\node_modules\execa\lib\error.js:60:11)
at handlePromise (C:\Users\{masked}\AppData\Local\npm-cache\_npx\f0725cbdc52d7264\node_modules\execa\index.js:118:26)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async scaffoldNewProject (C:\Users\{masked}\AppData\Local\npm-cache\_npx\f0725cbdc52d7264\node_modules\create-storybook\dist\bin\index.cjs:84:1128)
at async doInitiate (C:\Users\{masked}\AppData\Local\npm-cache\_npx\f0725cbdc52d7264\node_modules\create-storybook\dist\bin\index.cjs:96:237)
at async withTelemetry (C:\Users\{masked}\AppData\Local\npm-cache\_npx\f0725cbdc52d7264\node_modules\@storybook\core\dist\core-server\index.cjs:35750:12)
at async initiate (C:\Users\{masked}\AppData\Local\npm-cache\_npx\f0725cbdc52d7264\node_modules\create-storybook\dist\bin\index.cjs:127:214) {
shortMessage: 'Command failed with exit code 1: pnpm create next-app^14 . --typescript --use-pnpm --eslint --tailwind --no-app --import-alias="@/*" --src-dir',
command: 'pnpm create next-app^14 . --typescript --use-pnpm --eslint --tailwind --no-app --import-alias="@/*" --src-dir',
escapedCommand: 'pnpm create "next-app^14" . --typescript --use-pnpm --eslint --tailwind --no-app "--import-alias=\\"@/*\\"" --src-dir',
exitCode: 1,
signal: undefined,
signalDescription: undefined,
stdout: 'C:\\Users\\hokin\\AppData\\Local\\pnpm-cache\\dlx\\rkqwh2evvza7eolfvjaqkj7snq\\194740212af-d100:\r\n' +
' ERR_PNPM_FETCH_404 GET https://registry.npmjs.org/create-next-app14: Not Found - 404\n' +
'\n' +
'This error happened while installing a direct dependency of C:\\Users\\hokin\\AppData\\Local\\pnpm-cache\\dlx\\rkqwh2evvza7eolfvjaqkj7snq\\194740212af-d100\n' +
'\n' +
'create-next-app14 is not in the npm registry, or you have no permission to fetch it.\n' +
'\n' +
'An authorization header was used: Bearer npm_[hidden]',
stderr: '',
failed: true,
timedOut: false,
isCanceled: false,
killed: false
},
packageManager: 'pnpm',
projectType: 'nextjs-ts'
},
fromStorybook: true,
category: 'CLI_INIT',
documentation: '',
code: 3
}
```
### Reproduction link
n/a
### Reproduction steps
1. `pnpm dlx storybook init`
2. select `Next.js (TS)` as template
3. Error
### System
```bash
Storybook Environment Info:
System:
OS: Windows 11 10.0.22631
CPU: (20) x64 13th Gen Intel(R) Core(TM) i5-13500HX
Binaries:
Node: 20.11.1 - C:\Program Files\nodejs\node.EXE
Yarn: 1.22.21 - ~\AppData\Roaming\npm\yarn.CMD
npm: 10.2.5 - C:\Program Files\nodejs\npm.CMD
pnpm: 9.14.2 - ~\AppData\Roaming\npm\pnpm.CMD <----- active
Browsers:
Edge: Chromium (127.0.2651.74)
```
### Additional context
I tried to run the failed command `pnpm create next-app^14 . --typescript --use-pnpm --eslint --tailwind --no-app --import-alias="@/*" --src-dir` independently, and it failed too. However, when I remove the `^14`, it WORKS.
Related code,
https://github.com/storybookjs/storybook/blob/6c2a260f7844ea7f2a2c5653e15e23ebf1a79dfc/code/lib/create-storybook/src/scaffold-new-project.ts#L49-L52
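One plausible explanation (an assumption, not confirmed from the Storybook code) for why the registry saw `create-next-app14`: on Windows, cmd.exe treats an unquoted caret as an escape character, so `next-app^14` can reach the child process with the `^` stripped. A tiny simulator of that unquoted-caret rule:

```python
# Hypothetical illustration: cmd.exe's unquoted-caret escaping (^x -> x)
# would turn `next-app^14` into `next-app14`, matching the 404 for
# `create-next-app14` in the log above.

def cmd_unescape(arg: str) -> str:
    """Simulate cmd.exe handling of an unquoted caret: ^x becomes x."""
    out, i = [], 0
    while i < len(arg):
        if arg[i] == "^" and i + 1 < len(arg):
            out.append(arg[i + 1])  # caret escapes the next character
            i += 2
        else:
            out.append(arg[i])
            i += 1
    return "".join(out)

print(cmd_unescape("next-app^14"))  # -> next-app14
```

If that is the cause, quoting the version spec (or avoiding `^` in the spec entirely) in `scaffold-new-project.ts` would sidestep it.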
|
bug,good first issue,help wanted,cli,nextjs,sev:S2
|
low
|
Critical
|
2,795,289,399
|
godot
|
[3.6] ShapeCast exhibits unexpected behavior.
|
### Tested versions
Reproducible in: 3.6
### System information
Linux MInt 22 Cinnamon - Godot 3.6.0
### Issue description
1. When given a negative value in `ShapeCast.target_position`, it registers hits _in any direction_, even 180 degrees in the opposite direction. The bug respects the variable's magnitude, but only respects its directionality for positive directions.
2. When placed inside a RigidBody's CollisionShape, it does not register any hits even with a positive `ShapeCast.target_position`. It is important to clarify that it is detecting _nothing at all_. This bug is inverted when the target position is negative.
### Steps to reproduce
**Behavior 1:**
1. Create a new ShapeCast and rotate it 180 degrees so that it points straight up.
2. Position it above a collider, no more than 1 meter away from touching it.
3. Run the scene with visible collision shapes and observe how it detects a collision despite facing away from the StaticBody.
4. Play around with the ShapeCast's position and rotation.
**Behavior 2:**
I'm unable to figure out a reliable sequence of steps to reproduce this one. It's affected by the node's position and the `target_position` variable, but it doesn't always happen. It might be a byproduct of Behavior 1, based on how my attempt to recreate it in the MRP turned out, but it doesn't seem like it would be related.
I'll post a proper list of steps in the comments if I find one. Try playing around with "test 3" and "test 4" in the MRP to see if you can trigger this behavior.
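The directionality check that Behavior 1 appears to violate can be stated as a dot-product test: a hit point should lie in the half-space the cast points into. A pure-Python sketch (hypothetical names, not the Godot API) that could also serve as a userland sanity filter on reported hits:

```python
# Hypothetical sanity check (pure Python, not Godot): a hit is "in front
# of" the cast only when the vector from the cast origin to the hit
# point has a positive dot product with target_position. This should
# hold for negative target_position components too.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def hit_is_in_cast_direction(origin, target_position, hit_point):
    to_hit = [h - o for h, o in zip(hit_point, origin)]
    return dot(to_hit, target_position) > 0.0

origin = (0.0, 0.0, 0.0)
target = (0.0, -1.0, 0.0)  # casting straight down
print(hit_is_in_cast_direction(origin, target, (0.0, -0.5, 0.0)))  # below: True
print(hit_is_in_cast_direction(origin, target, (0.0, 0.5, 0.0)))   # above: False
```

A hit 180 degrees behind the cast, as described in Behavior 1, fails this test, which is what makes the reported detections look like a sign-handling bug.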
### Minimal reproduction project (MRP)
[ShapeCast Issue.zip](https://github.com/user-attachments/files/18472955/ShapeCast.Issue.zip)
|
bug,discussion,topic:physics,topic:3d
|
low
|
Critical
|
2,795,330,533
|
godot
|
minimal project doesn't import without errors (main_scene uid)
|
### Tested versions
reproducible in 4.4-dev7, 4.4-beta1
### System information
Godot v4.4.beta1 - Windows 10 (build 19045) - Multi-window, 2 monitors - Vulkan (Mobile) - dedicated NVIDIA GeForce RTX 4070 (NVIDIA; 32.0.15.6636) - AMD Ryzen 9 3950X 16-Core Processor (32 threads)
### Issue description
Importing the MRP produces errors related to `main_scene` being referenced as a UID.
4.4-dev7:
```
Godot Engine v4.4.dev7.official (c) 2007-present Juan Linietsky, Ariel Manzur & Godot Contributors.
ERROR: core/io/resource_uid.cpp:155 - Condition "!unique_ids.has(p_id)" is true. Returning: String()
--- Debug adapter server started on port 6006 ---
--- GDScript language server started on port 6005 ---
ERROR: core/io/resource_uid.cpp:155 - Condition "!unique_ids.has(p_id)" is true. Returning: String()
ERROR: core/io/resource_uid.cpp:155 - Condition "!unique_ids.has(p_id)" is true. Returning: String()
ERROR: core/io/resource_uid.cpp:155 - Condition "!unique_ids.has(p_id)" is true. Returning: String()
```
4.4-beta1 (4owdun8sxw04 is root_scene.tscn, which is set as main_scene):
```
Godot Engine v4.4.beta1.official (c) 2007-present Juan Linietsky, Ariel Manzur & Godot Contributors.
ERROR: Unrecognized UID: "uid://4owdun8sxw04".
--- Debug adapter server started on port 6006 ---
--- GDScript language server started on port 6005 ---
ERROR: Unrecognized UID: "uid://4owdun8sxw04".
ERROR: Unrecognized UID: "uid://4owdun8sxw04".
ERROR: Unrecognized UID: "uid://4owdun8sxw04".
```
### Steps to reproduce
import MRP
### Minimal reproduction project (MRP)
[44b1expdebug.zip](https://github.com/user-attachments/files/18454066/44b1expdebug.zip)
|
bug,topic:editor,confirmed
|
low
|
Critical
|
2,795,345,470
|
kubernetes
|
Changelog is missing for kubectl DEB package
|
**What happened**:
Apt install/upgrade isn't able to fetch the changelog for the DEB package (see output below).
```text
Calling ['apt-get', '-qq', 'changelog', 'kubectl=1.29.13-1.1'] to retrieve changelog
apt-listchanges: Unable to retrieve changelog for package kubectl; 'apt-get changelog' failed with: E: Failed to fetch changelog:/kubectl.changelog Changelog unavailable for kubectl=1.29.13-1.1
```
**What you expected to happen**:
To see a summary of changes for that DEB package in my emails.
**How to reproduce it (as minimally and precisely as possible)**:
Steps:
1. Add the K8s Repository to your Apt sources
2. Add the following to the `/etc/apt/listchanges.conf` file...
```ini
[apt]
frontend=mail
which=both
email_address=root
email_format=text
confirm=false
headers=false
reverse=false
save_seen=/var/lib/apt/listchanges.db
```
3. Install or upgrade `kubectl`
**Anything else we need to know?**:
**Environment**:
- Kubernetes client and server versions (use `kubectl version`):
```text
Client Version: v1.29.13
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.3
```
- OS (e.g: `cat /etc/os-release`):
```text
PRETTY_NAME="Ubuntu 22.04.5 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.5 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
```
|
kind/feature,sig/release,needs-triage
|
low
|
Critical
|
2,795,355,233
|
pytorch
|
SIGFPE error when passing very large kernel_size to `avg_pool1d`
|
### 🐛 Describe the bug
Passing a very large value for the `kernel_size` parameter to the `torch.ops.aten.avg_pool1d` function results in a SIGFPE error.
```python
import torch
print(torch.__version__)
sym_0 = (0, 1)
sym_1 = torch.double
sym_2 = torch.strided
sym_3 = (9223372036854775807,)
sym_4 = (-1,)
sym_5 = (0,)
sym_6 = True
sym_7 = False
var_393 = torch.rand(sym_0, dtype=sym_1, layout=sym_2, device=None, pin_memory=None)
var_773 = torch.ops.aten.avg_pool1d(var_393, kernel_size=sym_3, stride=sym_4, padding=sym_5, ceil_mode=sym_6, count_include_pad=sym_7)
```
output:
```
2.7.0.dev20250116+cu124
fish: Job 2, 'python3 test.py' terminated by signal SIGFPE (Floating point exception)
```
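A plausible mechanism for the SIGFPE (speculative; this assumes the standard pooling output-length formula `(L + 2*pad - kernel + (ceil_mode ? stride - 1 : 0)) / stride + 1` evaluated in int64): with `L=1`, `kernel=2**63 - 1`, `stride=-1`, and `ceil_mode=True`, the numerator lands exactly on `INT64_MIN`, and `INT64_MIN / -1` is undefined behavior in C++ that traps as SIGFPE on x86. The arithmetic can be checked with Python's arbitrary-precision ints, which don't trap:

```python
INT64_MIN = -2**63
INT64_MAX = 2**63 - 1

def pool_numerator(length, pad, kernel, stride, ceil_mode):
    # Numerator of the usual pooling output-length formula (assumption:
    # this mirrors what the C++ kernel computes before dividing by stride).
    n = length + 2 * pad - kernel
    if ceil_mode:
        n += stride - 1
    return n

n = pool_numerator(length=1, pad=0, kernel=INT64_MAX, stride=-1, ceil_mode=True)
assert n == INT64_MIN       # numerator is exactly the smallest int64
# In C++, INT64_MIN / -1 overflows signed 64-bit division (SIGFPE on x86);
# Python just produces a value one past INT64_MAX:
assert n // -1 == INT64_MAX + 1
```

Note that `stride=-1` is itself invalid and should arguably be rejected before any of this arithmetic runs.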
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.2
Libc version: glibc-2.40
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.47-1-MANJARO-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i5-13400F
CPU family: 6
Model: 191
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 25%
CPU max MHz: 4600.0000
CPU min MHz: 800.0000
BogoMIPS: 4993.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 416 KiB (10 instances)
L1i cache: 448 KiB (10 instances)
L2 cache: 9.5 MiB (7 instances)
L3 cache: 20 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250116+cu124
[pip3] torchaudio==2.6.0.dev20250116+cu124
[pip3] torchvision==0.22.0.dev20250116+cu124
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cu124 pypi_0 pypi
```
|
module: crash,triaged,topic: fuzzer
|
low
|
Critical
|
2,795,375,310
|
pytorch
|
SIGSEGV error when passing a 0-sized tensor to `_local_scalar_dense`
|
### 🐛 Describe the bug
Passing a tensor with size `(0,)` to the `torch.ops.aten._local_scalar_dense` function results in a segmentation fault (SIGSEGV).
```python
import torch
print(torch.__version__)
input = torch.randn(size=(0,))
torch.ops.aten._local_scalar_dense(input)
```
output:
```
2.7.0.dev20250116+cu124
fish: Job 2, 'python3 test.py' terminated by signal SIGSEGV (Address boundary error)
```
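The crash is consistent with the raw op reading element 0 without checking the element count first (an assumption; the public `Tensor.item()` path presumably raises a clean error instead). A toy model of the missing guard, with a plain Python list standing in for tensor storage (hypothetical sketch, not PyTorch's actual implementation):

```python
def local_scalar_dense(storage):
    """Toy model: return the single element of a one-element 'tensor'."""
    # A well-formed implementation rejects anything but exactly one element
    # before dereferencing storage[0].
    if len(storage) != 1:
        raise RuntimeError(
            f"a Tensor with {len(storage)} elements cannot be converted to Scalar"
        )
    return storage[0]

assert local_scalar_dense([3.14]) == 3.14

try:
    local_scalar_dense([])  # the 0-sized case: clean error instead of SIGSEGV
except RuntimeError as err:
    print(err)  # a Tensor with 0 elements cannot be converted to Scalar
```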
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.2
Libc version: glibc-2.40
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.47-1-MANJARO-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i5-13400F
CPU family: 6
Model: 191
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 25%
CPU max MHz: 4600.0000
CPU min MHz: 800.0000
BogoMIPS: 4993.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 416 KiB (10 instances)
L1i cache: 448 KiB (10 instances)
L2 cache: 9.5 MiB (7 instances)
L3 cache: 20 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250116+cu124
[pip3] torchaudio==2.6.0.dev20250116+cu124
[pip3] torchvision==0.22.0.dev20250116+cu124
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cu124 pypi_0 pypi
```
|
module: crash,triaged,module: empty tensor,topic: fuzzer
|
low
|
Critical
|
2,795,393,261
|
neovim
|
Calling `redraw` clears `inputlist()` choices
|
### Problem
When `redraw` is called asynchronously(?), it clears the choices presented by `inputlist()`.
Before `redraw`:
<img width="602" alt="Image" src="https://github.com/user-attachments/assets/06bf084a-95b1-41bc-8726-1f9bb548b5b3" />
After:
<img width="602" alt="Image" src="https://github.com/user-attachments/assets/e5f64e18-f527-4ea4-b840-1a1d29b00062" />
Some related issues:
- https://github.com/vim/vim/issues/1843
- https://github.com/sphamba/smear-cursor.nvim/issues/82 (originally reported)
### Steps to reproduce
```
nvim --clean --noplugin
:lua vim.defer_fn(vim.cmd.redraw, 5000)
:call inputlist(['Select color:', '1. red', '2. green', '3. blue'])
```
Wait for the deferred `redraw` to kick in. It seems to be the same for Vim using `timer_start`, except worse (everything just goes blank).
### Expected behavior
I expect the choices to still be present after redraw.
### Nvim version (nvim -v)
v0.11.0-dev-1557+ga78eddd541-Homebrew
### Vim (not Nvim) behaves the same?
9.1.1000
### Operating system/version
macOS 15.1.1
### Terminal name/version
Ghostty 72d08552
### $TERM environment variable
xterm-ghostty
### Installation
Homebrew (brew install --head neovim)
|
bug,ui,cmdline-mode
|
low
|
Minor
|
2,795,395,459
|
stable-diffusion-webui
|
[Bug]: Installing K diffusion
|
### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [x] The issue has been reported before but has not been fixed yet
### What happened?
Tried to install k-diffusion but it gives me an error. It used to show many errors; now I am stuck with this one (ERROR: Command errored out with exit status 128: git clone -q https://github.com/hlky/k-diffusion-sd 'C:\1111\stable-diffusion-cpuonly\src\k-diffusion')
### Steps to reproduce the problem
Obtaining k_diffusion from git+https://github.com/hlky/k-diffusion-sd#egg=k_diffusion
Cloning https://github.com/hlky/k-diffusion-sd to c:\1111\stable-diffusion-cpuonly\src\k-diffusion
ERROR: Command errored out with exit status 128: git clone -q https://github.com/hlky/k-diffusion-sd 'C:\1111\stable-diffusion-cpuonly\src\k-diffusion' Check the logs for full command output.
1 file(s) copied.
1 file(s) copied.
### What should have happened?
It should have installed k-diffusion.
### What browsers do you use to access the UI ?
Google Chrome, Microsoft Edge
### Sysinfo
IDK?
### Console logs
```Shell
(base) PS C:\WINDOWS\system32> cd C:\1111\stable-diffusion-cpuonly
(base) PS C:\1111\stable-diffusion-cpuonly> git clone git+https://github.com/hlky/k-diffusion-sd#egg=k_diffusion
Obtaining k_diffusion from git+https://github.com/hlky/k-diffusion-sd#egg=k_diffusion
Cloning https://github.com/hlky/k-diffusion-sd to c:\1111\stable-diffusion-cpuonly\src\k-diffusion
ERROR: Command errored out with exit status 128: git clone -q https://github.com/hlky/k-diffusion-sd 'C:\1111\stable-diffusion-cpuonly\src\k-diffusion' Check the logs for full command output.
1 file(s) copied.
1 file(s) copied.
```
### Additional information
_No response_
|
not-an-issue
|
low
|
Critical
|
2,795,395,772
|
godot
|
A transparent window prevents me from doing anything after hot-reloading a GDExtension
|
### Tested versions
Version: 4.2.2 stable
I'm using the same version of godot-cpp to bind the extension.
### System information
Godot v4.2.2.stable - Linux Mint 21.3 (Virginia) - X11 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4060 Laptop GPU () - 13th Gen Intel(R) Core(TM) i7-13650HX (20 Threads)
### Issue description
A transparent window appears on top of the editor.
I could click nothing except minimize, maximize, and close, but close also didn't work. However, I could kill the editor without `-9`.
This happens whether I have just opened the editor or have been working for a long time.
The shadows around the window make its presence known.

### Steps to reproduce
This may happen when I hot-reload a GDExtension library, and it happens about once every five times (just an average).
I'm not sure this is the cause of the problem.
At the same time, I use CLion.
### Minimal reproduction project (MRP)
N/A
|
bug,topic:editor,needs testing,topic:gdextension
|
low
|
Minor
|
2,795,409,315
|
pytorch
|
DISABLED test_sparse_add_cuda_complex64 (__main__.TestSparseCSRCUDA)
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_sparse_add_cuda_complex64&suite=TestSparseCSRCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35768157832).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 8 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_sparse_add_cuda_complex64`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/test_sparse_csr.py", line 2338, in test_sparse_add
run_test(m, n, index_dtype)
File "/var/lib/jenkins/pytorch/test/test_sparse_csr.py", line 2330, in run_test
self.assertEqual(actual, expected)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Tensor-likes are not close!
Mismatched elements: 3 / 15 (20.0%)
Greatest absolute difference: 1028.479736328125 at index (4, 0) (up to 1e-05 allowed)
Greatest relative difference: inf at index (4, 1) (up to 1.3e-06 allowed)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/test_sparse_csr.py TestSparseCSRCUDA.test_sparse_add_cuda_complex64
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_sparse_csr.py`
cc @alexsamardzic @nikitaved @pearu @cpuhrsch @amjames @bhosmer @jcaip @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr
|
module: sparse,module: rocm,triaged,module: flaky-tests,skipped
|
low
|
Critical
|
2,795,438,332
|
pytorch
|
Segment fault on CPU and IndexError on CUDA for `_adaptive_avg_pool2d_backward`
|
### 🐛 Describe the bug
When calling the `torch.ops.aten._adaptive_avg_pool2d_backward` function with mismatched tensor dimensions, it causes a segmentation fault (SIGSEGV) on the CPU, but an `IndexError` on CUDA.
For example, on CUDA:
```python
import torch
print(torch.__version__)
sym_0 = (1, 3, 8, 3)
sym_1 = torch.strided
sym_2 = 'cuda'
sym_3 = (1, 48)
v0 = torch.randn(size=sym_0, dtype=None, layout=sym_1, device=sym_2)
v1 = torch.rand(size=sym_3, device=sym_2)
torch.ops.aten._adaptive_avg_pool2d_backward(grad_output=v0, self=v1)
```
output:
```
2.7.0.dev20250116+cu124
Traceback (most recent call last):
File "/home/yvesw/reborn2-expr/250117-bugs/test.py", line 12, in <module>
torch.ops.aten._adaptive_avg_pool2d_backward(grad_output=v0, self=v1)
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_ops.py", line 1158, in __call__
return self._op(*args, **(kwargs or {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
IndexError: Dimension out of range (expected to be in range of [-2, 1], but got -3)
```
and on CPU:
```python
import torch
print(torch.__version__)
sym_0 = (1, 3, 8, 3)
sym_1 = torch.strided
sym_2 = 'cpu'
sym_3 = (1, 48)
v0 = torch.randn(size=sym_0, dtype=None, layout=sym_1, device=sym_2)
v1 = torch.rand(size=sym_3, device=sym_2)
torch.ops.aten._adaptive_avg_pool2d_backward(grad_output=v0, self=v1)
```
output:
```
2.7.0.dev20250116+cu124
fish: Job 3, 'python3 test.py' terminated by signal SIGSEGV (Address boundary error)
```
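Both failure modes point at missing rank validation: `grad_output` here is 4-D while `self` is 2-D, and the kernel indexes dimension -3 (the channel dimension of a 3-D/4-D pooling input) without verifying it exists. A plain-Python sketch of the kind of up-front check that would turn both crashes into a uniform error (hypothetical; the function name and messages below are illustrative, not PyTorch's):

```python
def check_adaptive_pool2d_backward_args(grad_output_shape, input_shape):
    """Hypothetical shape validation for an adaptive_avg_pool2d backward pass."""
    # adaptive_avg_pool2d operates on 3-D (C, H, W) or 4-D (N, C, H, W)
    # inputs, so dimension -3 must exist on both tensors.
    if len(input_shape) not in (3, 4):
        raise ValueError(f"expected a 3-D or 4-D input, got {len(input_shape)}-D")
    if len(grad_output_shape) != len(input_shape):
        raise ValueError(
            f"grad_output rank {len(grad_output_shape)} does not match "
            f"input rank {len(input_shape)}"
        )

# The shapes from the report fail the check instead of crashing:
try:
    check_adaptive_pool2d_backward_args((1, 3, 8, 3), (1, 48))
except ValueError as err:
    print(err)  # expected a 3-D or 4-D input, got 2-D
```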
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.2
Libc version: glibc-2.40
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.47-1-MANJARO-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i5-13400F
CPU family: 6
Model: 191
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 25%
CPU max MHz: 4600.0000
CPU min MHz: 800.0000
BogoMIPS: 4993.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 416 KiB (10 instances)
L1i cache: 448 KiB (10 instances)
L2 cache: 9.5 MiB (7 instances)
L3 cache: 20 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250116+cu124
[pip3] torchaudio==2.6.0.dev20250116+cu124
[pip3] torchvision==0.22.0.dev20250116+cu124
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cu124 pypi_0 pypi
```
cc @malfet
|
module: crash,module: error checking,triaged,topic: fuzzer
|
low
|
Critical
|
2,795,458,053
|
storybook
|
[Bug]: React autodoc with JSDoc
|
### Describe the bug
As React 19 is now stable and no longer supports PropTypes and defaultProps, and we don't want to rewrite all our code in TypeScript, a good alternative is JSDoc, so we migrated all our PropTypes and defaultProps to JSDoc.
But it seems the autodocs in Storybook 8 with `@storybook/react-vite` don't support or can't read JSDoc: most of the type descriptions in Storybook are now marked as "unknown", which also broke all the related input controls.


### Reproduction link
https://codesandbox.io/p/devbox/priceless-snowflake-srk6t3?file=%2Fsrc%2Fstories%2FButton.jsx
### Reproduction steps
```
/**
* @typedef {Object} Props
* @property {"primary"|"secondary"} variant - Variant color
* @property {boolean} [active=true] - Enable/Disable active
*
* @param {Props} props
* @returns {JSX.Element}
*/
const MyComponent = ({ variant, active = true }) => (...)
export default MyComponent;
```
### System
```bash
System:
OS: Windows 11 10.0.22631
CPU: (24) x64 12th Gen Intel(R) Core(TM) i7-12800HX
Binaries:
Node: 20.17.0 - ~\Documents\node_v20\node.EXE
npm: 10.8.2 - ~\Documents\node_v20\npm.CMD <----- active
Browsers:
Edge: Chromium (127.0.2651.74)
npmPackages:
@storybook/addon-essentials: ^8.4.7 => 8.4.7
@storybook/addon-interactions: ^8.4.7 => 8.4.7
@storybook/addon-links: ^8.4.7 => 8.4.7
@storybook/blocks: ^8.4.7 => 8.4.7
@storybook/preview-api: ^8.4.7 => 8.4.7
@storybook/react: ^8.4.7 => 8.4.7
@storybook/react-vite: ^8.4.7 => 8.4.7
@storybook/test: ^8.4.7 => 8.4.7
storybook: ^8.4.7 => 8.4.7
```
### Additional context
_No response_
|
feature request,react,has workaround,docgen
|
low
|
Critical
|
2,795,466,258
|
flutter
|
Add ability to switch color palettes for Coloured Fonts
|
### Use case
It seems Flutter already has support for [colored fonts](https://css-tricks.com/colrv1-and-css-font-palette-web-typography/), but it lacks a feature to switch color palettes.
A font file can have more than one color palette that the user can choose from. In HTML, we can choose a palette using `@font-palette-values`, for example:
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Demo</title>
<style>
body {
font-size: 7rem;
display: flex;
justify-content: space-around;
}
@font-face {
font-family: MyTestFont;
src: url('MyTestFont-Regular.ttf');
}
@font-palette-values --Dark {
font-family: MyTestFont;
base-palette: 1;
}
@font-palette-values --Light {
font-family: MyTestFont;
base-palette: 0;
}
.my-text-light {
font-family: "MyTestFont";
font-palette: --Light;
}
.my-text-dark {
font-family: "MyTestFont";
font-palette: --Dark;
}
</style>
</head>
<body>
<h1 class="my-text-light">F</h1>
<h1 class="my-text-dark">F</h1>
</body>
</html>
```
The results show two different colors but the same fonts.

However, in Flutter, the first palette is chosen by default, and there is no way to choose another palette or even customize one.
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MainApp());
}
class MainApp extends StatelessWidget {
const MainApp({super.key});
@override
Widget build(BuildContext context) {
return const MaterialApp(
home: Scaffold(
body: Center(
child: Text(
'F',
style: TextStyle(
fontFamily: 'MyTestFont',
fontSize: 144,
),
),
),
),
);
}
}
```

The font file: https://github.com/iqfareez/demo_color_fonts_for_flutter_feature_report/blob/main/MyTestFont-Regular.ttf
A possible workaround (not tested) is to have two different font files with different palettes. But that's far from ideal.
### Proposal
Add the ability to switch the font's color palettes.
|
framework,a: typography,c: proposal,P3,team-engine,triaged-engine
|
low
|
Minor
|
2,795,482,157
|
flutter
|
Discrete `Slider` applies thumb padding when using custom Slider shapes
|
### Steps to reproduce
Originally reported in https://github.com/rydmike/flex_color_picker/issues/90 by @rydmike
Last year, I updated the `Slider` widget to properly align the thumb shape with the tick marks, which had been a long-standing visual bug. However, with the 3.27 release, Mike reported that a discrete `Slider` with custom shapes cannot make the thumb reach the extreme ends.
Such thumb padding is applied if the `Slider` is discrete or the track shape implementation overrides the `isRounded` property.
```dart
final double padding = isDiscrete || _sliderTheme.trackShape!.isRounded ? trackRect.height : 0.0;
```
Thumb padding is essential when the `Slider`'s track shape is rounded (indicated by `isRounded`).
However, we can drop the `isDiscrete` check when applying thumb padding, since the padding doesn't need to apply to custom shapes by default.
Developers should have the flexibility to opt out of this padding, which they can do via the `isRounded` flag in the custom shape:
```dart
@override
bool get isRounded => true;
```
### Expected results
<img width="526" alt="Image" src="https://github.com/user-attachments/assets/08c9a9c9-1f91-41f2-91de-8698d7f6c39a" />
### Actual results
<img width="514" alt="Image" src="https://github.com/user-attachments/assets/29f5c55b-e44d-4259-b86e-e607134ba881" />
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'dart:async';
import 'package:flutter/material.dart';
import 'dart:math' as math;
import 'dart:ui' as ui;
void main() => runApp(const MyApp());
class MyApp extends StatefulWidget {
const MyApp({super.key});
@override
State<MyApp> createState() => _MyAppState();
}
class _MyAppState extends State<MyApp> {
double _value = 125;
ui.Image? _backgroundImage;
@override
void initState() {
super.initState();
_loadImage();
}
Future<void> _loadImage() async {
const imageProvider = NetworkImage('https://i.imgur.com/hSwyziG.png');
final ImageStream stream = imageProvider.resolve(ImageConfiguration.empty);
final Completer<ui.Image> completer = Completer<ui.Image>();
stream.addListener(ImageStreamListener((ImageInfo info, bool _) {
completer.complete(info.image);
}));
final ui.Image image = await completer.future;
setState(() {
_backgroundImage = image;
});
}
@override
Widget build(BuildContext context) {
if (_backgroundImage == null) {
return const Center(child: CircularProgressIndicator());
}
return MaterialApp(
debugShowCheckedModeBanner: false,
home: Scaffold(
body: Center(
child: SizedBox(
width: 400,
child: SliderTheme(
data: SliderThemeData(
trackHeight: 48,
trackShape: OpacitySliderTrackShape(
color: Colors.red,
image: _backgroundImage!,
),
thumbShape: const OpacitySliderThumbShape(color: Colors.yellow),
),
child: Slider(
value: _value,
max: 255,
divisions: 255,
onChanged: (double value) {
setState(() {
_value = value;
});
},
),
),
),
),
),
);
}
}
/// A custom slider track for the opacity slider.
///
/// Has rounded edges and a background image that repeats to show the common
/// image pattern used as background on images that has transparency. It
/// results in a nice effect where we can better judge visually how transparent
/// the current opacity value is directly on the slider.
class OpacitySliderTrackShape extends SliderTrackShape {
/// Constructor for the opacity slider track.
OpacitySliderTrackShape({
required this.color,
this.thumbRadius = 14,
required this.image,
}) : bgImagePaint = Paint()
..shader = ImageShader(
image,
TileMode.repeated,
TileMode.repeated,
Matrix4.identity().storage,
);
/// Currently selected color.
final Color color;
/// The radius of the adjustment thumb on the opacity slider track.
///
/// Defaults to 14.
final double thumbRadius;
/// The image used a background image in the slider track.
final ui.Image image;
/// Paint used to draw the background image on the slider track.
final Paint bgImagePaint;
/// Returns a rect that represents the track bounds that fits within the
/// [Slider].
///
/// The width is the width of the [Slider] or [RangeSlider], but padded by
/// the max of the overlay and thumb radius. The height is defined by the
/// [SliderThemeData.trackHeight].
///
/// The [Rect] is centered both horizontally and vertically within the slider
/// bounds.
@override
Rect getPreferredRect({
required RenderBox parentBox,
Offset offset = Offset.zero,
required SliderThemeData sliderTheme,
bool isEnabled = false,
bool isDiscrete = false,
}) {
final double thumbWidth =
sliderTheme.thumbShape!.getPreferredSize(isEnabled, isDiscrete).width;
final double overlayWidth =
sliderTheme.overlayShape!.getPreferredSize(isEnabled, isDiscrete).width;
final double trackHeight = sliderTheme.trackHeight!;
assert(overlayWidth >= 0, 'overlayWidth must be >= 0');
assert(trackHeight >= 0, 'trackHeight must be >= 0');
final double trackLeft =
offset.dx + math.max(overlayWidth / 2, thumbWidth / 2);
final double trackTop =
offset.dy + (parentBox.size.height - trackHeight) / 2;
final double trackRight =
trackLeft + parentBox.size.width - math.max(thumbWidth, overlayWidth);
final double trackBottom = trackTop + trackHeight;
// If the parentBox size less than slider's size the trackRight will
// be less than trackLeft, so switch them.
return Rect.fromLTRB(math.min(trackLeft, trackRight), trackTop,
math.max(trackLeft, trackRight), trackBottom);
}
@override
void paint(
PaintingContext context,
Offset offset, {
required RenderBox parentBox,
required SliderThemeData sliderTheme,
required Animation<double> enableAnimation,
required TextDirection textDirection,
required Offset thumbCenter,
bool isDiscrete = false,
bool isEnabled = false,
double additionalActiveTrackHeight = 2,
Offset? secondaryOffset,
}) {
assert(sliderTheme.disabledActiveTrackColor != null,
'disabledActiveTrackColor cannot be null.');
assert(sliderTheme.disabledInactiveTrackColor != null,
'disabledInactiveTrackColor cannot be null.');
assert(sliderTheme.activeTrackColor != null,
'activeTrackColor cannot be null.');
assert(sliderTheme.inactiveTrackColor != null,
'inactiveTrackColor cannot be null.');
assert(sliderTheme.thumbShape != null, 'thumbShape cannot be null.');
// If we have no track height, no point in doing anything, no-op exit.
if ((sliderTheme.trackHeight ?? 0) <= 0) {
return;
}
final Rect trackRect = getPreferredRect(
parentBox: parentBox,
offset: offset,
sliderTheme: sliderTheme,
isEnabled: isEnabled,
isDiscrete: isDiscrete,
);
final Radius trackRadius = Radius.circular(trackRect.height / 2);
final Radius activeTrackRadius = Radius.circular(trackRect.height / 2 + 1);
final Paint activePaint = Paint()..color = Colors.transparent;
final Paint inactivePaint = Paint()
..shader = ui.Gradient.linear(
Offset.zero,
Offset(trackRect.width, 0),
<Color>[color.withOpacity(0), color.withOpacity(1)],
<double>[0.05, 0.95]);
Paint leftTrackPaint;
Paint rightTrackPaint;
switch (textDirection) {
case TextDirection.ltr:
leftTrackPaint = activePaint;
rightTrackPaint = inactivePaint;
case TextDirection.rtl:
leftTrackPaint = inactivePaint;
rightTrackPaint = activePaint;
}
final RRect shapeRect = ui.RRect.fromLTRBAndCorners(
trackRect.left - thumbRadius,
(textDirection == TextDirection.ltr)
? trackRect.top - (additionalActiveTrackHeight / 2)
: trackRect.top,
trackRect.right + thumbRadius,
(textDirection == TextDirection.ltr)
? trackRect.bottom + (additionalActiveTrackHeight / 2)
: trackRect.bottom,
topLeft: (textDirection == TextDirection.ltr)
? activeTrackRadius
: trackRadius,
bottomLeft: (textDirection == TextDirection.ltr)
? activeTrackRadius
: trackRadius,
topRight: (textDirection == TextDirection.ltr)
? activeTrackRadius
: trackRadius,
bottomRight: (textDirection == TextDirection.ltr)
? activeTrackRadius
: trackRadius,
);
context.canvas.drawRRect(shapeRect, leftTrackPaint);
context.canvas.drawRRect(shapeRect, bgImagePaint);
context.canvas.drawRRect(shapeRect, rightTrackPaint);
}
}
class OpacitySliderThumbShape extends RoundSliderThumbShape {
/// Create a slider thumb that draws a circle filled with [color]
/// and shows the slider `value` * 100 in the thumb.
const OpacitySliderThumbShape({
required this.color,
super.enabledThumbRadius = 16.0,
super.disabledThumbRadius,
super.elevation,
super.pressedElevation = 4.0,
});
/// Color used to fill the inside of the thumb.
final Color color;
double get _disabledThumbRadius => disabledThumbRadius ?? enabledThumbRadius;
@override
void paint(
PaintingContext context,
Offset center, {
required Animation<double> activationAnimation,
required Animation<double> enableAnimation,
required bool isDiscrete,
required TextPainter labelPainter,
required RenderBox parentBox,
required SliderThemeData sliderTheme,
required TextDirection textDirection,
required double value,
required double textScaleFactor,
required Size sizeWithOverflow,
}) {
assert(sliderTheme.disabledThumbColor != null,
'disabledThumbColor cannot be null');
assert(sliderTheme.thumbColor != null, 'thumbColor cannot be null');
final Canvas canvas = context.canvas;
final Tween<double> radiusTween = Tween<double>(
begin: _disabledThumbRadius,
end: enabledThumbRadius,
);
final double radius = radiusTween.evaluate(enableAnimation);
final Path path = Path()
..addArc(
Rect.fromCenter(
center: center,
width: 2 * radius,
height: 2 * radius,
),
0,
math.pi * 2,
);
canvas.drawShadow(path, Colors.black, 1.5, true);
canvas.drawCircle(center, radius, Paint()..color = Colors.white);
canvas.drawCircle(center, radius - 1.8, Paint()..color = color);
final TextSpan span = TextSpan(
style: TextStyle(
fontSize: enabledThumbRadius * 0.78,
fontWeight: FontWeight.w600,
color: sliderTheme.thumbColor,
),
text: (value * 100).toStringAsFixed(0),
);
final TextPainter textPainter = TextPainter(
text: span,
textAlign: TextAlign.center,
textDirection: TextDirection.ltr,
);
textPainter.layout();
final Offset textCenter = Offset(
center.dx - (textPainter.width / 2),
center.dy - (textPainter.height / 2),
);
textPainter.paint(canvas, textCenter);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
3.27.x
```
</details>
|
c: regression,framework,f: material design,has reproducible steps,P2,team-design,triaged-design,found in release: 3.27
|
low
|
Critical
|
2,795,495,140
|
node
|
RSS growing into several hundreds of megabytes and never going down in size in any significant way when no activity is being served.
|
### Version
18.20.4
### Platform
```text
FreeBSD 14.1
```
### Subsystem
_No response_
### What steps will reproduce the bug?
I would like to ask whether the following behaviour of the RSS portion of the node process's memory is within the norm, or abnormal behaviour caused by memory leaks in some libraries. It's hard to judge from valgrind, as it produces tens of thousands of lines of log
covering several libraries after the server.js process is terminated with Ctrl-C.
```
mkdir certs
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -subj '/CN=localhost' -keyout certs/cert.key -out certs/cert.pem
node server.js
```
Then run several instances of the client at the same time:
```
node client.js
```
RSS grows into the region of several hundred megabytes when multiple instances of client.js connect to the server.
No substantial decrease in the RSS reported by server.js has been observed
over a period of up to one hour. It decreased only by a few dozen megabytes
shortly after the connections stopped, but remained elevated in the
many-hundreds-of-megabytes region.
It's unclear whether that memory would ever become available to the OS if other processes need it. There are many resource-constrained environments where node holding on to large areas of memory, without actually needing it for prolonged periods, could starve other processes of needed memory. This is why some people may find it concerning.
After terminating (Ctrl+C) server.js started with
`valgrind --leak-check=full --show-leak-kinds=all node server.js`,
tens of thousands of lines are generated, covering many libraries.
The very last lines are the following:
```
==79512==
==79512== LEAK SUMMARY:
==79512== definitely lost: 0 bytes in 0 blocks
==79512== indirectly lost: 0 bytes in 0 blocks
==79512== possibly lost: 0 bytes in 0 blocks
==79512== still reachable: 5,348,193 bytes in 23,822 blocks
==79512== suppressed: 12,556 bytes in 37 blocks
==79512==
==79512== Use --track-origins=yes to see where uninitialised values come from
==79512== For lists of detected and suppressed errors, rerun with: -s
==79512== ERROR SUMMARY: 24 errors from 12 contexts (suppressed: 0 from 0)
```
At this point I'm not sure whether attaching the full log would be of any help, as its size is over 3 megabytes.
Files below:
server.js
--------------------------------------------------------------------------------
```js
const process = require('node:process');
const https = require('https');
const fs = require('fs');
//const heapdump = require('heapdump');
const port = ; //TODO set port number
process.on('uncaughtException', (err, origin) => {
console.log(err);
fs.writeSync(
process.stderr.fd,
`Caught exception: ${err}\n` +
`Exception origin: ${origin}\n`,
);
});
const privateKey = fs.readFileSync(__dirname + '/certs/cert.key');
const certificate = fs.readFileSync(__dirname + '/certs/cert.pem');
const options = { key: privateKey,
cert: certificate, enableTrace: false };
const server = https.createServer(options, (req, res) => {
res.write("Hello there!");
// res.writeHead(200);
res.end();
});
server.on('error', (e) => {
console.error(e);
});
server.on('connection', (socket) => {
//console.log('New connection established.');
socket.on('error', (err) => {
console.error('Socket error:', err);
socket.destroy(); // Close in case of error
});
socket.on('close', () => {
//console.log('Socket closed.');
});
});
server.listen(port, () => {
console.log(`Server listen on https://:${port}`);
});
setInterval(function(){
const formatMemoryUsage = (data) => `${Math.round(data / 1024 / 1024 * 100) / 100} MB`;
const memoryData = process.memoryUsage();
const memoryUsage = {
rss: `${formatMemoryUsage(memoryData.rss)} -> Resident Set Size - total memory allocated for the process execution`,
heapTotal: `${formatMemoryUsage(memoryData.heapTotal)} -> total size of the allocated heap`,
heapUsed: `${formatMemoryUsage(memoryData.heapUsed)} -> actual memory used during the execution`,
external: `${formatMemoryUsage(memoryData.external)} -> V8 external memory`,
};
console.log(memoryUsage);
}, 5*1000);
setInterval(function(){
server.getConnections((err, count) => console.log('Active connections: ' + count));
if(global.gc){
global.gc();
console.log("garbage free");
}
}, 5*1000);
/**
setTimeout(function () {
const filename = Date.now() + '.heapsnapshot';
heapdump.writeSnapshot(function(err, filename) {
// console.log('dump written to', filename);
});
}, 10*1000);
setTimeout(function () {
const filename = Date.now() + '.heapsnapshot';
heapdump.writeSnapshot(function(err, filename) {
// console.log('dump2 written to', filename);
});
}, 120*1000);
**/
```
--------------------------------------------------------------------------------
client.js
--------------------------------------------------------------------------------
```js
const https = require('https');
const fs = require('fs');
const port = ; //TODO set port number here
const host = 'host address here'; //TODO set host name here
const requestCount = 3000;
const options = {
hostname: host,
port: port,
path: '/',
method: 'GET',
rejectUnauthorized: false
};
function makeRequest(id) {
const req = https.request(options, (res) => {
let data = '';
res.on('data', (chunk) => {
data += chunk;
});
res.on('end', () => {
console.log(`Request ${id} completed. Response: ${data}`);
});
});
req.on('error', (e) => {
console.error(`Request ${id} encountered an error: ${e.message}`);
console.error(e);
});
req.end();
}
for (let i = 1; i <= requestCount; i++) {
makeRequest(i);
}
```
--------------------------------------------------------------------------------
### How often does it reproduce? Is there a required condition?
Always.
### What is the expected behavior? Why is that the expected behavior?
Expected behavior: Passing control over unused memory back to the OS.
Rationale: there are many resource-constrained environments where node holding on to large areas of memory, without actually needing it for prolonged periods, could starve other processes of needed memory. This is why some people may find it concerning.
### What do you see instead?
RSS growing into hundreds of megabytes and never decreasing significantly, even after a very long time.
### Additional information
_No response_
|
question,memory
|
low
|
Critical
|
2,795,510,053
|
pytorch
|
Illegal memory access and segmentation fault due to large `storage_offset` in `as_strided`
|
### 🐛 Describe the bug
Passing a very large value for the `storage_offset` parameter in `torch.as_strided` causes different errors on CPU and CUDA:
* On CPU, it leads to a segmentation fault (SIGSEGV).
* On CUDA, it results in an illegal memory access error when attempting to print or access the result after performing tensor operations.
For example, on CUDA:
```python
import torch
print(torch.__version__)
sym_0 = (0, 0, 1, 5, 5)
sym_1 = 6.0
sym_2 = torch.long
sym_3 = 'cuda'
sym_4 = (1,)
sym_5 = (1,)
sym_6 = 9223372036854775807
sym_7 = (-1,)
sym_8 = False
var_349 = torch.full(size=sym_0, fill_value=sym_1, dtype=sym_2, layout=None, device=sym_3, pin_memory=None)
var_568 = torch.as_strided(var_349, size=sym_4, stride=sym_5, storage_offset=sym_6)
res = torch.amax(var_568, dim=sym_7, keepdim=sym_8)
print(res)
```
output:
```
2.7.0.dev20250116+cu124
Traceback (most recent call last):
File "/home/yvesw/reborn2-expr/250117-bugs/test.py", line 18, in <module>
print(res)
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_tensor.py", line 590, in __repr__
return torch._tensor_str._str(self, tensor_contents=tensor_contents)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_tensor_str.py", line 704, in _str
return _str_intern(self, tensor_contents=tensor_contents)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_tensor_str.py", line 621, in _str_intern
tensor_str = _tensor_str(self, indent)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_tensor_str.py", line 353, in _tensor_str
formatter = _Formatter(get_summarized_data(self) if summarize else self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_tensor_str.py", line 141, in __init__
value_str = f"{value}"
^^^^^^^^^^
File "/home/yvesw/miniconda3/envs/torch-preview/lib/python3.11/site-packages/torch/_tensor.py", line 1119, in __format__
return self.item().__format__(format_spec)
^^^^^^^^^^^
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
and on CPU:
```python
import torch
print(torch.__version__)
sym_0 = (0, 0, 1, 5, 5)
sym_1 = 6.0
sym_2 = torch.long
sym_3 = 'cpu'
sym_4 = (1,)
sym_5 = (1,)
sym_6 = 9223372036854775807
sym_7 = (-1,)
sym_8 = False
var_349 = torch.full(size=sym_0, fill_value=sym_1, dtype=sym_2, layout=None, device=sym_3, pin_memory=None)
var_568 = torch.as_strided(var_349, size=sym_4, stride=sym_5, storage_offset=sym_6)
res = torch.amax(var_568, dim=sym_7, keepdim=sym_8)
print(res)
```
we get:
```
2.7.0.dev20250116+cu124
fish: Job 3, 'python3 test.py' terminated by signal SIGSEGV (Address boundary error)
```
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.2
Libc version: glibc-2.40
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.47-1-MANJARO-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i5-13400F
CPU family: 6
Model: 191
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 25%
CPU max MHz: 4600.0000
CPU min MHz: 800.0000
BogoMIPS: 4993.00
标记: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 416 KiB (10 instances)
L1i cache: 448 KiB (10 instances)
L2 cache: 9.5 MiB (7 instances)
L3 cache: 20 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250116+cu124
[pip3] torchaudio==2.6.0.dev20250116+cu124
[pip3] torchvision==0.22.0.dev20250116+cu124
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cu124 pypi_0 pypi
```
|
module: crash,triaged,topic: fuzzer
|
low
|
Critical
|
2,795,514,347
|
transformers
|
How can we use CPU offloading when using AutoModelForCausalLM and THUDM/cogvlm2-llama3-chat-19B
|
The configuration below works great, but it is still not sufficient: it uses around 16 GB of VRAM.
I want to lower the requirement further if possible.
How can I achieve that?
model path is : `THUDM/cogvlm2-llama3-chat-19B`
```python
model = AutoModelForCausalLM.from_pretrained(
MODEL_PATH,
torch_dtype=TORCH_TYPE,
trust_remote_code=True,
quantization_config=BitsAndBytesConfig(load_in_4bit=True),
low_cpu_mem_usage=True
).eval()
```
### Who can help?
text models: @ArthurZucker
vision models: @amyeroberts, @qubvel
pipelines: @Rocketknight1
quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
|
bug
|
low
|
Minor
|
2,795,535,727
|
deno
|
Running simple TypeScript crashes Deno
|
Version: Deno 2.1.5
I upgraded to Deno 2.1.5 and found the Deno process immediately crashes when running my TypeScript. I've boiled down the code to a reproducible test case [here](https://github.com/garethj2/deno-2-crash).
Here's the resultant error...
```
./run.sh: line 2: 61461 Trace/BPT trap: 5 deno run ./core/src/main.ts
```
# System
- Macbook Pro M2
- 16 GB of RAM
- macOS Sequoia
Considering how minor the code is, I find it extremely worrying that it crashes the process outright.
|
bug,upstream,swc
|
low
|
Critical
|
2,795,540,568
|
PowerToys
|
Taskbar Separator / Divider
|
### Description of the new feature / enhancement
I would like to see a taskbar separator in PowerToys.
In terms of settings, it would be good to have an option to control the separator width, and perhaps the separator icon itself.
### Scenario when this would be used?
To organize apps pinned to the taskbar
### Supporting information
Here is a screenshot of this feature implemented in third-party software:

|
Needs-Triage
|
low
|
Minor
|
2,795,549,326
|
vscode
|
provide a way to collapse the extension info
|
Does this issue occur when all extensions are disabled?: Yes/No
- VS Code Version:
- OS Version:
Steps to Reproduce:
1. Hello, I have a problem with Visual Studio Code in zoom mode. Because I am visually impaired, when I am on an extension's page and want to read its description, I do not get the complete view: the logo, the extension name, and all the buttons get in the way, and I cannot scroll down properly to see everything. Here is an example video:
https://github.com/user-attachments/assets/e104cd78-6bc4-4a35-9df8-8023d1091bcd
|
accessibility,extensions
|
low
|
Critical
|
2,795,571,497
|
flutter
|
[Proposal] Add spacing property to ListView and ListView.builder
|
### Use case
It would be more convenient to have this feature built in; it also makes sense to be consistent with `Row` and `Column`.
### Proposal
This should be provided by Flutter directly.
|
c: new feature,framework,f: scrolling,c: proposal,team-framework
|
low
|
Minor
|
2,795,572,611
|
opencv
|
Issue in decomposeProjectionMatrix documentation: confusion about the translation vector
|
### Describe the doc issue
Hello OpenCV team,
Thanks for all your hard work on OpenCV! I hope this feedback helps improve the clarity of the documentation.
I noticed a point of confusion in the documentation for the function [decomposeProjectionMatrix](https://github.com/opencv/opencv/blob/1d701d1690b8cc9aa6b86744bffd5d9841ac6fd3/modules/calib3d/include/opencv2/calib3d.hpp#L796).
In the description parameter, the documentation states that the transVect output is a 4x1 "translation vector" : "[@param transVect Output 4x1 translation vector T.](https://github.com/opencv/opencv/blob/1d701d1690b8cc9aa6b86744bffd5d9841ac6fd3/modules/calib3d/include/opencv2/calib3d.hpp#L779C1-L779C50)".
However, this description can be misleading because transVect is actually the camera position (in homogeneous coordinates) as it is mentioned below in the doc : "[The function computes a decomposition of a projection matrix into a calibration and a rotation matrix and the position of a camera](https://github.com/opencv/opencv/blob/1d701d1690b8cc9aa6b86744bffd5d9841ac6fd3/modules/calib3d/include/opencv2/calib3d.hpp#L786C1-L787C37)".
### Fix suggestion
In the parameter list, I suggest updating the description of transVect to clarify that it represents the camera position. It would be even better to rename the parameter from "transVect" to "homogCameraCenter", but that would require changing the source code.
Here's a proposed change to the description:
Current description:
"@param transVect Output 4x1 translation vector T."
Proposed description:
"@param transVect Output 4x1 vector representing the camera position in homogeneous coordinates.
To obtain the translation vector, use t = -rotMatrix * transVect."
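To illustrate the proposed wording with a numeric sketch (NumPy only; the variable names are mine): for a camera with rotation `R` and translation `t`, the camera centre is `C = -Rᵀ t`, so the translation is recovered as `t = -R C`, exactly as the suggested doc line states.

```python
import numpy as np

# Synthetic pose: rotation about the z-axis plus a translation.
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, -2.0, 0.5])

# decomposeProjectionMatrix returns the camera centre (the dehomogenized
# 4x1 transVect), which for this pose is:
C = -R.T @ t

# The translation vector is then recovered as the proposed doc line says:
t_recovered = -R @ C
assert np.allclose(t_recovered, t)
```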
|
category: documentation
|
low
|
Minor
|
2,795,611,127
|
pytorch
|
AssertionError: increase TRITON_MAX_BLOCK['X'] to 4096 Again!
|
### 🐛 Describe the bug
I have run into the compile issue with flex attention modules again, where I get the notorious `AssertionError: increase TRITON_MAX_BLOCK['X'] to 4096`.
I have read issue https://github.com/pytorch/pytorch/issues/135028 and tried the workaround suggested there:
`If you set torch._inductor.config.realize_opcount_threshold = 100 (or some other large number), it'll workaround your issue.` Sadly, it didn't work.
Neither did setting the environment variable TRITON_MAX_BLOCK_X via os.environ, nor exporting it in the startup script.
### Error logs
[rank0]: File "/usr/local/lib/python3.11/dist-packages/torch/_dynamo/output_graph.py", line 1465, in _call_user_compiler
[rank0]: raise BackendCompilerFailed(self.compiler_fn, e) from e
[rank0]: torch._dynamo.exc.BackendCompilerFailed: backend='compile_fn' raised:
[rank0]: AssertionError: increase TRITON_MAX_BLOCK['X'] to 4096
### Versions
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 4 2024, 08:55:07) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-4.18.0-372.9.1.el8.x86_64-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro RTX 6000
Nvidia driver version: 565.57.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7352 24-Core Processor
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 0
BogoMIPS: 4591.50
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca
Virtualization: AMD-V
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 24 MiB (48 instances)
L3 cache: 256 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.2.1
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @zou3519 @ydwu4 @bdhirsh @Chillee @drisspg @yanboliang @BoyuanFeng
|
triaged,oncall: pt2,module: inductor,module: higher order operators,module: pt2-dispatcher,module: flex attention
|
low
|
Critical
|
2,795,618,198
|
rust
|
Give recursion limit errors a span
|
For example in https://github.com/rust-lang/rust/blob/master/tests/ui/infinite/infinite-struct.rs we get an error like
```
error: reached the recursion limit finding the struct tail for `Take`
|
= help: consider increasing the recursion limit by adding a `#![recursion_limit = "256"]`
```
We could pass in the obligation cause to `struct_tail_raw` and use its span for the main message and report a note for the obligation
_Originally posted by @oli-obk in https://github.com/rust-lang/rust/pull/135464#discussion_r1914325523_
<!-- TRIAGEBOT_START -->
<!-- TRIAGEBOT_ASSIGN_START -->
<!-- TRIAGEBOT_ASSIGN_DATA_START$${"user":"tanvincible"}$$TRIAGEBOT_ASSIGN_DATA_END -->
<!-- TRIAGEBOT_ASSIGN_END -->
<!-- TRIAGEBOT_END -->
|
E-easy,A-diagnostics
|
low
|
Critical
|
2,795,620,157
|
react
|
[Compiler Bug]: eslint plugin erroneously flags third-party functions starting with "use" as hooks
|
### What kind of issue is this?
- [ ] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [x] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhASwLYAcIwC4AEwBUYCAMgIYCe0eAogGaMJyEC+BjMEGBA5DASU2-ANwAdAHaYc+AgCoClMMowBhABaV8q7rwEABShjjbdAehNmdeMABYLcXAnHTpCAB5zCAEwSMlFAANoSMUFJsaBBSBACy1ACCWFgAFACURNIEJGRUtFAMzKx4qRkEALwAfFmxOWpatmAAdKQIACqaCBgIqVIhwemSdewANAQA2gC66e51QniwsanZ9QA8vmgAbgQWVauzUuzSIOxAA
### Repro steps
In the simplified playground example above, I'm using `amCharts.useTheme`, which is not a hook at all, just a random function from an external library that happens to start with **use**.
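For reference, the convention that triggers the false positive is purely name-based: anything starting with `use` followed by an uppercase letter is treated as a hook. A minimal sketch of that heuristic (illustrative only, not the plugin's actual implementation):

```python
import re

def is_hook_like(name: str) -> bool:
    """Treat a name as a hook if it starts with "use" followed by an
    uppercase letter or digit (e.g. useState, useTheme), matching the
    spirit of the React naming convention the lint rule relies on."""
    return re.match(r"^use[A-Z0-9]", name) is not None
```

Under this heuristic, `useTheme` matches even though it belongs to a third-party charting library, which is exactly the false positive reported here.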
### How often does this bug happen?
Every time
### What version of React are you using?
react@19.0.0
### What version of React Compiler are you using?
babel-plugin-react-compiler@19.0.0-beta-e552027-20250112
|
Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler
|
medium
|
Critical
|
2,795,628,194
|
godot
|
Using Tween on `global_position` and `scale` at the same time results in a completely different scale center
|
### Tested versions
v4.3.stable.steam [77dcf97d8]
### System information
Godot v4.3.stable (77dcf97d8) - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 3070 (NVIDIA; 32.0.15.6094) - AMD Ryzen 5 5600X 6-Core Processor (12 Threads)
### Issue description
After modifying the pivot offset property of the Control node, scaling should, in theory, expand around the new pivot offset. However, if a Tween modifies both `global_position` and `scale` simultaneously, the node is instead scaled around the `(0,0)` point, ignoring `pivot_offset`.
As a result, tweening `global_position` and `scale` together behaves completely differently from tweening `position` and `scale` together.
Should this be considered a bug?
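The observed difference comes down to where the scale center ends up. A plain-Python sketch of the 2D math (not Godot's actual Control transform code) shows how scaling about a pivot differs from scaling about the origin:

```python
def scale_about_pivot(point, pivot, s):
    """Scale a 2D point by factor s around a pivot: p' = pivot + s * (p - pivot)."""
    px, py = point
    cx, cy = pivot
    return (cx + s * (px - cx), cy + s * (py - cy))

def scale_about_origin(point, s):
    """Scale a 2D point by factor s around (0, 0)."""
    px, py = point
    return (s * px, s * py)
```

For a point (100, 100) with pivot (50, 50) and s = 2, scaling about the pivot gives (150, 150), while scaling about the origin gives (200, 200), which is the kind of divergence seen when tweening `global_position` instead of `position`.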
### Steps to reproduce
First, change the `pivot_offset` of the `Control` node to something other than `(0,0)`
```
func go_to_global_position_and_scale() -> void:
    var tween_move_and_scale = create_tween().set_parallel()
    tween_move_and_scale.tween_property(self, "global_position", global_position + Vector2(1, 1), 0.5)
    tween_move_and_scale.tween_property(self, "scale", Vector2(2, 2), 0.5)

func go_to_position_and_scale() -> void:
    var tween_move_and_scale = create_tween().set_parallel()
    tween_move_and_scale.tween_property(self, "position", position + Vector2(1, 1), 0.5)
    tween_move_and_scale.tween_property(self, "scale", Vector2(2, 2), 0.5)
```
Copy two Control nodes, run the above code on them, and compare the result:

### Minimal reproduction project (MRP)
[scale_bug_mini_reproduction.zip](https://github.com/user-attachments/files/18455874/scale_bug_mini_reproduction.zip)
|
bug,topic:gui,topic:animation
|
low
|
Critical
|
2,795,643,398
|
flutter
|
Built-in Flutter development web server serves Javascript files with wrong 'text/plain' MIME type if a query string is used
|
### Steps to reproduce
1. Create a Flutter web application using Flutter version 3.24+
2. In your `index.html` file, modify your `<body>` section to have the following Flutter bootstrap code:
```html
<script>
{{flutter_js}}
{{flutter_build_config}}
_flutter.loader.load({
onEntrypointLoaded: async function (engineInitializer) {
const appRunner = await engineInitializer.initializeEngine();
await appRunner.runApp();
},
serviceWorkerSettings: {
serviceWorkerVersion: "12345678"
}
});
</script>
```
3. Debug your application on Chrome. I personally use VS Code to start a debug session.
4. Open Chrome Dev Tools (F12), click the Network pane, and refresh the page so that you can see network requests and responses.
5. You will see a request for `flutter_service_worker.js?v=12345678`. It is served as `text/plain`, not `application/javascript` as required. In a debug session, the Flutter service worker [is empty](https://github.com/flutter/flutter/blob/3297454732841b1a5a25d9f35f1fd5d7a4479e12/packages/flutter_tools/lib/src/isolated/devfs_web.dart#L1006). Nevertheless, the wrong MIME type is dangerous: browsers require a JavaScript MIME type for `<script>` tags and similar uses (note the Flutter bootstrap code may itself use a `<script>` tag if `timeOutMillis` has expired, which may happen on slow machines). It also means that a Firebase service worker (often named `firebase-messaging-sw.js`, but heavily customized by many developers and usually given an explicit version number in the query string), which is widely used, will always be served as `text/plain` instead of `application/javascript`, causing errors.
6. Open a new tab and hit F12, switch to the Network pane. Assuming the web site is hosted on http://localhost:5555 (change port as appropriate), enter http://localhost:5555/flutter_service_worker.js : the correct MIME type is used in the response. Try http://localhost:5555/flutter_service_worker.js?v=12345678 again, to be sure : `text/plain` is used.
### Expected results
The Flutter web development server should serve `.js` resources with an `application/javascript` MIME type even when a query string is present on the URL. This scenario is common because of the widespread `v=VERSION` cache-busting pattern, which applies not just to flutter_service_worker.js but to all service workers, including Firebase's.
### Actual results
The Flutter web development server does not serve `.js` resources with an `application/javascript` MIME type when a query string is used. `text/plain` is used instead.
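The general fix is to strip the query string before consulting the extension-to-MIME map. A Python sketch of the idea (the actual fix would live in the Dart dev server, e.g. devfs_web.dart; the function name here is illustrative):

```python
import mimetypes
from urllib.parse import urlsplit

def guess_mime(request_target: str) -> str:
    """Guess a MIME type from a request target, ignoring any query string.
    Without the urlsplit step, '.js?v=12345678' is an unknown extension
    and a naive server falls back to text/plain."""
    path = urlsplit(request_target).path  # drops '?v=...' if present
    mime, _ = mimetypes.guess_type(path)
    return mime or "text/plain"
```

With this, `flutter_service_worker.js?v=12345678` resolves to the same JavaScript MIME type as `flutter_service_worker.js`.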
### Code sample
<details open><summary>Code sample</summary>
```dart
// See above for the repro steps
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[!] Flutter (Channel [user-branch], 3.24.5, on Microsoft Windows [version 10.0.26100.2894], locale fr-FR)
! Flutter version 3.24.5 on channel [user-branch] at C:\Users\andy\Documents\flutter
Currently on an unknown channel. Run `flutter channel` to switch to an official channel.
If that doesn't fix the issue, reinstall Flutter by following instructions at https://flutter.dev/setup.
! Upstream repository unknown source is not a standard remote.
Set environment variable "FLUTTER_GIT_URL" to unknown source to dismiss this error.
• Framework revision dec2ee5c1f (9 weeks ago), 2024-11-13 11:13:06 -0800
• Engine revision a18df97ca5
• Dart version 3.5.4
• DevTools version 2.37.3
• If those were intentional, you can disregard the above warnings; however it is recommended to use "git" directly to perform update checks and upgrades.
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at C:\Users\andy\AppData\Local\Android\sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.6+0-b2043.56-10027231)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.9.6)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.9.34728.123
• Windows 10 SDK version 10.0.19041.0
[√] Android Studio (version 2022.3)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.6+0-b2043.56-10027231)
[√] VS Code (version 1.96.4)
• VS Code at C:\Users\andy\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.102.0
[√] Connected device (3 available)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [version 10.0.26100.2894]
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.265
• Edge (web) • edge • web-javascript • Microsoft Edge 131.0.2903.146
[√] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
|
tool,platform-web,has reproducible steps,P2,team-web,triaged-web,found in release: 3.27,found in release: 3.28
|
low
|
Critical
|
2,795,650,611
|
pytorch
|
Negative values in stride causing error in `avg_pool2d` (on both CPU and CUDA)
|
### 🐛 Describe the bug
Passing a tuple with negative values (such as sym_6) as the stride parameter to `torch.nn.functional.avg_pool2d` causes a crash on both CPU and CUDA. The function currently checks for zero values but does not handle negative ones, leading to a SIGFPE when a negative stride is passed.
For example:
```python
import torch
print(torch.__version__)
sym_0 = (8, 2, 1, 1)
sym_1 = torch.float32
sym_2 = torch.device("cpu")
sym_3 = 0
sym_4 = True
sym_5 = (9223372036854775807, 5868783964474102731)
sym_6 = (-1, 3010182406857593769)
sym_7 = (0,)
sym_8 = True
sym_9 = True
sym_10 = 33554427
var_546 = torch.randn(size=sym_0, dtype=sym_1, device=sym_2)
var_124 = torch.ops.aten.alias(var_546)
var_360 = torch.argmax(var_124, dim=sym_3, keepdim=sym_4)
torch.nn.functional.avg_pool2d(var_360, kernel_size=sym_5, stride=sym_6, padding=sym_7, ceil_mode=sym_8, count_include_pad=sym_9, divisor_override=sym_10)
```
output:
```
2.7.0.dev20250116+cu124
fish: Job 2, 'python3 test.py' terminated by signal SIGFPE (Floating point exception)
```
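The fix this report implies is to validate that strides are strictly positive rather than merely non-zero. A hypothetical checker, written in Python for illustration (PyTorch's actual validation lives in C++):

```python
def check_pool_params(kernel_size, stride, padding):
    """Reject invalid pooling parameters up front instead of letting a
    negative stride reach the division that raises SIGFPE.
    Hypothetical sketch, not PyTorch's real parameter checking."""
    for k in kernel_size:
        if k <= 0:
            raise ValueError(f"kernel_size must be positive, got {k}")
    for s in stride:
        if s <= 0:  # catches -1 as well as 0
            raise ValueError(f"stride must be positive, got {s}")
    for p in padding:
        if p < 0:
            raise ValueError(f"padding must be non-negative, got {p}")
```

With this check in place, the repro's `stride=(-1, 3010182406857593769)` would produce a clear `ValueError` instead of a floating-point-exception crash.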
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0.dev20250116+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Manjaro Linux (x86_64)
GCC version: (GCC) 14.2.1 20240805
Clang version: 18.1.8
CMake version: version 3.30.2
Libc version: glibc-2.40
Python version: 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:17:24) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.6.47-1-MANJARO-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i5-13400F
CPU family: 6
Model: 191
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 1
Stepping: 2
CPU(s) scaling MHz: 25%
CPU max MHz: 4600.0000
CPU min MHz: 800.0000
BogoMIPS: 4993.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 416 KiB (10 instances)
L1i cache: 448 KiB (10 instances)
L2 cache: 9.5 MiB (7 instances)
L3 cache: 20 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250116+cu124
[pip3] torchaudio==2.6.0.dev20250116+cu124
[pip3] torchvision==0.22.0.dev20250116+cu124
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0.dev20250116+cu124 pypi_0 pypi
[conda] torchaudio 2.6.0.dev20250116+cu124 pypi_0 pypi
[conda] torchvision 0.22.0.dev20250116+cu124 pypi_0 pypi
```
cc @malfet
|
module: crash,module: error checking,triaged,actionable,topic: fuzzer
|
low
|
Critical
|
2,795,669,647
|
godot
|
Code editor scrolls to show caret on text entry in Project Settings window
|
### Tested versions
Reproducible in 4.3.stable
### System information
Godot v4.3.stable - macOS 15.2.0
### Issue description
Normally, if you enter a text character in the script code editor with the editor scrolled so the caret is not visible, the view is scrolled so the entry will be visible and the character is then entered. This is expected.
However, it appears that the first part of this behavior, detecting text entry and scrolling the code editor if need be, is triggered even when the text entry happens in another window. For example, if you enter text into a text field in the Project Settings, the code editor may scroll unexpectedly in the background. The text isn't entered there, so it's mostly harmless, but it's unexpected, and it looks as if the code might have been changed since the code editor reacted to the key press.
https://github.com/user-attachments/assets/da47632d-20f1-49da-a7ab-4c28a577aa1b
The issue only appears if you've had the code editor focused and scrolled away from the caret when you opened the project settings window. If you've focused another control first, it doesn't happen. So perhaps there's something wrong with the code editor checking if it's the focused control for the scroll to caret operation, which doesn't account for focus possibly being in another window?
### Steps to reproduce
- Have a script file long enough to scroll.
- Move the cursor to a row at the bottom and scroll up from it so it's not visible.
- Open the Project settings.
- Enter a single character in a settings text field and observe the code editor scroll to make the caret visible in the background.
### Minimal reproduction project (MRP)
Only editor required (and any project with a scrolling script file).
|
bug,topic:editor
|
low
|
Minor
|
2,795,690,301
|
godot
|
Custom anchor Container Sizing switching to Full Rect
|
### Tested versions
4.4 beta 1
### System information
Windows 11
### Issue description
First, if you set any node's layout mode to Anchors and then set the Preset to "Full Rect", the preset corresponds to:
Anchor Points: 0,0,1,1 and
Anchor Offsets: 0,0,0,0.
You must choose "Custom" if you want to change these settings.
However, if you close the scene and then reopen it, Godot only checks whether the Anchor Points are 0,0,1,1 and ignores the Anchor Offsets.
If you change the offsets, Godot should keep the Preset set to "Custom"; instead, it keeps reassigning the Preset to "Full Rect", which is incorrect.
Godot needs to also check that the Anchor Offsets are 0,0,0,0 before automatically selecting "Full Rect" on reopening. As it stands, the user has to reselect "Custom" just to reach the Anchor Offset settings, and if they are unaware of the bug, they will not understand why the margins are offset when the inspector states "Full Rect".
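The requested detection logic is small. A hypothetical Python sketch (the editor's real preset detection is C++; names here are illustrative):

```python
def detect_preset(anchors, offsets):
    """Report "Full Rect" only when the anchor points are (0,0,1,1) AND
    all four offsets are zero; otherwise fall back to "Custom".
    Hypothetical sketch of the check this report asks for."""
    if anchors == (0.0, 0.0, 1.0, 1.0) and all(o == 0 for o in offsets):
        return "Full Rect"
    return "Custom"
```

Under this rule, the repro's anchors (0,0,1,1) with offsets (25,25,-25,-25) would correctly stay "Custom" after reopening the scene.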
### Steps to reproduce
Create a scene with a control and parent.
Add a child control.
Set child Layout mode to Anchors.
Set the Anchors Preset to Custom.
Set Anchor Points to 0,0,1,1
Set Anchors Offset to 25,25,-25,-25
Save the scene and close it.
Upon reopening, the Anchors Preset has switched from "Custom" to "Full Rect", now hiding the Anchor Offset settings.
### Minimal reproduction project (MRP)
[project.zip](https://github.com/user-attachments/files/18456092/project.zip)
|
discussion,topic:gui
|
low
|
Critical
|
2,795,716,674
|
tensorflow
|
Could not get sample weight from customized loss
|
### Issue type
Feature Request
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
tf 2.13.1
### Custom code
Yes
### OS platform and distribution
CentOS 7.9
### Mobile device
_No response_
### Python version
3.8.3
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
We use a custom loss for model training and would like to use the sample weights to calculate the loss.
However, the sample weights are not passed to the loss function as expected.
### Standalone code to reproduce the issue
```shell
# Code to reproduce the issue.
import tensorflow as tf
from tensorflow.keras.layers import Dense
import numpy as np

def weighted_zero_mean_r2_loss(y_true, y_pred, sample_weight=None):
    y_true = tf.cast(y_true, tf.float32)
    y_pred = tf.cast(y_pred, tf.float32)
    sample_weight = tf.cast(sample_weight, tf.float32)
    weighted_squared_error = sample_weight * (y_true - y_pred) ** 2
    weighted_true_squared = sample_weight * y_true ** 2
    numerator = tf.reduce_sum(weighted_squared_error)
    denominator = tf.reduce_sum(weighted_true_squared)
    r2_score = 1 - numerator / denominator
    return r2_score

def build_model():
    metrics = 'mae'
    loss = weighted_zero_mean_r2_loss  # 'mse'
    output_num = 1
    inputs = tf.keras.layers.Input(shape=(40, 36))
    lstm_out = tf.keras.layers.LSTM(32, return_sequences=False)(inputs)
    output = Dense(output_num, activation='linear')(lstm_out)
    model = tf.keras.Model(inputs, output)
    model.compile(loss=loss, optimizer=tf.keras.optimizers.Adam(), metrics=metrics)
    model.summary()
    return model

def data_gen(class_weights):
    while True:
        x_batch = np.random.rand(128, 40, 36)
        y_batch = np.random.randint(0, 3, (128, 1))
        # Apply class weights to the labels
        sample_weights = np.vectorize(lambda x: class_weights[x])(y_batch)
        yield x_batch, y_batch, sample_weights

model = build_model()
cw = {0: 0.3, 1: 2.5, 2: 3.2}
model.fit(data_gen(cw), epochs=2, steps_per_epoch=10)
```
### Relevant log output
```shell
Error messages:
File "/root/.virtualenvs/infinity_stock/lib/python3.8/site-packages/keras/src/engine/training.py", line 1338, in train_function *
return step_function(self, iterator)
File "/data/release/kagglejanestreet/scripts/python/test.py", line 8, in weighted_zero_mean_r2_loss *
sample_weight = tf.cast(sample_weight, tf.float32)
ValueError: None values not supported.
```
|
type:feature,comp:keras,TF 2.13
|
low
|
Critical
|
2,795,721,851
|
TypeScript
|
tsserver seems to watch entire home directory
|
### 🔎 Search Terms
When developing on a typescript project I keep needing to restart the tsserver because it always gets unresponsive after the first iteration checking my opened file.
Today I finally had a look at the log file.
I found the following lines (redacted my user name):
```
DirectoryWatcher:: Added:: WatchInfo: /home/REDACTED/ 1 undefined WatchType: node_modules for closed script infos and package.jsons affecting module specifier cache
...
Info 72523[16:05:52.615] sysLog:: /home/REDACTED/some/file/somewhere/in/my/home:: Defaulting to watchFile
...
Elapsed:: 263231.954163ms DirectoryWatcher:: Added:: WatchInfo: /home/REDACTED/ 1 undefined WatchType: node_modules for closed script infos and package.jsons affecting module specifier cache
```
The line between the two ellipses is repeated hundreds of thousands of times, apparently watching every file in my home directory.
What could be the issue?
### 🕗 Version & Regression Information
- This behavior was observed in version 5.3.3, 5.4.5 & 5.7.3
### ⏯ Playground Link
_No response_
### 💻 Code
```ts
// Your code here
```
### 🙁 Actual behavior
-
### 🙂 Expected behavior
-
### Additional information about the issue
_No response_
|
Needs More Info
|
low
|
Minor
|
2,795,726,193
|
neovim
|
`vim.lsp.config/enable` improvements
|
Tracking issue to collect requirements to help guide how to design improvements for `vim.lsp.config/enable`.
- Ability to prevent attaching an LSP to a buffer.
- If there are no workspace folders
- Based on buffer name/path
|
lsp
|
low
|
Minor
|
2,795,731,426
|
pytorch
|
list comprehension in SkipFiles are always skipped with no way to override
|
Proposal: list comprehensions should always be inlined and never markable as skip.
Internal xref: https://fb.workplace.com/groups/1075192433118967/posts/1585141438790728/?comment_id=1585152455456293&reply_comment_id=1586067422031463
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames
|
triaged,oncall: pt2,module: dynamo
|
low
|
Minor
|
2,795,741,847
|
PowerToys
|
Add Fancy zone integration with workspaces.
|
### Description of the new feature / enhancement
This would add the ability to assign a distinct FancyZones layout to each monitor, ensuring that any application opened on a given monitor can then be placed in a specific zone within that layout.
### Scenario when this would be used?
Currently, Workspaces can launch applications and place them in my predefined spots, but that only covers the initial setup for my day. Like many other users, I open additional applications as I work, and FancyZones might revert to a previous layout or change for unclear reasons—possibly a bug I’ll also report. By integrating FancyZones directly into Workspaces, I wouldn’t have to deal with these unexpected changes or constantly reconfigure my window layouts.
### Supporting information
_No response_
|
Needs-Triage
|
low
|
Critical
|
2,795,744,885
|
godot
|
Compute Texture Demo is failing on Metal rendering backend
|
### Tested versions
4.4-beta1
### System information
macOS 13.7.2, Apple M1 Pro 16 GB, Godot 4.4-beta1
### Issue description
Compute Texture Demo is failing on Metal rendering backend, but works as usual on Vulkan
### Steps to reproduce
1. Run Godot 4.4-beta1
2. Install the Compute Texture Demo from AssetLib
3. Edit it (press OK to convert the project from Godot 4.2 to Godot 4.4)
4. Run it or just look in the console in the Godot editor
5. When running no mouse interaction is happening
6. There is endless error spam:
```
ERROR: Set: 1 Binding: 0 Type: Image Writable: N Length: 1
ERROR: Set: 2 Binding: 0 Type: Image Writable: Y Length: 1
ERROR: Uniforms supplied for set (2):
ERROR: Set: 0 Binding: 0 Type: Image Writable: N Length: 1
ERROR: are not the same format as required by the pipeline shader. Pipeline shader requires the following bindings:
ERROR: Set: 0 Binding: 0 Type: Image Writable: N Length: 1
ERROR: Set: 1 Binding: 0 Type: Image Writable: N Length: 1
ERROR: Set: 2 Binding: 0 Type: Image Writable: Y Length: 1
ERROR: Uniforms supplied for set (2):
ERROR: Set: 0 Binding: 0 Type: Image Writable: N Length: 1
ERROR: are not the same format as required by the pipeline shader. Pipeline shader requires the following bindings:
ERROR: Set: 0 Binding: 0 Type: Image Writable: N Length: 1
ERROR: Set: 1 Binding: 0 Type: Image Writable: N Length: 1
ERROR: Set: 2 Binding: 0 Type: Image Writable: Y Length: 1
ERROR: Uniforms supplied for set (2):
ERROR: Set: 0 Binding: 0 Type: Image Writable: N Length: 1
ERROR: are not the same format as required by the pipeline shader. Pipeline shader requires the following bindings:
ERROR: Set: 0 Binding: 0 Type: Image Writable: N Length: 1
ERROR: Set: 1 Binding: 0 Type: Image Writable: N Length: 1
ERROR: Set: 2 Binding: 0 Type: Image Writable: Y Length: 1
ERROR: Uniforms supplied for set (2):
ERROR: Set: 0 Binding: 0 Type: Image Writable: N Length: 1
ERROR: are not the same format as required by the pipeline shader. Pipeline shader requires the following bindings:
ERROR: Set: 0 Binding: 0 Type: Image Writable: N Length: 1
ERROR: Set: 1 Binding: 0 Type: Image Writable: N Length: 1
ERROR: Set: 2 Binding: 0 Type: Image Writable: Y Length: 1
ERROR: Uniforms supplied for set (2):
ERROR: Set: 0 Binding: 0 Type: Image Writable: N Length: 1
ERROR: are not the same format as required by the pipeline shader. Pipeline shader requires the following bindings:
ERROR: Set: 0 Binding: 0 Type: Image Writable: N Length: 1
ERROR: Set: 1 Binding: 0 Type: Image Writable: N Length: 1
ERROR: Set: 2 Binding: 0 Type: Image Writable: Y Length: 1
```
7. Switch the renderer from Metal to Vulkan in Project Settings and restart Godot
8. All is working then, no error spam
### Minimal reproduction project (MRP)
Compute Texture Demo from AssetLib
|
bug,platform:macos,topic:rendering
|
low
|
Critical
|
2,795,747,763
|
go
|
x/tools/gopls: preserve comments when invoking fillstruct on partially filled composite literals
|
It looks like in #39804, there was a desire to preserve comments when filling partial literals, but it was deemed too challenging.
I think with recent work by @madelinekalil to reassemble the resulting literal, this should be a solvable problem, and a nice UX improvement.
Tentatively assigning for v0.18.0. It would be nice to bundle this improvement.
|
FeatureRequest,gopls,Tools
|
low
|
Minor
|
2,795,775,400
|
react-native
|
fetch does not work with the android content:// uri scheme
|
### Description
fetch does not handle the Android 'content://' scheme.
It specifically fails [here](https://github.com/JakeChampion/fetch/blob/ba5cf1ed2e02ebb96fa1e60b4fd2eb04071b60e4/fetch.js#L547): the success status is 0 for blobs, but the scheme is 'content://' rather than 'file://', which is the only local scheme the check accepts.
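The shape of the fix would be to treat 'content://' as a local scheme alongside 'file://' when rewriting the zero status. A hypothetical Python port of the check for illustration (the real logic is the linked fetch.js line):

```python
def normalize_local_status(status: int, url: str) -> int:
    """Local URL schemes report HTTP status 0, which fetch rewrites to 200.
    The upstream check only allows file:// URLs; adding content:// here is
    the proposed fix. Hypothetical sketch, not the actual polyfill code."""
    LOCAL_SCHEMES = ("file://", "content://")
    if status == 0 and url.startswith(LOCAL_SCHEMES):
        return 200
    return status
```

With this change, a blob fetched from a 'content://' URI would no longer hit the `RangeError: Failed to construct 'Response'` path when the Response is built from status 0.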
### Steps to reproduce
1. Install the application with yarn android
2. Select a (non-large) file, granting limited permissions. This should produce a URL using the 'content://' scheme on newer Android versions (API 35 for me)
3. The app crashes on the JS side of fetch
### React Native Version
0.76.6
### Affected Platforms
Runtime - Android
### Output of `npx react-native info`
```text
System:
OS: macOS 14.4
CPU: (16) arm64 Apple M3 Max
Memory: 231.39 MB / 48.00 GB
Shell:
version: 3.2.57
path: /bin/bash
Binaries:
Node:
version: 20.17.0
path: /usr/local/bin/node
Yarn:
version: 1.22.19
path: /usr/local/bin/yarn
npm:
version: 10.8.3
path: /usr/local/bin/npm
Watchman: Not Found
Managers:
CocoaPods:
version: 1.15.2
path: /opt/homebrew/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.5
- iOS 17.5
- macOS 14.5
- tvOS 17.5
- visionOS 1.2
- watchOS 10.5
Android SDK:
API Levels:
- "23"
- "27"
- "28"
- "29"
- "30"
- "31"
- "32"
- "33"
- "33"
- "33"
- "34"
- "35"
Build Tools:
- 19.1.0
- 20.0.0
- 21.1.2
- 22.0.1
- 23.0.1
- 23.0.2
- 23.0.3
- 24.0.0
- 24.0.1
- 24.0.2
- 24.0.3
- 25.0.0
- 25.0.1
- 25.0.2
- 25.0.3
- 26.0.0
- 26.0.1
- 26.0.2
- 26.0.3
- 27.0.0
- 27.0.1
- 27.0.2
- 27.0.3
- 28.0.0
- 28.0.1
- 28.0.2
- 28.0.3
- 29.0.0
- 29.0.1
- 29.0.2
- 29.0.3
- 30.0.0
- 30.0.1
- 30.0.2
- 30.0.3
- 31.0.0
- 32.0.0
- 32.1.0
- 33.0.0
- 33.0.1
- 33.0.2
- 34.0.0
- 34.0.0
- 34.0.0
- 34.0.0
- 35.0.0
System Images:
- android-30 | Google Play ARM 64 v8a
- android-32 | Google Play ARM 64 v8a
- android-33 | Google Play ARM 64 v8a
Android NDK: Not Found
IDEs:
Android Studio: 2022.3 AI-223.8836.35.2231.10811636
Xcode:
version: 15.4/15F31d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.8.1
path: /usr/bin/javac
Ruby:
version: 3.3.0
path: /opt/homebrew/opt/ruby/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.0.1
wanted: 15.0.1
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.6
wanted: 0.76.6
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
RangeError: Failed to construct 'Response': The status provided (0) is outside the range [200, 599].
the crash is at [this line](https://github.com/JakeChampion/fetch/blob/ba5cf1ed2e02ebb96fa1e60b4fd2eb04071b60e4/fetch.js#L547) in fetch (js)
```
### Reproducer
https://github.com/giantslogik/blob-large-file-fetch
### Screenshots and Videos
_No response_
|
🌐Networking,Platform: Android,Needs: Triage :mag:
|
low
|
Critical
|
2,795,824,628
|
pytorch
|
partitioner hangs for some long chains of ops with many users
|
Causing the compile hang / NCCL timeout in https://fb.workplace.com/groups/1075192433118967/posts/1585106652127540/?comment_id=1585174555454083
Here's a minimal repro, which still hangs for me after several minutes of compiling:
```python
import torch
import time
class Mod(torch.nn.Module):
def forward(self, x):
tmps = [x + i for i in range(32)]
tmps = [x + tmp for tmp in tmps]
for i in range(len(tmps) - 4):
tmps[i] = tmps[i].sin().mul(tmps[i])
tmps[i + 1] -= tmps[i]
tmps[i + 2] -= tmps[i]
tmps[i + 3] -= tmps[i]
return sum(tmps)
m = Mod()
m = torch.compile(m, backend="aot_eager_decomp_partition")
x = torch.randn(4, 4, requires_grad=True)
start = time.time()
out = m(x)
end = time.time()
print(end - start)
```
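For illustration only (an assumption about the mechanism, not a profile of the partitioner): each `tmps[i]` in the repro feeds the next three entries, so the number of distinct producer-to-consumer paths through the chain grows exponentially with chain length, which is the kind of blowup that can turn a graph traversal into a hang:

```python
def count_paths(n: int, fanout: int = 3) -> int:
    """Count paths from node 0 to node n-1 when node i feeds nodes i+1..i+fanout."""
    paths = [0] * n
    paths[0] = 1
    for i in range(1, n):
        # A path reaches node i from any of the previous `fanout` nodes.
        paths[i] = sum(paths[max(0, i - fanout):i])
    return paths[-1]

# Tribonacci-like growth: roughly 1.84x more paths per extra op in the chain.
print(count_paths(5), count_paths(20))
```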
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @yf225
|
high priority,triaged,oncall: pt2,module: aotdispatch,module: pt2-dispatcher
|
low
|
Minor
|
2,795,826,351
|
godot
|
[4.4-beta.1] Editor reads keyboard inputs incorrectly when using uncommon Keyboard Layout BÉPO
|
### Tested versions
- Reproducible in : 4.4.beta1
- Not Reproducible in : 4.3.stable
### System information
Godot v4.4.beta1 - Windows 10 (build 19045) - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1070 (NVIDIA; 32.0.15.6603) - Intel(R) Core(TM) i5-7600K CPU @ 3.80GHz (4 threads)
### Issue description
I use an uncommon keyboard layout on Windows called BÉPO [bepo-1.1rc2-full-azerty.exe](https://cdn.bepo.fr/windows/1.1rc2/bepo-1.1rc2-full-azerty.exe) that emulates shortcuts from the AZERTY layout, like Ctrl+S, Ctrl+C, Ctrl+V, etc.
My shortcuts no longer work, which means I can't copy, paste, or save in the script editor with the shortcuts I use everywhere else, and since those shortcuts aren't editable I can't rebind them.
Godot seems to have changed the way it reads input, from the physical key position to the letter that position would produce.
If I try to record the Save shortcut (Ctrl+S) on Godot4.3, it reads :
"Ctrl+S or Ctrl+S (Physical) or Ctrl+U (Unicode)"
If I try the same on Godot4.4, it reads :
"**Ctrl+U** or Ctrl+S (Physical) or Ctrl+U (Unicode)"
### Steps to reproduce
Install AZERTY layout
Install the BÉPO layout [bepo-1.1rc2-full-azerty.exe](https://cdn.bepo.fr/windows/1.1rc2/bepo-1.1rc2-full-azerty.exe) (the one using AZERTY shortcuts)
Open Godot4.4 and navigate towards Editor > Editor Settings > Shortcuts
Try to bind shortcuts that use the Ctrl key in both layouts
Notice the recorded shortcut is not the same in both layouts (it is the same in Godot 4.3)
### Minimal reproduction project (MRP)
N/A
|
bug,platform:windows,topic:input
|
low
|
Minor
|
2,795,836,056
|
go
|
x/tools/gopls/internal/analyzer/modernize: bug in slices.Contains transformation
|
[split out of https://github.com/golang/go/issues/70815#issuecomment-2598663385]
@findleyr says:
I encountered a bug in the slices.Contains modernizer today.
Modernizing using slices.Contains here results in an error:
https://cs.opensource.google/go/x/tools/+/master:gopls/internal/golang/highlight.go;l=348;drc=344e48255740736de8c8277e9a286cf3231c7e13
```go
case *ast.CallExpr:
// If cursor is an arg in a callExpr, we don't want control flow highlighting.
if i > 0 {
for _, arg := range n.Args {
if arg == path[i-1] {
return
}
}
}
```
Error is: `S (type []ast.Expr) does not satisfy ~[]E`.
For now, I think we need to check that the types in the match condition are identical.
|
gopls,Tools,BugReport
|
low
|
Critical
|
2,795,841,407
|
neovim
|
Too many redraws when reparsing becomes async
|
### Problem
The highlighter runs an async parse in `on_win`, which redraws in the callback to update once parsing finishes. This redraw is *not* run when the parsing completes synchronously, because we can just call `on_line` afterwards, making the extra redraw unnecessary. Reparses will *usually* run synchronously because they are much quicker, so we don't need an async parse all the time.
*However*, some files are so large that even reparsing will get broken up over multiple event loop iterations. In this case, a reparse triggers a redraw, which triggers `on_win`, which triggers a reparse, which triggers a redraw, etc. The result is that large files (like the big linux one) will have CPU usage at 100% when idling in a buffer, and high amounts of flickering when scrolling.
### Steps to reproduce
Download https://raw.githubusercontent.com/torvalds/linux/master/drivers/gpu/drm/amd/include/asic_reg/dcn/dcn_3_2_0_sh_mask.h
```lua
-- minimal.lua
for name, url in pairs {
nvim_treesitter = 'https://github.com/nvim-treesitter/nvim-treesitter.git',
} do
local install_path = vim.fn.fnamemodify('nvim_issue/' .. name, ':p')
if vim.fn.isdirectory(install_path) == 0 then
vim.fn.system { 'git', 'clone', '--depth=1', url, install_path }
end
vim.opt.runtimepath:append(install_path)
end
-- Increase if needed
vim.o.rdt = 6000
require('nvim-treesitter.configs').setup {
ensure_installed = { 'lua', 'cpp', 'c', 'comment' },
highlight = { enable = true },
}
```
run
`nvim --clean -u minimal.lua dcn_3_2_0_sh_mask.h`
Note that CPU usage is consistently at 100% due to the above reasons
### Expected behavior
Reparsing should not be run in the first place; rather the `LanguageTree` should recognize that the region is valid and should return the trees immediately.
A proper solution (in my opinion) is to let `is_valid()` accept a range, which will allow it to tell if a languagetree is valid over that range. The downside is it will no longer be able to flatten `_valid` to `true` (since we won't know if it is valid for all possible ranges). IMO this is still overall an optimization
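A toy sketch of the proposed range-aware validity (Python pseudocode of the idea, not Neovim's Lua API; all names are illustrative): validity is kept per invalidated region instead of being flattened to a single boolean, so a query for an untouched window range can skip the reparse:

```python
class TreeValidity:
    """Track edited regions so validity can be answered per range."""

    def __init__(self):
        self.invalid = []  # list of (start_row, end_row) half-open edited regions

    def edit(self, start: int, end: int) -> None:
        self.invalid.append((start, end))

    def is_valid(self, start: int, end: int) -> bool:
        # Valid iff the queried range overlaps no invalidated region.
        return not any(s < end and start < e for s, e in self.invalid)

    def reparse(self) -> None:
        self.invalid.clear()
```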
### Nvim version (nvim -v)
v0.11.0-nightly+e8a6c1b
### Vim (not Nvim) behaves the same?
NA
### Operating system/version
NixOS 25.05
### Terminal name/version
Ghostty 1.0.1
### $TERM environment variable
xterm-ghostty
### Installation
Nixpkgs nightly overlay
|
highlight,treesitter
|
low
|
Critical
|
2,795,860,904
|
rust
|
Tracking issue for release notes of #132268: Impl TryFrom<Vec<u8>> for String
|
This issue tracks the release notes text for #132268.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Category (e.g. Language, Compiler, Libraries, Compatibility notes, ...)
- [Impl TryFrom<Vec<u8>> for String](https://github.com/rust-lang/rust/pull/132268)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @elichai, @Amanieu -- origin issue/PR authors and assignees for starting to draft text
|
T-libs-api,relnotes,needs-triage,relnotes-tracking-issue
|
low
|
Minor
|
2,795,860,973
|
rust
|
Tracking issue for release notes of #91399: Tracking Issue for `float_next_up_down`
|
This issue tracks the release notes text for #91399.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Category (e.g. Language, Compiler, Libraries, Compatibility notes, ...)
- [Tracking Issue for `float_next_up_down`](https://github.com/rust-lang/rust/issues/91399)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @yaahc -- origin issue/PR authors and assignees for starting to draft text
|
T-libs-api,relnotes,A-floating-point,needs-triage,relnotes-tracking-issue
|
low
|
Minor
|
2,795,873,185
|
godot
|
Linux unable to compile `windows template_release` with `lto=full production=yes`
|
### Tested versions
4.4 master
### System information
ArchLinux 6.12.9-zen1-1.1-zen (64)
### Issue description
Full command:
`pyston-scons platform=windows target=template_release module_mono_enabled=yes linker=mold lto=full optimize=size production=yes vulkan=no speechd=no fast_unsafe=yes disable_3d=yes disable_2d_physics=yes disable_3d_physics=yes disable_navigation=yes openxr=no rendering_device=no`
Error:
```
./core/templates/cowdata.h: In function '_copy_on_write.constprop.isra':
./core/templates/cowdata.h:301: internal compiler error: in binds_to_current_def_p, at symtab.cc:2589
301 | typename CowData<T>::USize CowData<T>::_copy_on_write() {
0x1d64449 internal_error(char const*, ...)
???:0
0x6eb1a2 fancy_abort(char const*, int, char const*)
???:0
0xd425a8 ref_maybe_used_by_stmt_p(gimple*, ao_ref*, bool)
???:0
0xd6bb99 dse_classify_store(ao_ref*, gimple*, bool, simple_bitmap_def*, bool*, tree_node*)
???:0
Please submit a full bug report, with preprocessed source (by using -freport-bug).
Please include the complete backtrace with any bug report.
See <https://bugs.archlinux.org/> for instructions.
make: *** [/tmp/ccVBDCsI.mk:374: /tmp/ccebIWPQ.ltrans124.ltrans.o] Error 1
make: *** Waiting for unfinished jobs....
during GIMPLE pass: dse
./core/templates/cowdata.h: In member function '_copy_on_write.isra':
./core/templates/cowdata.h:301:28: internal compiler error: in binds_to_current_def_p, at symtab.cc:2589
301 | typename CowData<T>::USize CowData<T>::_copy_on_write() {
| ^
0x1d64449 internal_error(char const*, ...)
???:0
0x6eb1a2 fancy_abort(char const*, int, char const*)
???:0
0xd425a8 ref_maybe_used_by_stmt_p(gimple*, ao_ref*, bool)
???:0
0xd6bb99 dse_classify_store(ao_ref*, gimple*, bool, simple_bitmap_def*, bool*, tree_node*)
???:0
Please submit a full bug report, with preprocessed source (by using -freport-bug).
Please include the complete backtrace with any bug report.
See <https://bugs.archlinux.org/> for instructions.
make: *** [/tmp/ccVBDCsI.mk:383: /tmp/ccebIWPQ.ltrans127.ltrans.o] Error 1
during GIMPLE pass: dse
./core/templates/cowdata.h: In member function '_copy_on_write.isra':
./core/templates/cowdata.h:301: internal compiler error: in binds_to_current_def_p, at symtab.cc:2589
301 | typename CowData<T>::USize CowData<T>::_copy_on_write() {
0x1d64449 internal_error(char const*, ...)
???:0
0x6eb1a2 fancy_abort(char const*, int, char const*)
???:0
0xd425a8 ref_maybe_used_by_stmt_p(gimple*, ao_ref*, bool)
???:0
0xd6bb99 dse_classify_store(ao_ref*, gimple*, bool, simple_bitmap_def*, bool*, tree_node*)
???:0
Please submit a full bug report, with preprocessed source (by using -freport-bug).
Please include the complete backtrace with any bug report.
See <https://bugs.archlinux.org/> for instructions.
make: *** [/tmp/ccVBDCsI.mk:377: /tmp/ccebIWPQ.ltrans125.ltrans.o] Error 1
during GIMPLE pass: dse
./core/templates/cowdata.h: In member function '_copy_on_write.isra':
./core/templates/cowdata.h:301: internal compiler error: in binds_to_current_def_p, at symtab.cc:2589
301 | typename CowData<T>::USize CowData<T>::_copy_on_write() {
0x1d64449 internal_error(char const*, ...)
???:0
0x6eb1a2 fancy_abort(char const*, int, char const*)
???:0
0xd425a8 ref_maybe_used_by_stmt_p(gimple*, ao_ref*, bool)
???:0
0xd6bb99 dse_classify_store(ao_ref*, gimple*, bool, simple_bitmap_def*, bool*, tree_node*)
???:0
Please submit a full bug report, with preprocessed source (by using -freport-bug).
Please include the complete backtrace with any bug report.
See <https://bugs.archlinux.org/> for instructions.
make: *** [/tmp/ccVBDCsI.mk:380: /tmp/ccebIWPQ.ltrans126.ltrans.o] Error 1
during GIMPLE pass: dse
./core/object/ref_counted.h: In member function 'instantiate.constprop':
./core/object/ref_counted.h:191:14: internal compiler error: in binds_to_current_def_p, at symtab.cc:2589
191 | void instantiate(VarArgs... p_params) {
| ^
0x1d64449 internal_error(char const*, ...)
???:0
0x6eb1a2 fancy_abort(char const*, int, char const*)
???:0
0xd425a8 ref_maybe_used_by_stmt_p(gimple*, ao_ref*, bool)
???:0
0xd6bb99 dse_classify_store(ao_ref*, gimple*, bool, simple_bitmap_def*, bool*, tree_node*)
???:0
Please submit a full bug report, with preprocessed source (by using -freport-bug).
Please include the complete backtrace with any bug report.
See <https://bugs.archlinux.org/> for instructions.
make: *** [/tmp/ccVBDCsI.mk:371: /tmp/ccebIWPQ.ltrans123.ltrans.o] Error 1
during GIMPLE pass: dse
core/string/ustring.cpp: In function 'num':
core/string/ustring.cpp:1601: internal compiler error: in binds_to_current_def_p, at symtab.cc:2589
1601 | String String::num(double p_num, int p_decimals) {
0x1d64449 internal_error(char const*, ...)
???:0
0x6eb1a2 fancy_abort(char const*, int, char const*)
???:0
0xd425a8 ref_maybe_used_by_stmt_p(gimple*, ao_ref*, bool)
???:0
0xd6bb99 dse_classify_store(ao_ref*, gimple*, bool, simple_bitmap_def*, bool*, tree_node*)
???:0
Please submit a full bug report, with preprocessed source (by using -freport-bug).
Please include the complete backtrace with any bug report.
See <https://bugs.archlinux.org/> for instructions.
make: *** [/tmp/ccVBDCsI.mk:344: /tmp/ccebIWPQ.ltrans114.ltrans.o] Error 1
lto-wrapper: fatal error: make returned 2 exit status
compilation terminated.
/usr/lib/gcc/x86_64-w64-mingw32/14.2.0/../../../../x86_64-w64-mingw32/bin/ld: error: lto-wrapper failed
collect2: error: ld returned 1 exit status
scons: *** [bin/godot.windows.template_release.x86_64.mono.exe] Error 1
scons: building terminated because of errors.
```
### Steps to reproduce
Use `platform=windows target=template_release linker=mold lto=full production=yes` to compile Godot on Linux
### Minimal reproduction project (MRP)
N/A
|
bug,platform:linuxbsd,topic:buildsystem,needs testing
|
low
|
Critical
|
2,795,884,718
|
godot
|
iOS export fails with provisioning profile error
|
### Tested versions
4.4.beta1.mono
### System information
Godot v4.4.beta1.mono - macOS Sonoma (14.6.1) - Multi-window, 1 monitor - Metal (Mobile) - integrated Apple M2 (Apple8) - Apple M2 (8 threads)
### Issue description
iOS export was working in Godot 4.3 stable but fails to create the .ipa when using version 4.4 beta 1:
error: "rockhopper" requires a provisioning profile. Select a provisioning profile in the Signing & Capabilities editor.
I'm using the same export settings as the version that is currently live on the App Store:
Export Method Release: Development
Code Sign Identity Release: iPhone Developer
Provisioning Profile UUID Release: [EMPTY]
Previously, this would create the ipa file and Xcode project with "Automatically manage signing" set.
### Steps to reproduce
Reproducible with a minimal Godot project.
Export project to iOS (debug or release)
### Minimal reproduction project (MRP)
N/A
|
bug,platform:ios,topic:editor,topic:export
|
low
|
Critical
|
2,795,896,228
|
node
|
Excessive slowness during AES-GCM decryption
|
### Version
22.2.0
### Platform
```text
Darwin MBP.local 21.6.0 Darwin Kernel Version 21.6.0: Wed Aug 10 14:28:23 PDT 2022; root:xnu-8020.141.5~2/RELEASE_ARM64_T6000 arm64
```
### Subsystem
_No response_
### What steps will reproduce the bug?
Profile the following:
```js
import webcrypto from 'tiny-webcrypto';
import encryptor from 'tiny-encryptor';
const SECRET = 'P@ssword!';
const SAMPLE_1GB = new Uint8Array ( 1024 * 1024 * 1024 );
const enc = await encryptor.encrypt ( SAMPLE_1GB, SECRET );
const dec = await encryptor.decrypt ( enc, SECRET );
```
It should produce a trace like this:
<img width="1152" alt="Image" src="https://github.com/user-attachments/assets/778a4a5f-a214-4b0c-b299-302238da9b99" />
We can see that decryption is way slower than encryption, but how come? There is an arguably useless [copy of the data buffer](https://github.com/nodejs/node/blob/74717cb7fa21eb7d7c2abc579334f28c66d96fb0/lib/internal/crypto/aes.js#L187) when decrypting; I think it would be entirely reasonable to delete this copy and take a subarray instead. It's obvious that you shouldn't modify something while it's being processed by something else if you don't want problems.
This copy has a tangible cost but a very intangible benefit, especially since an authentication tag is used: if the underlying buffer gets modified under our nose, the decryption should fail anyway.
So I think we should switch to a subarray instead of a slice there and get rid of this unnecessary slowness.
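The difference is easy to demonstrate with plain TypedArray semantics (this is standard JS behavior, not Node's crypto internals): `.slice()` allocates and copies, while `.subarray()` returns a zero-copy view over the same backing memory, which is why later mutations show through it:

```javascript
const buf = new Uint8Array([1, 2, 3, 4]);
const copy = buf.slice(1, 3);    // copies: new backing store
const view = buf.subarray(1, 3); // aliases: same backing store, no copy

buf[1] = 99;
// copy still holds the old bytes; view observes the mutation.
console.log(copy[0], view[0]); // 2 99
```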
### How often does it reproduce? Is there a required condition?
Always.
### What is the expected behavior? Why is that the expected behavior?
No slowness caused by operations of dubious utility at best.
### What do you see instead?
Slowness caused by operations of dubious utility at best.
### Additional information
_No response_
|
crypto
|
low
|
Critical
|
2,795,899,526
|
kubernetes
|
[Sig-Network] Networking Granular Checks: Pods should function for intra-pod communication: sctp [LinuxOnly] [Feature:SCTPConnectivity]
|
### Which jobs are failing?
- https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv2-containerd-node-features
- https://testgrid.k8s.io/sig-node-containerd#node-e2e-features
- https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv1-containerd-node-features
### Which tests are failing?
[Sig-Network] Networking Granular Checks: Pods should function for intra-pod communication: sctp [LinuxOnly] [Feature:SCTPConnectivity]
### Since when has it been failing?
As of Jan 16th.
### Testgrid link
https://testgrid.k8s.io/sig-node-containerd#cos-cgroupv2-containerd-node-features
### Reason for failure (if possible)
We dropped NodeFeature in all of our sig-node jobs. This test is being picked up but it isn't obvious to me why it is failing.
I can filter it out but I wanted to bring this up here to see if this is actually an issue.
### Anything else we need to know?
https://github.com/kubernetes/kubernetes/blob/master/test/e2e/common/network/networking.go#L135
The other test is skipped.
### Relevant SIG(s)
/sig networking
|
sig/network,kind/failing-test,needs-triage
|
low
|
Critical
|
2,795,908,353
|
PowerToys
|
[Settings] ImageResizer settings files may be saved too often
|
### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Settings
### Steps to reproduce
1. Open the Settings application and navigate to the ImageResizer page
2. Click on the Edit button for any one of the listed size presets
   
3. Click on either Width or Height in the flyout edit dialog
4. Click or hold down on either the up or down arrow
   
5. Observe that the "settings.json" and "sizes.json" files are saved after every change in the width or height value
### ✔️ Expected Behavior
The settings files are only saved after a short delay, to account for multiple edits in a short period of time.
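The expected behavior amounts to debouncing the save (an illustrative sketch, not PowerToys' C# code): collapse a burst of rapid changes into a single write after a quiet period.

```javascript
// Generic debounce: only the last call in a burst runs, after `delayMs` of quiet.
function debounce(fn, delayMs) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Each rapid width/height change would invoke the debounced save, but only
// one file write happens once the user stops clicking the arrows.
```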
### ❌ Actual Behavior
The settings files are saved immediately after every change, even rapid changes via the up/down arrows.
### Other Software
_No response_
|
Issue-Bug,Idea-Enhancement,Product-Settings,Area-Quality,Needs-Triage
|
low
|
Major
|
2,795,954,517
|
PowerToys
|
Auto Text Expand or Auto Text
|
### Description of the new feature / enhancement
Integrate an "Auto Text" feature into Microsoft PowerToys, providing users with advanced text input tools. This feature would include capabilities such as auto-correction, text expansion, customizable shortcuts, predictive text, and multi-language support. The goal is to enhance typing efficiency and accuracy across all Windows applications.
### Scenario when this would be used?
The feature would be useful in the following scenarios:
Professional Writing: When users are drafting emails, reports, or documents, this feature can auto-correct typos, suggest better word choices, or complete frequently used phrases.
Code Writing: Developers can use customizable text expansions or shortcuts for repetitive code snippets, improving productivity.
Language Learning: Users working in a second language can benefit from real-time grammar suggestions and translation tools.
Accessibility Needs: Individuals with physical disabilities or typing difficulties can use predictive text to reduce effort and improve input speed.
### Supporting information
Customizability: Allow users to create custom dictionaries, phrase expansions, and configure features to fit their needs.
Machine Learning Models: Incorporate lightweight, on-device AI models to ensure privacy and enhance predictive accuracy.
Cross-App Functionality: Ensure the feature works seamlessly across all applications, including browsers, text editors, and command-line interfaces.
Existing Examples: Inspiration could be drawn from tools like TextExpander, Grammarly, or Apple's QuickType, but designed to align with PowerToys' open-source, modular approach.
This feature would empower users to work more efficiently and effectively, making it a valuable addition to the PowerToys suite.
|
Needs-Triage
|
low
|
Minor
|
2,795,954,591
|
vscode
|
Shell type should be undefined when running an unrecognized shell
|
Repro:
1. Open pwsh on Windows
2. Open R shell
3. Try paste with ctrl+v, 🐛 doesn't work
Context: https://github.com/microsoft/vscode/issues/238126#issuecomment-2598624657
|
feature-request,terminal-shell-integration
|
low
|
Minor
|
2,795,954,655
|
pytorch
|
[torchbench] torch._dynamo.exc.Unsupported: Graph break due to unsupported builtin None.morphologyEx
|
### 🐛 Describe the bug
Repro:
```
python benchmarks/dynamo/torchbench.py --accuracy --no-translation-validation --inference --amp --export --disable-cudagraphs --device cuda --only doctr_det_predictor
```
```
cuda eval doctr_det_predictor
ERROR:common:
Traceback (most recent call last):
File "/data/users/ivankobzarev/a/pytorch/benchmarks/dynamo/common.py", line 3055, in check_accuracy
optimized_model_iter_fn = optimize_ctx(
File "/data/users/ivankobzarev/a/pytorch/benchmarks/dynamo/common.py", line 1623, in export
ep = torch.export.export(
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/export/__init__.py", line 270, in export
return _export(
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/export/_trace.py", line 1017, in wrapper
raise e
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/export/_trace.py", line 990, in wrapper
ep = fn(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/export/exported_program.py", line 114, in wrapper
return fn(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/export/_trace.py", line 1880, in _export
export_artifact = export_func( # type: ignore[operator]
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/export/_trace.py", line 1224, in _strict_export
return _strict_export_lower_to_aten_ir(
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/export/_trace.py", line 1252, in _strict_export_lower_to_aten_ir
gm_torch_level = _export_to_torch_ir(
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/export/_trace.py", line 560, in _export_to_torch_ir
gm_torch_level, _ = torch._dynamo.export(
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 1432, in inner
result_traced = opt_f(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 1269, in __call__
return self._torchdynamo_orig_callable(
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 526, in __call__
return _compile(
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 924, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 666, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 699, in _compile_inner
out_code = transform_code_object(code, transform)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/bytecode_transformation.py", line 1322, in transform_code_object
transformations(instructions, code_options)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 219, in _fn
return fn(*args, **kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 634, in transform
tracer.run()
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 2796, in run
super().run()
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1602, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/variables/user_defined.py", line 928, in call_function
return self.call_method(tx, "__call__", args, kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/variables/user_defined.py", line 788, in call_method
return UserMethodVariable(method, self, source=source).call_function(
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 385, in call_function
return super().call_function(tx, args, kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 324, in call_function
return super().call_function(tx, args, kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1602, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1602, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 111, in call_function
return tx.inline_user_function_return(self, [*self.self_args(), *args], kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 836, in inline_user_function_return
return InliningInstructionTranslator.inline_call(self, fn, args, kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3011, in inline_call
return cls.inline_call_(parent, func, args, kwargs)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 3139, in inline_call_
tracer.run()
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 983, in run
while self.step():
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 582, in wrapper
return inner_fn(self, inst)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 1602, in CALL_FUNCTION
self.call_function(fn, args, {})
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/symbolic_convert.py", line 830, in call_function
self.push(fn.call_function(self, args, kwargs)) # type: ignore[arg-type]
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/variables/functions.py", line 727, in call_function
unimplemented(msg)
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 297, in unimplemented
raise Unsupported(msg, case_name=case_name)
torch._dynamo.exc.Unsupported: Graph break due to unsupported builtin None.morphologyEx. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page.html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
from user code:
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/doctr/models/detection/differentiable_binarization/pytorch.py", line 211, in forward
for preds in self.postprocessor(prob_map.detach().cpu().permute((0, 2, 3, 1)).numpy())
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/doctr/models/detection/core.py", line 90, in __call__
bin_map = [
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/doctr/models/detection/core.py", line 91, in <listcomp>
[
File "/home/ivankobzarev/local/a/pytorch-env/lib/python3.10/site-packages/doctr/models/detection/core.py", line 92, in <listcomp>
cv2.morphologyEx(bmap[..., idx], cv2.MORPH_OPEN, self._opening_kernel)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
TorchDynamo optimized model failed to run because of following error
fail_to_run
```
### Error logs
_No response_
### Versions
torch main Jan 17
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
|
oncall: pt2,module: dynamo,oncall: export,pt2-pass-rate-regression
|
low
|
Critical
|
2,795,963,924
|
PowerToys
|
Automation for PowerToys
|
### Description of the new feature / enhancement
Integrate a "Microsoft Automation" feature into Microsoft PowerToys, enabling users to automate repetitive tasks and workflows directly within the PowerToys suite. This feature would provide tools for creating scripts, automating UI interactions, and streamlining processes such as file management, text manipulation, and application control.
### Scenario when this would be used?
The Microsoft Automation feature would be useful in the following scenarios:
Routine Task Automation: Users can automate repetitive actions like renaming files, copying data, or scheduling tasks, saving time and effort.
UI Interaction Automation: Automate mouse clicks, keyboard inputs, or menu navigation for tasks requiring repetitive user interface interactions.
Custom Workflow Creation: Build custom scripts to integrate various applications or services for complex workflows, such as exporting data from one app and formatting it for another.
Enhanced Productivity: Automate actions like launching a set of applications or performing batch operations, improving overall productivity.
Accessibility Needs: Help users with physical disabilities by automating actions they find difficult to perform manually.
### Supporting information
Integration with PowerToys Philosophy: Microsoft Automation would align with PowerToys' goal of empowering users with advanced, customizable tools to enhance their productivity.
Low-Code/No-Code Interface: Provide an intuitive drag-and-drop or low-code interface for users without programming experience. Advanced users could also utilize scripting for greater flexibility.
Cross-Application Compatibility: Ensure automation scripts work seamlessly across various Windows applications and environments.
Prebuilt Templates: Offer templates for common automation tasks (e.g., bulk renaming, text formatting, data processing) to simplify the user experience.
Existing Inspiration: Features could draw inspiration from tools like Power Automate Desktop, AutoHotkey, or macro recorders, tailored to PowerToys' modular and open-source framework.
Adding Microsoft Automation to PowerToys would make the suite a more comprehensive tool for power users, enabling them to automate tasks effortlessly and improve their daily workflows.
|
Needs-Triage
|
low
|
Minor
|
2,795,974,752
|
godot
|
SpringBoneSimulator3D raises Node3D asserts when exiting tree
|
### Tested versions
- v4.4.beta1.official [d33da79d3]
### System information
Godot v4.4.beta1 - Windows 11 (build 22631) - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2070 SUPER (NVIDIA; 32.0.15.6603) - Intel(R) Core(TM) i7-10700F CPU @ 2.90GHz (16 threads)
### Issue description
Whenever an active SpringBoneSimulator3D exits the scene tree, it raises this assert: https://github.com/godotengine/godot/blob/d33da79d3f8fe84be2521d25b9ba8e440cf25a88/scene/3d/node_3d.cpp#L466
When the modifier is made inactive, the asserts no longer occur.
### Steps to reproduce
The MRP sets this up in a simple demo scene: press Space to toggle removing/adding the tail armature to the scene tree.
### Minimal reproduction project (MRP)
[springbone_error_2025-01-17_11-29-16.zip](https://github.com/user-attachments/files/18457939/springbone_error_2025-01-17_11-29-16.zip)
|
bug,topic:animation,topic:3d
|
low
|
Critical
|
2,795,988,104
|
ant-design
|
After setting `scroll`, the Table component's header flickers when switching routes
|
### Reproduction link
[](https://codesandbox.io/p/sandbox/84lz8y)
### Steps to reproduce
Switch back and forth between routes; the header flickers briefly.
### What is expected?
No flickering.
### What is actually happening?
The header flickers whenever the route is switched.
| Environment | Info |
| --- | --- |
| antd | 5.23.1 |
| React | "react": "18.2.0", |
| System | mac |
| Browser | Chrome 131.0.6778.205 (Official Build) (x86_64) |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
|
unconfirmed
|
low
|
Minor
|
2,796,017,429
|
ui
|
[bug]:The list width is 2 pixels wider than the trigger width
|
### Describe the bug
The list is 2 pixels wider than the trigger. I haven't tested the component in a project; I just noticed this imperfection in the documentation.
<img width="886" alt="Image" src="https://github.com/user-attachments/assets/a67aee98-cd5e-4ced-bb59-4c0cd8dec9f9" />
### Affected component/components
Select
### How to reproduce
Just see it in docs
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
```
### System Info
```bash
Win11
Chrome 131.0.6778.265
```
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues
|
bug
|
low
|
Critical
|
2,796,058,489
|
flutter
|
[go_router] Implement go_router devtool using DevTools extensions
|
### Use case
Sometimes developers need to navigate to a specific route for testing. This can be challenging if the route originates from an HTTP request or if you need to perform certain actions to invoke the routing action.
Another troublesome issue arises when your app has numerous complex redirects, and you need to debug potential problems.
### Proposal
The proposal is to create a [DevTools extension](https://docs.flutter.dev/tools/devtools/extensions) for go_router.
This `devtool` would have some capabilities like:
- Displaying the registered routes in a tree-like structure, similar to Flutter's widget tree inspector.
- Allowing the invocation of navigation methods (go, push, pop, etc.) directly from within the devtool.
- Holding the navigation history to allow better `redirect` debugging.
|
team-go_router
|
low
|
Critical
|
2,796,061,961
|
node
|
libuv assertion on Windows with Node.js 23.x
|
### Version
23.x
### Platform
```text
Windows
```
### Subsystem
_No response_
### What steps will reproduce the bug?
Write the following file as `registryServer.mjs`:
<details><summary><code>registryServer.mjs</code></summary>
The original file is https://github.com/nodejs/corepack/blob/main/tests/_registryServer.mjs; I tried to trim the unrelated stuff, but it's still a large file:
```js
import { createHash, createSign, generateKeyPairSync } from "node:crypto";
import { once } from "node:events";
import { createServer } from "node:http";
import { gzipSync } from "node:zlib";
let privateKey, keyid;
({ privateKey } = generateKeyPairSync(`ec`, {
namedCurve: `sect239k1`,
}));
const { privateKey: p, publicKey } = generateKeyPairSync(`ec`, {
namedCurve: `sect239k1`,
publicKeyEncoding: {
type: `spki`,
format: `pem`,
},
});
privateKey ??= p;
keyid = `SHA256:${createHash(`SHA256`).end(publicKey).digest(`base64`)}`;
process.env.COREPACK_INTEGRITY_KEYS = JSON.stringify({
npm: [
{
expires: null,
keyid,
keytype: `ecdsa-sha2-sect239k1`,
scheme: `ecdsa-sha2-sect239k1`,
key: publicKey.split(`\n`).slice(1, -2).join(``),
},
],
});
function createSimpleTarArchive(fileName, fileContent, mode = 0o644) {
const contentBuffer = Buffer.from(fileContent);
const header = Buffer.alloc(512); // TAR headers are 512 bytes
header.write(fileName);
header.write(`100${mode.toString(8)} `, 100, 7, `utf-8`); // File mode (octal) followed by a space
header.write(`0001750 `, 108, 8, `utf-8`); // Owner's numeric user ID (octal) followed by a space
header.write(`0001750 `, 116, 8, `utf-8`); // Group's numeric user ID (octal) followed by a space
header.write(`${contentBuffer.length.toString(8)} `, 124, 12, `utf-8`); // File size in bytes (octal) followed by a space
header.write(
`${Math.floor(new Date(2000, 1, 1) / 1000).toString(8)} `,
136,
12,
`utf-8`
); // Last modification time in numeric Unix time format (octal) followed by a space
header.fill(` `, 148, 156); // Fill checksum area with spaces for calculation
header.write(`ustar `, 257, 8, `utf-8`); // UStar indicator
// Calculate and write the checksum. Note: This is a simplified calculation not recommended for production
const checksum = header.reduce((sum, value) => sum + value, 0);
header.write(`${checksum.toString(8)}\0 `, 148, 8, `utf-8`); // Write checksum in octal followed by null and space
return Buffer.concat([
header,
contentBuffer,
Buffer.alloc(512 - (contentBuffer.length % 512)),
]);
}
const mockPackageTarGz = gzipSync(
Buffer.concat([
createSimpleTarArchive(
`package/bin/pnpm.js`,
`#!/usr/bin/env node\nconsole.log("pnpm: Hello from custom registry");\n`,
0o755
),
createSimpleTarArchive(
`package/package.json`,
JSON.stringify({
bin: {
pnpm: `bin/pnpm.js`,
},
})
),
Buffer.alloc(1024),
])
);
const shasum = createHash(`sha1`).update(mockPackageTarGz).digest(`hex`);
const integrity = `sha512-${createHash(`sha512`)
.update(mockPackageTarGz)
.digest(`base64`)}`;
const registry = {
__proto__: null,
pnpm: [`42.9998.9999`],
};
function generateSignature(packageName, version) {
if (privateKey == null) return undefined;
const sign = createSign(`SHA256`).end(
`${packageName}@${version}:${integrity}`
);
return {
integrity,
signatures: [
{
keyid,
sig: sign.sign(privateKey, `base64`),
},
],
};
}
function generateVersionMetadata(packageName, version) {
return {
name: packageName,
version,
bin: {
[packageName]: `./bin/${packageName}.js`,
},
dist: {
shasum,
size: mockPackageTarGz.length,
tarball: `https://registry.npmjs.org/${packageName}/-/${packageName}-${version}.tgz`,
...generateSignature(packageName, version),
},
};
}
const server = createServer((req, res) => {
let slashPosition = req.url.indexOf(`/`, 1);
if (req.url.charAt(1) === `@`)
slashPosition = req.url.indexOf(`/`, slashPosition + 1);
const packageName = req.url.slice(
1,
slashPosition === -1 ? undefined : slashPosition
);
if (packageName in registry) {
if (req.url === `/${packageName}`) {
// eslint-disable-next-line @typescript-eslint/naming-convention
res.end(
JSON.stringify({
"dist-tags": {
latest: registry[packageName].at(-1),
},
versions: Object.fromEntries(
registry[packageName].map((version) => [
version,
generateVersionMetadata(packageName, version),
])
),
})
);
return;
}
const isDownloadingRequest =
req.url.slice(packageName.length + 1, packageName.length + 4) === `/-/`;
let version;
if (isDownloadingRequest) {
const match = /^(.+)-(.+)\.tgz$/.exec(
req.url.slice(packageName.length + 4)
);
if (match?.[1] === packageName) {
version = match[2];
}
} else {
version = req.url.slice(packageName.length + 2);
}
if (version === `latest`) version = registry[packageName].at(-1);
if (registry[packageName].includes(version)) {
res.end(
isDownloadingRequest
? mockPackageTarGz
: JSON.stringify(generateVersionMetadata(packageName, version))
);
} else {
res.writeHead(404).end(`Not Found`);
throw new Error(`unsupported request`, {
cause: { url: req.url, packageName, version, isDownloadingRequest },
});
}
} else {
res.writeHead(500).end(`Internal Error`);
throw new Error(`unsupported request`, {
cause: { url: req.url, packageName },
});
}
});
server.listen(0, `localhost`);
await once(server, `listening`);
const { address, port } = server.address();
process.env.COREPACK_NPM_REGISTRY = `http://user:pass@${
address.includes(`:`) ? `[${address}]` : address
}:${port}`;
server.unref();
```
</details>
Then run the following commands:
```powershell
$env:COREPACK_ENABLE_PROJECT_SPEC=0
$env:NODE_OPTIONS="--import ./registryServer.mjs"
corepack pnpm@42.x --version
```
### How often does it reproduce? Is there a required condition?
Always on Windows with Node.js 23.x; no required condition. Tested with 23.0.0 (libuv 1.48.0), 23.4.0 (libuv 1.49.1), and 23.6.0 (libuv 1.49.2).
It does not reproduce on Linux nor macOS.
It does not reproduce on 22.13.2 (libuv 1.49.2), which makes me think it's not a libuv bug, but a Node.js one.
### What is the expected behavior? Why is that the expected behavior?
No assertion failures; the exit code should be 1.
### What do you see instead?
`Assertion failed: !(handle->flags & UV_HANDLE_CLOSING), file c:\ws\deps\uv\src\win\async.c, line 76`
The exit code is 3221226505.
### Additional information
My initial thought was that it might be related to having an exception thrown while handling an HTTP request, but I wasn't able to reproduce with just that.
|
windows,libuv
|
low
|
Critical
|
2,796,073,888
|
pytorch
|
Inductor aten.clone lowering ignores Conjugate and Negative dispatch keys
|
### 🐛 Describe the bug
In runtime-dispatched `torch`, conjugation and certain forms of negation are lazily evaluated at dispatch. The current lowering for `aten.clone` ignores this. See the minimal reproducer below:
```python
import torch
fn = torch.compile(torch.ops.aten.clone.default) # this issue does not occur in "eager" or "aot_eager"
u = torch.randn(5, dtype=torch.complex64).conj().imag # sets Negative dispatch key
assert torch.all(fn(u) == u) # fails
```
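For context, here is a hedged sketch of the eager-mode behavior the lowering would need to respect: eager `torch` records the pending negation as a tensor bit and materializes it on access, which `resolve_neg` makes explicit. This is my own illustration, not the Inductor fix, and it only runs where `torch` is installed:

```python
import torch

# .conj() sets the Conjugate bit; taking .imag of a conjugated complex
# tensor returns a view with the Negative bit set instead.
u = torch.randn(5, dtype=torch.complex64).conj().imag
assert u.is_neg()  # lazy negation is pending, evaluated at dispatch

# resolve_neg() materializes the pending negation into memory and clears
# the bit; the values are unchanged.
r = torch.resolve_neg(u)
assert not r.is_neg()
assert torch.all(r == u)
```

A correct `aten.clone` lowering would likewise have to account for these bits rather than copying the raw storage.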
### Error logs
_No response_
### Versions
```
Collecting environment information...
PyTorch version: 2.7.0a0+git6759d9c
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (conda-forge gcc 12.4.0-1) 12.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.39
Python version: 3.9.21 | packaged by conda-forge | (main, Dec 5 2024, 13:51:40) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Quadro RTX 8000
GPU 1: Quadro RTX 8000
Nvidia driver version: 560.35.05
cuDNN version: Probably one of the following:
/usr/local/cuda-12.6.3/targets/x86_64-linux/lib/libcudnn.so.9
/usr/local/cuda-12.6.3/targets/x86_64-linux/lib/libcudnn_adv.so.9
/usr/local/cuda-12.6.3/targets/x86_64-linux/lib/libcudnn_cnn.so.9
/usr/local/cuda-12.6.3/targets/x86_64-linux/lib/libcudnn_engines_precompiled.so.9
/usr/local/cuda-12.6.3/targets/x86_64-linux/lib/libcudnn_engines_runtime_compiled.so.9
/usr/local/cuda-12.6.3/targets/x86_64-linux/lib/libcudnn_graph.so.9
/usr/local/cuda-12.6.3/targets/x86_64-linux/lib/libcudnn_heuristic.so.9
/usr/local/cuda-12.6.3/targets/x86_64-linux/lib/libcudnn_ops.so.9
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper 2970WX 24-Core Processor
CPU family: 23
Model: 8
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 77%
CPU max MHz: 3000.0000
CPU min MHz: 2200.0000
BogoMIPS: 5987.89
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid amd_dcm aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb hw_pstate ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca sev sev_es
Virtualization: AMD-V
L1d cache: 768 KiB (24 instances)
L1i cache: 1.5 MiB (24 instances)
L2 cache: 12 MiB (24 instances)
L3 cache: 64 MiB (8 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-5,24-29
NUMA node1 CPU(s): 12-17,36-41
NUMA node2 CPU(s): 6-11,30-35
NUMA node3 CPU(s): 18-23,42-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT vulnerable
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] bert_pytorch==0.0.1a4
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] functorch==1.14.0a0+b71aa0b
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] optree==0.13.0
[pip3] pytorch-labs-segment-anything-fast==0.2
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0a0+git6759d9c
[pip3] torch_geometric==2.4.0
[pip3] torchao==0.7.0
[pip3] torchaudio==2.6.0a0+b6d4675
[pip3] torchdata==0.11.0a0+227d3d7
[pip3] torchmultimodal==0.1.0b0
[pip3] torchtext==0.17.0a0+1d4ce73
[pip3] torchvision==0.22.0a0+d3beb52
[conda] bert-pytorch 0.0.1a4 dev_0 <develop>
[conda] cuda-cudart 12.4.127 he02047a_2 conda-forge
[conda] cuda-cudart-dev 12.4.127 he02047a_2 conda-forge
[conda] cuda-cudart-dev_linux-64 12.4.127 h85509e4_2 conda-forge
[conda] cuda-cudart-static 12.4.127 he02047a_2 conda-forge
[conda] cuda-cudart-static_linux-64 12.4.127 h85509e4_2 conda-forge
[conda] cuda-cudart_linux-64 12.4.127 h85509e4_2 conda-forge
[conda] cuda-cupti 12.4.127 he02047a_2 conda-forge
[conda] cuda-cupti-dev 12.4.127 he02047a_2 conda-forge
[conda] cuda-libraries-dev 12.4.1 ha770c72_1 conda-forge
[conda] cuda-nvrtc 12.4.127 he02047a_2 conda-forge
[conda] cuda-nvrtc-dev 12.4.127 he02047a_2 conda-forge
[conda] cuda-nvtx 12.4.127 he02047a_2 conda-forge
[conda] cuda-nvtx-dev 12.4.127 ha770c72_2 conda-forge
[conda] cuda-opencl 12.4.127 he02047a_1 conda-forge
[conda] cuda-opencl-dev 12.4.127 he02047a_1 conda-forge
[conda] cudnn 9.3.0.75 h62a6f1c_2 conda-forge
[conda] functorch 1.14.0a0+b71aa0b pypi_0 pypi
[conda] libcublas 12.4.5.8 he02047a_2 conda-forge
[conda] libcublas-dev 12.4.5.8 he02047a_2 conda-forge
[conda] libcufft 11.2.1.3 he02047a_2 conda-forge
[conda] libcufft-dev 11.2.1.3 he02047a_2 conda-forge
[conda] libcurand 10.3.5.147 he02047a_2 conda-forge
[conda] libcurand-dev 10.3.5.147 he02047a_2 conda-forge
[conda] libcusolver 11.6.1.9 he02047a_2 conda-forge
[conda] libcusolver-dev 11.6.1.9 he02047a_2 conda-forge
[conda] libcusparse 12.3.1.170 he02047a_2 conda-forge
[conda] libcusparse-dev 12.3.1.170 he02047a_2 conda-forge
[conda] libmagma 2.8.0 h0af6554_0 conda-forge
[conda] libmagma_sparse 2.8.0 h0af6554_0 conda-forge
[conda] libnvjitlink 12.4.127 he02047a_2 conda-forge
[conda] libnvjitlink-dev 12.4.127 he02047a_2 conda-forge
[conda] magma 2.8.0 h51420fd_0 conda-forge
[conda] mkl 2025.0.0 h901ac74_941 conda-forge
[conda] mkl-include 2025.0.0 hf2ce2f3_941 conda-forge
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-labs-segment-anything-fast 0.2 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git0d4682f0 pypi_0 pypi
[conda] torch 2.7.0a0+git6759d9c dev_0 <develop>
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torchao 0.7.0 pypi_0 pypi
[conda] torchaudio 2.6.0a0+b6d4675 pypi_0 pypi
[conda] torchdata 0.11.0a0+227d3d7 pypi_0 pypi
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchmultimodal 0.1.0b0 pypi_0 pypi
[conda] torchtext 0.17.0a0+1d4ce73 pypi_0 pypi
[conda] torchvision 0.22.0a0+d3beb52 pypi_0 pypi
```
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @zou3519 @bdhirsh
|
triaged,module: correctness (silent),bug,oncall: pt2,module: inductor
|
low
|
Critical
|
2,796,074,082
|
TypeScript
|
Branded string literals revert to `string` in some cases
|
### 🔎 Search Terms
If I create a branded string literal, it seems to respect the string literal in the basic case of direct assignment; however, it loses that information and reverts to `string` when the value is used in a template string or as a computed property key.
It makes sense that the brand object needs to be dropped in these cases, but rather than widening to `string`, it should widen to the narrower literal type.
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about branded string
### ⏯ Playground Link
https://www.typescriptlang.org/play/?#code/C4TwDgpgBAsiAyBLYEBOBDANgHgMpQgA8UA7AEwGcoLhVESBzAPigF4p8AyKAbwH0ARhnIAuXgF9xAbgBQAYwD2JGlHRsoAcnQbVVOEhQYcWjU1nylKwmJPr0UqAHpHUAO4LUAawoBCC8uAoEBttdQADABIedHEwh2coADN0RExffxUALzFosRIAVwBbATRxdR4AbXQAXTEADjKE5NTfIA
### 💻 Code
```ts
type MyLiteral<S extends string> = S & {_brand: {}};
const a = 'a' as MyLiteral<'a'>;
const x: 'a' = a; // works!
const y: 'a' = `${a}`; // fails!
const z: {a: number} = {[a]: 8} // fails!
```
### 🙁 Actual behavior
For `y` and `z`, the types don't match because `a` is converted to `string` rather than `'a'`
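For reference, a user-side workaround (my own sketch using the issue's `MyLiteral` definition, not a proposed compiler fix) is to re-assert the literal wherever the brand must be dropped:

```typescript
type MyLiteral<S extends string> = S & {_brand: {}};
const a = 'a' as MyLiteral<'a'>;

// Re-asserting the literal recovers 'a' instead of string:
const y: 'a' = `${a as 'a'}`;           // OK with the assertion
const z: {a: number} = {[a as 'a']: 8}; // OK with the assertion
console.log(y, z.a);
```

This works because the assertion strips the brand intersection before widening is considered, which is exactly what the compiler could do automatically.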
### 🙂 Expected behavior
The types should match in all 3 cases, `x`, `y`, `z`
### Additional information about the issue
_No response_
|
Help Wanted,Possible Improvement
|
low
|
Minor
|
2,796,078,456
|
pytorch
|
Tracking issue: Incorrect Meta Strides / Turn On PyDispatcher in FakeTensor Mode
|
### 🐛 Describe the bug
Incorrect strides can manifest as errors within torch.compile. What potentially makes them trickier is that they only sometimes cause errors: an incorrect stride can lie dormant for a while and then cause a problem.
See, [this discussion](https://github.com/pytorch/pytorch/issues/144699#issuecomment-2591018702) with @ezyang, @bdhirsh and myself about incorrect strides.
There are a number of known issues that are as yet unfixed. Some of them have outstanding PRs; please check with the PR author before taking one on.
- [ ] `full_like`: https://github.com/pytorch/pytorch/issues/144699
- [ ] `_unsafe_index` : https://github.com/pytorch/pytorch/issues/139312
- [ ] `_fft_r2c`: https://github.com/pytorch/pytorch/issues/135087
- [ ] `_constant_pad_nd`: https://github.com/pytorch/pytorch/issues/144187
Additionally, there are a number of stride & other issues that have been exposed by enabling PyDispatcher in FakeTensorMode. This causes us to potentially route through different decompositions and metas. It is what we use in torch.compile, which means we lack coverage of this mode in our other tests.
Tests exposed by [turning this on](https://github.com/pytorch/pytorch/pull/138953#issuecomment-2438965279):
- [ ] dropout
- [ ] MultiLabelMarginLoss
FFT tests as well, but that might be related to `_fft_r2c` in the list of existing issues above.
### Versions
master
cc @chauhang @penguinwu @SherlockNoMad @zou3519 @bdhirsh @yf225
|
triaged,oncall: pt2,module: fakeTensor,module: decompositions,module: pt2-dispatcher
|
low
|
Critical
|
2,796,078,744
|
PowerToys
|
[ImageResizer] Pick up settings changes automatically
|
### Description of the new feature / enhancement
This enhancement would automatically reload any changes made by the user to the Image Resizer options within the Settings application. Currently, making changes requires Image Resizer to be closed and reopened.
Ideally, there would be a brief notification in the Image Resizer UI to indicate that the updated settings had been loaded.
### Scenario when this would be used?
It would allow users to make changes to presets and options like "Filename format" while the application is open, streamlining situations where the existing properties are not quite right for the current operation.
### Supporting information
We rely on the user knowing (without this being documented) that they need to close and reopen the application when making changes in the Settings app. A novice user could easily assume that the changes they make in the Settings app (say to "Filename format" or the file modified option) would be immediately reflected in the open Image Resizer application.
|
Idea-Enhancement,Product-Image Resizer,Needs-Triage
|
low
|
Minor
|
2,796,098,976
|
rust
|
Tracking issue for release notes of #127292: Tracking Issue for PathBuf::add_extension and Path::with_added_extension
|
This issue tracks the release notes text for #127292.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Category (e.g. Language, Compiler, Libraries, Compatibility notes, ...)
- [Tracking Issue for PathBuf::add_extension and Path::with_added_extension](https://github.com/rust-lang/rust/issues/127292)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @tisonkun -- origin issue/PR authors and assignees for starting to draft text
|
T-libs-api,relnotes,needs-triage,relnotes-tracking-issue
|
low
|
Minor
|
2,796,100,180
|
PowerToys
|
[ImageResizer] Images may be saved to a new file but not resized
|
### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Image Resizer
### Steps to reproduce
1. In the Settings application, create a new Image Resizer preset of 4000 by 4000 pixels with the Fit setting "Fit"
2. In Explorer, right-click any image smaller than 4000 x 4000 pixels
3. Select "Resize with Image Resizer" to open Image Resizer
4. Select the preset created in Step 1
5. Ensure "Make pictures smaller but not larger" is checked
6. Click the Resize button
### ✔️ Expected Behavior
Either:
1. No work is done, because the image already fits within 4000 x 4000 pixels and does not need to be resized; or
2. The original file is copied, following the filename format and file modified settings defined in the Settings application
### ❌ Actual Behavior
The image is "resized" but keeps its original dimensions. For lossy codecs such as JPEG, re-encoding causes a generational loss of quality.
### Other Software
_No response_
|
Issue-Bug,Needs-Triage
|
low
|
Minor
|
2,796,104,322
|
TypeScript
|
[isolatedDeclarations] Quick-fix for adding missing annotations can duplicate comments
|
### 🔎 Search Terms
isolatedDeclarations, autofixer, quick fix, quickfixer, quick-fix, duplicate, redundant, comments
### 🕗 Version & Regression Information
TS5.5+
### ⏯ Playground Link
https://www.typescriptlang.org/play/?downlevelIteration=true&importHelpers=true&target=99&module=1&isolatedDeclarations=true&ts=5.7.3#code/KYDwDg9gTgLgBAMwK4DsDGMCWEWIBQCUcA3gFBwVxo4DO8EcAvHAPQBUbcARsAtMHDYsSrTphQAbcQMh0AtMUHCWw4CgAmcKShkR5ZSpRVx1ENEgC2a+AkxQ6cGAE8wwcodFxx2gTz5QBITgQT29pODAAuWoJHCU4AC5QyXDI4DlnV3iAVmSfCL0YDJdA4QAaVmEAQwQYYCgqCAsLKoBGd0NjU3MrFHgaYGoNRxKOow44ACZuXn54p0S4AHI6KCXRTmmauoahCpVt+sbmqsmximM06NiacQBzMYBfDbhD3ZYAbjGAmCQoXAgX0eQA
### 💻 Code
```ts
export function f() {
const o = /** before */ { /* inline post-{ */ // end line post-{
// document first type
/* inline before */ x /* inline pre-colon */ : /* inline pre-type */ 5 /* inline post-type */ , // after comma1
// document second type
/** 2 before */ y : 'str' /** 2 after */, //after comma2
// pre-closing
} /** after */;
return o;
}
```
### 🙁 Actual behavior
After running the quick fix, the result includes many of the inline comments from the object literal in the added type annotation. It looks like:
```ts
export function f(): { /* inline post-{ */ // end line post-{
// document first type
/* inline before */ x: number; /* inline post-type */ // after comma1
// document second type
/** 2 before */ y: string; /** 2 after */
} {
const o = /** before */ { /* inline post-{ */ // end line post-{
// document first type
/* inline before */ x /* inline pre-colon */ : /* inline pre-type */ 5 /* inline post-type */ , // after comma1
// document second type
/** 2 before */ y : 'str' /** 2 after */, //after comma2
// pre-closing
} /** after */;
return o;
}
```
### 🙂 Expected behavior
The actual behavior isn't technically broken, but I would generally expect the generated annotation to contain only the types, with no comments carried over. Something like:
```ts
export function f(): {
x: number;
y: string;
} {
const o = /** before */ { /* inline post-{ */ // end line post-{
// document first type
/* inline before */ x /* inline pre-colon */ : /* inline pre-type */ 5 /* inline post-type */ , // after comma1
// document second type
/** 2 before */ y : 'str' /** 2 after */, //after comma2
// pre-closing
} /** after */;
return o;
}
```
### Additional information about the issue
_No response_
|
Bug,Help Wanted
|
low
|
Critical
|