| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,797,854,823
|
godot
|
[4.4beta1] When dragging a resource value, the resource details will unexpectedly expand
|
### Tested versions
4.4 beta1
### System information
Godot v4.4.beta1 - Windows 10 - Multi-window
### Issue description
> If the resource details are lengthy, you may need to scroll the UI
1. When dragging a resource, the resource details will first expand/collapse
2. If the resource is a script file, the inspector will directly switch to the script interface.
3. This variable will remember the expanded state, but I prefer that resources do not auto-expand when dragged
### before

### 4.4 beta1

### Steps to reproduce
Drag any resource value in the inspector.
### Minimal reproduction project (MRP)
N/A
|
bug,topic:editor,regression
|
low
|
Minor
|
2,797,856,412
|
deno
|
deno add fails for npm:date-fns package
|
`deno add npm:date-fns@latest` (Edit: same happens with `deno cache`)
yields the result:
```
Add npm:date-fns@4.1.0
error: Failed caching npm package 'date-fns@4.1.0'
Caused by:
Failed moving extracted tarball to final destination
```
During the operation there was a (seemingly) temporary folder in `%AppData%\Local\deno\npm\registry.npmjs.org\date-fns` containing the contents of the extracted tarball, but it was later deleted, and now only the `registry.json` remains.
```
deno 2.1.6 (stable, release, x86_64-pc-windows-msvc)
v8 13.0.245.12-rusty
typescript 5.6.2
```
|
install
|
low
|
Critical
|
2,797,857,833
|
vscode
|
SCM - Commit Message Generation with Branch Names and Project Info
|
<!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
### Feature Request: Enhanced Commit Message Generation with Branch Names and Project-Specific Information
#### Description
I would like to suggest an enhancement to GitHub Copilot in Visual Studio Code to allow the commit message generation settings to have access to branch names and project-specific information. This feature would enable more context-aware and relevant commit messages, improving the overall workflow and commit history quality.
#### Use Case
When generating commit messages, having access to the current branch name and project-specific details (such as project name, module names, etc.) can provide more meaningful and descriptive commit messages. This is particularly useful in large projects with multiple branches and modules, where context is crucial for understanding the changes.
#### Proposed Solution
1. **Access to Branch Names**: Allow Copilot to access the current branch name and include it in the commit message suggestions.
2. **Project-Specific Information**: Enable configuration to include project-specific details such as project name, module names, or other relevant metadata in the commit message generation process.
3. **Configuration Options**: Provide settings in VS Code to enable or disable these features, allowing users to customize the level of detail included in commit messages.
#### Example
Consider a branch name that includes a ticket code, such as `feature/PROJ-1234-add-login`. Copilot could extract the ticket code `PROJ-1234` and prefix the commit message with it. For instance, if the commit involves adding a login feature, the generated commit message could be:
`PROJ-1234: Add login feature`
This provides immediate context about the ticket or issue related to the commit, making it easier for team members to track changes and understand the purpose of the commit.
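The extraction step described above can be sketched in a few lines (Python stand-in; the JIRA-style `ABC-123` pattern and the branch-name convention are assumptions, not a VS Code or Copilot API):

```python
import re

def ticket_prefix(branch: str, message: str) -> str:
    """Prefix a commit message with a ticket code parsed from the branch name.

    Assumes branch names like 'feature/PROJ-1234-add-login'; the
    uppercase-key-dash-digits pattern is an assumption for illustration.
    """
    match = re.search(r"\b([A-Z][A-Z0-9]+-\d+)\b", branch)
    if match:
        return f"{match.group(1)}: {message}"
    return message

print(ticket_prefix("feature/PROJ-1234-add-login", "Add login feature"))
# → PROJ-1234: Add login feature
```

Branches without a recognizable ticket code would pass the message through unchanged, so the feature degrades gracefully.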
#### Benefits
- Improved commit message relevance and clarity.
- Enhanced context for team members reviewing commit history.
- Streamlined workflow with less manual editing of commit messages.
|
feature-request,scm
|
low
|
Major
|
2,797,860,662
|
neovim
|
SIGSEGV when using nvim over serial port at 1,000,000 baud rate
|
### Problem
So this weekend I decided to enable serial access to my server via a USB-serial cable (a PL2303); the box advertises a transfer rate of 1 Mbps (`1000000` baud).
### Steps to reproduce
### Nothing wrong case
**server**:
kernel params: `console=ttyUSB0,115200`
**client**
`picocom -b 115200 /dev/ttyUSB0`
1. login
2. start nvim
3. nothing wrong
### High baud rate case (1Mbps)
**server**
kernel params: `console=ttyUSB0,1000000`
**client**
`picocom -b 1000000 /dev/ttyUSB0`
1. login
2. start nvim
3. SIGSEGV; the terminal/prompt gets messed up, and I have to exit and kill/restart the terminal
### Mismatched baud rate
~~**server**~~
~~kernel params: `console=ttyUSB0,1000000`~~
~~**client**~~
~~`picocom -b 115200 /dev/ttyUSB0`~~
1. ~~login~~
2. ~~start nvim~~
3. ~~nvim now renders without crashing but ends up taking 100% CPU on the server and is completely unresponsive. I have to ssh into the server and pkill it (it does not respond to input like `:q`)~~
Solved in nightly
For reference in all these cases other programs like htop, worked with no issue
### Expected behavior
Setting a high baud rate should not crash Neovim or cause it to consume 100% CPU.
### Nvim version (nvim -v)
v0.10.3
### Vim (not Nvim) behaves the same?
have not tried, I only use nvim
### Operating system/version
linux 6.12.9
### Terminal name/version
kitty + ghostty
### $TERM environment variable
vt220 (set by default by systemd-getty as of systemd 219)
### Installation
system package manager for arch
|
tui,bug-crash
|
low
|
Critical
|
2,797,865,261
|
flutter
|
[google-sign-in] Canceled sign-in attempt not reflected by `onCurrentUserChanged` `Stream`
|
### What package does this bug report belong to?
google_sign_in
### What target platforms are you seeing this bug on?
iOS, Android
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
  google_identity_services_web:
    dependency: transitive
    description:
      name: google_identity_services_web
      sha256: "5be191523702ba8d7a01ca97c17fca096822ccf246b0a9f11923a6ded06199b6"
      url: "https://pub.dev"
    source: hosted
    version: "0.3.1+4"
  google_sign_in:
    dependency: "direct main"
    description:
      name: google_sign_in
      sha256: "0b8787cb9c1a68ad398e8010e8c8766bfa33556d2ab97c439fb4137756d7308f"
      url: "https://pub.dev"
    source: hosted
    version: "6.2.1"
  google_sign_in_android:
    dependency: transitive
    description:
      name: google_sign_in_android
      sha256: "0928059d2f0840f63c7b07a30cf73b593ae872cdd0dbd46d1b9ba878d2599c01"
      url: "https://pub.dev"
    source: hosted
    version: "6.1.33"
  google_sign_in_ios:
    dependency: transitive
    description:
      name: google_sign_in_ios
      sha256: "83f015169102df1ab2905cf8abd8934e28f87db9ace7a5fa676998842fed228a"
      url: "https://pub.dev"
    source: hosted
    version: "5.7.8"
  google_sign_in_platform_interface:
    dependency: transitive
    description:
      name: google_sign_in_platform_interface
      sha256: "1f6e5787d7a120cc0359ddf315c92309069171306242e181c09472d1b00a2971"
      url: "https://pub.dev"
    source: hosted
    version: "2.4.5"
```
</details>
### Steps to reproduce
1. Make a sign-in request via `GoogleSignIn#signIn()`
2. As the app user, cancel the sign-in without choosing a Google account via the UI
### Expected results
The stream returned by `GoogleSignIn#onCurrentUserChanged()` should have a `null` element emitted for implementations that rely on the stream to handle auth events
### Actual results
While the `Future` returned by `GoogleSignIn#signIn()` completes with `null`, the user changed stream does not emit a null value.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'dart:io';
import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:google_sign_in/google_sign_in.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Google Sign-in bug demo',
      theme: ThemeData(
        colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
        useMaterial3: true,
      ),
      home: const MyHomePage(title: 'Google Sign-in bug demo'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  const MyHomePage({super.key, required this.title});

  final String title;

  @override
  State<MyHomePage> createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  static final _clientId = 'google client id';
  final _googleSignIn = GoogleSignIn(clientId: _clientId);
  bool loading = false;
  bool signedIn = false;

  @override
  void initState() {
    super.initState();
    // relying on changes in the stream to update auth status
    _googleSignIn.onCurrentUserChanged.listen((acc) {
      setState(() {
        loading = false;
        signedIn = acc != null;
      });
    });
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        backgroundColor: Theme.of(context).colorScheme.inversePrimary,
        title: Text(widget.title),
      ),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: [
            if (loading) ...[
              Text('loading auth state'),
              CircularProgressIndicator(),
            ] else if (signedIn) ...[
              Text('you\'re signed in!'),
              ElevatedButton(onPressed: _doGoogleSignOut, child: Text('sign out')),
            ] else
              ElevatedButton(onPressed: _doGoogleSignIn, child: Text('sign in with google')),
          ],
        ),
      ),
    );
  }

  void _doGoogleSignIn() {
    setState(() {
      loading = true;
      // when a user cancels the auth process, the future returned by the following request will complete with `null`,
      // however, the `_googleSignIn.onCurrentUserChanged` `Stream` does not emit a new `null` value,
      // nor is an error thrown. this means that we can't use a consistent approach (i.e., relying solely on the stream)
      // to handle the canceled auth flow scenario, and have to also await this future and see if the result is null
      _googleSignIn.signIn();
    });
  }

  void _doGoogleSignOut() {
    setState(() {
      loading = true;
      _googleSignIn.signOut();
    });
  }
}
```
</details>
### Screenshots or Videos
<details open>
<summary>Screenshots / Video demonstration</summary>
https://drive.google.com/file/d/1YoO4xrlAKxRPhnxa7tnkleyRSaxoekmJ/view?usp=sharing
The video first shows that the happy path flows of sign-in and sign-out work when relying on the stream for auth updates. But then, for the Cancel flow, the stream is never updated, so the app remains in the loading state indefinitely.
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[Paste your output here]
```
</details>
|
p: google_sign_in,package,team-ecosystem,has reproducible steps,P2,triaged-ecosystem,found in release: 3.27,found in release: 3.28
|
low
|
Critical
|
2,797,875,418
|
godot
|
Rotating 3D camera parent with physics interpolation make it jump around
|
### Tested versions
Reproducible in `4.4-beta1`, `4.4-dev7`, not sure for prior versions
### System information
Windows 11
### Issue description
Rotating the camera's parent while physics interpolation is enabled on the camera shakes the camera instead of retaining its focus and only interpolating the parent rotation.
### Steps to reproduce
Attached is a minimal reproducible project which rotates the camera parent on mouse scroll and a button to toggle camera interpolation for comparison.
### Minimal reproduction project (MRP)
[rotationbug.zip](https://github.com/user-attachments/files/18470750/rotationbug.zip)
|
documentation,topic:3d
|
low
|
Critical
|
2,797,885,871
|
godot
|
[4.4beta1] Save file is not created in Play mode only if I debug the code
|
### Tested versions
4.4 Beta1 .NET using C#
### System information
Godot Engine v4.4.beta1.mono.official.d33da79d3 - https://godotengine.org Vulkan 1.3.289 - Forward Mobile - Using Device #0: NVIDIA - NVIDIA GeForce RTX 4050 Laptop GPU
### Issue description
```cs
public partial class ScoreManager : Node
{
    public static ScoreManager Instance { get; private set; }

    private uint _score;
    private uint _highScore;
    private const string ScoreFile = "user://tappy.save";

    public override void _Ready()
    {
        Instance = this;
        LoadScoreFromFile();
    }

    public override void _ExitTree()
    {
        SaveScoreToFile();
    }

    public static uint GetScore()
    {
        return Instance._score;
    }

    public static void SetScore(uint score)
    {
        Instance._score = score;
        if (Instance._score > Instance._highScore)
        {
            Instance._highScore = Instance._score;
        }
        SignalManager.EmitOnScored();
    }

    public static uint GetHighScore()
    {
        return Instance._highScore;
    }

    public static void ResetScore()
    {
        SetScore(0);
    }

    public static void IncrementScore()
    {
        SetScore(GetScore() + 1);
    }

    private void LoadScoreFromFile()
    {
        using FileAccess file = FileAccess.Open(ScoreFile, FileAccess.ModeFlags.Read);
        if (file != null)
        {
            _highScore = file.Get32();
        }
    }

    private void SaveScoreToFile()
    {
        using FileAccess file = FileAccess.Open(ScoreFile, FileAccess.ModeFlags.Write);
        if (file != null)
        {
            file.Store32(_highScore);
        }
    }
}
```
The `_highScore` is not saved in Play mode: `tappy.save` is not created when the game closes. It only works if I debug the code. I think the problem is related to the new game window.
### Steps to reproduce
Use the code that I provide and try to generate the file.
### Minimal reproduction project (MRP)
[MRPTappyPlane.zip](https://github.com/user-attachments/files/18470834/MRPTappyPlane.zip)
|
bug,topic:core,needs testing
|
low
|
Critical
|
2,797,887,738
|
neovim
|
mouse: context menu (right click) helps discover lsp/diagnostics
|
## Problem
We show diagnostics (by default) in the signs column and highlight/underline, but getting more details about the warning/error is not obvious.
<img width="411" alt="Image" src="https://github.com/user-attachments/assets/37a104db-0c4f-44c5-890c-28615e143104" />
## Expected behavior
- When user right-clicks on a diagnostic, show...
- a `Show diagnostics` menu item which opens the quickfix list
- a `Configure diagnostics` menu item which goes to help that mentions
- `vim.diagnostic.open_float()`
- `vim.diagnostic.config({ virtual_text = true })`
- Also when right-clicking the sign column item.
|
defaults,complexity:low,lsp
|
low
|
Critical
|
2,797,888,358
|
next.js
|
Image component crashes in combination with turbopack for webp images. Processing image failed unable to decode image data
|
### Link to the code that reproduces this issue
https://github.com/KilianB/Turbopack-webp-bug
### To Reproduce
1. clear the .next cache folder if applicable.
2. run `npm run dev` and the application will crash
---
3. clear cache
4. run without --turbopack
5. Observe that it works
### Current vs. Expected behavior
Expected: the image is shown on the page and the app does not crash when using Turbopack. Actual:
````
Processing image failed
unable to decode image data
Caused by:
- Format error decoding WebP: An expected chunk was missing
- An expected chunk was missing
````
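The decoder error refers to the RIFF chunk layout of a WebP file: a valid container is `RIFF <size> WEBP` followed by chunks such as `VP8 `, `VP8L`, or `VP8X`, and the decoder complains when an expected chunk (e.g. the image data) is absent or truncated. A small sketch that lists the chunks a decoder would see (synthetic bytes for illustration, not Turbopack code):

```python
import struct

def riff_chunks(data: bytes) -> list:
    """List the FourCC chunk ids inside a RIFF/WEBP container."""
    assert data[:4] == b"RIFF" and data[8:12] == b"WEBP"
    chunks, pos = [], 12
    while pos + 8 <= len(data):
        fourcc = data[pos:pos + 4].decode("ascii")
        (size,) = struct.unpack("<I", data[pos + 4:pos + 8])
        chunks.append(fourcc)
        pos += 8 + size + (size & 1)  # chunk payloads are padded to even length
    return chunks

# Synthetic container holding only a metadata-style chunk and no image data,
# which is the kind of file that triggers "an expected chunk was missing":
payload = b"ICCP" + struct.pack("<I", 4) + b"\x00" * 4
data = b"RIFF" + struct.pack("<I", 4 + len(payload)) + b"WEBP" + payload
print(riff_chunks(data))  # → ['ICCP']  (no VP8/VP8L image chunk present)
```

If Turbopack's image pipeline re-emits the file, a tool like this can help confirm whether the bytes handed to the decoder were truncated.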

### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 10 Pro
Available memory (MB): 32699
Available CPU cores: 4
Binaries:
Node: 22.12.0
npm: 10.8.1
Yarn: 1.22.21
pnpm: 9.7.1
Relevant Packages:
next: 15.2.0-canary.16 // Latest available version is detected (15.2.0-canary.16).
eslint-config-next: N/A
react: 19.0.0
react-dom: 19.0.0
typescript: 5.7.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack, Image (next/image)
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_
|
Image (next/image),Turbopack
|
low
|
Critical
|
2,797,897,179
|
vscode
|
Copilot Pro getting rate limited
|
Type: <b>Bug</b>
I upgraded to Copilot Pro, and after 10 minutes of working with Copilot I got rate limited.
VS Code version: Code 1.96.4 (Universal) (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Darwin arm64 24.2.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M1 Max (10 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|3, 4, 4|
|Memory (System)|64.00GB (14.32GB free)|
|Process Argv|--crash-reporter-id 87fd705a-eabb-4d9e-be10-2cb9c6cb2efa|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (27)</summary>
Extension|Author (truncated)|Version
---|---|---
vscode-sqlite|ale|0.14.1
vscode-django|bat|1.15.0
python-environment-manager|don|1.2.7
python-extension-pack|don|1.7.0
gitlens|eam|16.2.0
prettier-vscode|esb|11.0.0
vscode-mysql|for|0.5.0
copilot|Git|1.257.0
copilot-chat|Git|0.23.2
vsc-python-indent|Kev|1.19.0
rainbow-csv|mec|3.14.0
data-workspace-vscode|ms-|0.5.0
mssql|ms-|1.27.0
sql-bindings-vscode|ms-|0.4.0
sql-database-projects-vscode|ms-|1.4.5
debugpy|ms-|2024.15.2025011702
flake8|ms-|2023.10.0
python|ms-|2024.23.2025011501
vscode-pylance|ms-|2024.12.1
autodocstring|njp|0.6.1
pyDocGenAI|pyD|1.0.2
LiveServer|rit|5.7.9
pdf|tom|1.2.2
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
jinja|who|0.0.8
vscode-sqlite3-editor|yy0|1.0.200
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368cf:30146710
vspor879:30202332
vspor708:30202333
vspor363:30204092
pythonvspyt551cf:31179979
vscod805:30301674
binariesv615:30325510
vsaa593cf:30376535
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupytercf:31046870
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc2:31192216
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
dwoutputs:31217127
```
</details>
<!-- generated by issue reporter -->
|
info-needed
|
low
|
Critical
|
2,797,898,643
|
neovim
|
Lua: get path of current script/module file, show Lua plugins in :scriptnames
|
### Problem
In a Lua plugin, when trying to get the path to some file within the plugin dir (e.g. to load a JSON file from the plugin dynamically), there *seems* to be no obvious / intuitive way to do that. `nvim_get_runtime_file()` looks in a lot of places and you kind of have to hope that there is no naming conflict. Looking into how other plugins solve that, the only other way I found is the following snippet within `telescope` for their planets picker:
```lua
local sourced_file = require("plenary.debug_utils").sourced_filepath()
local base_directory = vim.fn.fnamemodify(sourced_file, ":h:h:h:h")
local globbed_files = vim.fn.globpath(base_directory .. "/data/memes/planets/", "*", true, true)
```
with `plenary.debug_utils.sourced_filepath()` relying on `debug.getinfo()`, which seems undesirable.
### Expected behavior
When coming from other scripting environments (JS, Python, Ruby...), one's intuition might be to look for some way to get the path of the current script file (via some function or magic global), to append a relative path onto. Maybe it would be possible to expose the path the loader found when loading the lua file to the script in the file somehow?
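For comparison, the intuition described above as it exists in Python: the interpreter exposes the current file's path as `__file__`, so a module can resolve data files relative to itself without any runtime-path search (file names here are hypothetical):

```python
import json
import pathlib

# Python's answer to "where is the file currently executing?": __file__.
# A plugin-style module can resolve data files relative to itself, with no
# runtime-path lookup and no possibility of a naming conflict.
HERE = pathlib.Path(__file__).resolve().parent

def load_planets():
    """Load a JSON data file shipped next to this module (hypothetical file)."""
    return json.loads((HERE / "data" / "planets.json").read_text())
```

Exposing the equivalent of `HERE` to a sourced Lua file is essentially what the issue asks Nvim's loader to do.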
|
plugin,has:workaround,lua
|
medium
|
Critical
|
2,797,898,686
|
neovim
|
RPC: plugins can handle incoming RPC requests
|
### Problem
In-process (Lua) plugins cannot define RPC methods.
### Expected behavior
Plugins can handle incoming RPC requests. For example if a peer calls the "foo" method, then Nvim will route the "foo" RPC call to the handler defined by the plugin.
This could be designed as a `RpcRequest` event (autocmd), given:
- Events gain the ability to take parameters and return a result
- Events like `RpcRequest` allow only 1 handler per "pattern".
- Example: a plugin owns the "foo" RPC call, so when it defines an event handler for the "foo" pattern, any other "foo" handler will be destroyed.
```
nvim.on('rpcrequest', 'foo', { exclusive=true }, function() ... end)
```
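The proposed "one exclusive handler per method" routing can be sketched as follows (a Python stand-in for the suggested Nvim semantics; class and method names are hypothetical):

```python
from typing import Any, Callable

class RpcRouter:
    """Route incoming RPC requests to at most one handler per method name.

    Mirrors the exclusive=True semantics proposed above: registering a
    handler for "foo" replaces (destroys) any previous "foo" handler.
    """
    def __init__(self) -> None:
        self._handlers = {}  # method name -> handler

    def on(self, method: str, handler: Callable[..., Any]) -> None:
        # exclusive semantics: the last registration wins
        self._handlers[method] = handler

    def dispatch(self, method: str, *args: Any) -> Any:
        if method not in self._handlers:
            raise KeyError(f"no handler for RPC method {method!r}")
        return self._handlers[method](*args)

router = RpcRouter()
router.on("foo", lambda x: x + 1)
router.on("foo", lambda x: x * 2)  # replaces the first handler
print(router.dispatch("foo", 21))  # → 42
```

An autocmd-based design would hang this dispatch off the channel's request callback, with the "pattern" playing the role of the method name.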
|
api,channels-rpc,remote-plugin,remote,events
|
low
|
Minor
|
2,797,924,474
|
ollama
|
MacApp fails to build when building from source
|
### What is the issue?
I cloned the repo and tried to build the Mac app, but the build fails: it can't find `webpack.main.config`.
There's a webpack.main.config.ts file but that's not the file referenced. I tried to fix it myself and fell down a rabbit hole.
I'm just bringing this to the attention of whomever is maintaining it.
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
current head
|
bug
|
low
|
Minor
|
2,797,945,259
|
pytorch
|
DISABLED test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_grad_True (__main__.TestFxGraphCache)
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_grad_True&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35845055086).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 5 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_grad_True`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 146, in test_cache_load_function
self.assertEqual(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 14 but got 35.
Absolute difference: 21
Relative difference: 1.5
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_grad_True
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
|
module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor
|
low
|
Critical
|
2,797,945,264
|
pytorch
|
DISABLED test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_False (__main__.TestFxGraphCache)
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_False&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35845021511).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 14 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_False`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 266, in test_remote_cache_load_function
self.assertEqual(global_stats.fx_graph, Stats(1, 3, 1))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Object comparison failed: _GlobalItemStats(num_put=2, num_get_hit=2, num_get_miss=2) != Stats(num_put=1, num_get_hit=3, num_get_miss=1)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
|
module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor
|
low
|
Critical
|
2,797,945,278
|
pytorch
|
DISABLED test_reorder_peak_memory_dfs (__main__.TestOperatorReorderForPeakMemory)
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_reorder_peak_memory_dfs&suite=TestOperatorReorderForPeakMemory&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35845054777).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_reorder_peak_memory_dfs`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_memory.py", line 200, in test_reorder_peak_memory_dfs
.run(code)
RuntimeError: Expected to find "buf3 = " but did not find it
Searched string:
stream0 = get_raw_stream(0)
triton_red_fused_sum_2.run(buf4, buf6, 1, 2048, grid=grid(1), stream=stream0)
buf1 = buf4; del buf4 # reuse
# Topologically Sorted Source Nodes: [t2], Original ATen: [aten.mm]
extern_kernels.mm(primals_2, primals_3, out=buf1)
del primals_3
buf5 = empty_strided_cuda((2048, 10), (10, 1), torch.float32)
# Topologically Sorted Source Nodes: [t4], Original ATen: [aten.mm]
extern_kernels.mm(buf1, primals_5, out=buf5)
buf7 = empty_strided_cuda((3, ), (1, ), torch.float32)
# Topologically Sorted Source Nodes: [sum_2], Original ATen: [aten.sum]
stream0 = get_raw_stream(0)
triton_red_fused_sum_3.run(buf5, buf7, 3, 6827, grid=grid(3), stream=stream0)
del buf5
buf9 = buf6; del buf6 # reuse
# Topologically Sorted Source Nodes: [sum_2, add], Original ATen: [aten.sum, aten.add]
stream0 = get_raw_stream(0)
triton_per_fused_add_sum_4.run(buf9, buf7, 1, 3, grid=grid(1), stream=stream0)
del buf7
return (buf9, primals_2, reinterpret_tensor(buf1, (1, 2048), (1, 1), 0), reinterpret_tensor(primals_5, (10, 1), (1, 10), 0), reinterpret_tensor(buf0, (10, 2048), (1, 10), 0), reinterpret_tensor(primals_4, (1, 10), (1, 1), 0), )
def benchmark_compiled_module(times=10, repeat=10):
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance
primals_1 = rand_strided((1, 10), (10, 1), device='cuda:0', dtype=torch.float32)
primals_2 = rand_strided((2048, 1), (1, 1), device='cuda:0', dtype=torch.float32)
primals_3 = rand_strided((1, 1), (1, 1), device='cuda:0', dtype=torch.float32)
primals_4 = rand_strided((10, 1), (1, 1), device='cuda:0', dtype=torch.float32)
primals_5 = rand_strided((1, 10), (10, 1), device='cuda:0', dtype=torch.float32)
fn = lambda: call([primals_1, primals_2, primals_3, primals_4, primals_5])
return print_performance(fn, times=times, repeat=repeat)
if __name__ == "__main__":
from torch._inductor.wrapper_benchmark import compiled_module_main
compiled_module_main('None', benchmark_compiled_module)
From CHECK: buf3 =
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_memory.py TestOperatorReorderForPeakMemory.test_reorder_peak_memory_dfs
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_memory.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
|
module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor
|
low
|
Critical
|
2,797,945,309
|
pytorch
|
DISABLED test_aoti_eager_cache_hit_dynamic_shapes_cuda (__main__.DynamicShapesCodegenGPUTests)
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aoti_eager_cache_hit_dynamic_shapes_cuda&suite=DynamicShapesCodegenGPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35845054943).
Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aoti_eager_cache_hit_dynamic_shapes_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 1093, in test_aoti_eager_cache_hit
res_value = getattr(torch.ops.aten, op_name)(input_tensor)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_ops.py", line 1158, in __call__
return self._op(*args, **(kwargs or {}))
RuntimeError: aot_compile_function.ptr() != nullptr && aot_compile_function.ptr() != Py_None INTERNAL ASSERT FAILED at "/var/lib/jenkins/workspace/torch/csrc/inductor/aoti_eager/kernel_holder.cpp":507, please report a bug to PyTorch. Failed to import - torch._inductor.aoti_eager.aoti_compile_with_persistent_cache
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor_codegen_dynamic_shapes.py DynamicShapesCodegenGPUTests.test_aoti_eager_cache_hit_dynamic_shapes_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor_codegen_dynamic_shapes.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
|
module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor
|
low
|
Critical
|
2,797,945,642
|
pytorch
|
DISABLED test_mm_concat_cuda (__main__.FreezingGpuTests)
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_mm_concat_cuda&suite=FreezingGpuTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35843835162).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 9 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_mm_concat_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_inductor_freezing.py", line 336, in test_mm_concat
).run(code[0])
RuntimeError: Expected to not find "triton.jit" but found it
min_elem_per_thread=0
)
@triton.jit
~~~~~~~~~~ <--- HERE
def triton_poi_fused_mm_0(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):
xnumel = 144
From CHECK-NOT: triton.jit
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_inductor_freezing.py FreezingGpuTests.test_mm_concat_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_inductor_freezing.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
|
module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor
|
low
|
Critical
|
2,797,945,683
|
pytorch
|
DISABLED test_sdpa_rewriter_12_cuda (__main__.SDPAPatternRewriterCudaTests)
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_sdpa_rewriter_12_cuda&suite=SDPAPatternRewriterCudaTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35845055086).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_sdpa_rewriter_12_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_fused_attention.py", line 612, in _test_sdpa_rewriter_12
self._check_common(dot_prod_attention, contains=False, has_dropout=True)
File "/var/lib/jenkins/pytorch/test/inductor/test_fused_attention.py", line 85, in _check_common
self.assertGreaterEqual(counters["inductor"]["fuse_attention"], 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1250, in assertGreaterEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: 0 not greater than or equal to 1
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_fused_attention.py SDPAPatternRewriterCudaTests.test_sdpa_rewriter_12_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_fused_attention.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
|
module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor
|
low
|
Critical
|
2,797,945,728
|
pytorch
|
DISABLED test_sdpa_rewriter_12_cuda (__main__.SDPAPatternRewriterCudaDynamicTests)
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_sdpa_rewriter_12_cuda&suite=SDPAPatternRewriterCudaDynamicTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35844263142).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 9 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_sdpa_rewriter_12_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_fused_attention.py", line 612, in _test_sdpa_rewriter_12
self._check_common(dot_prod_attention, contains=False, has_dropout=True)
File "/var/lib/jenkins/pytorch/test/inductor/test_fused_attention.py", line 85, in _check_common
self.assertGreaterEqual(counters["inductor"]["fuse_attention"], 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 1250, in assertGreaterEqual
self.fail(self._formatMessage(msg, standardMsg))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 675, in fail
raise self.failureException(msg)
AssertionError: 0 not greater than or equal to 1
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_fused_attention.py SDPAPatternRewriterCudaDynamicTests.test_sdpa_rewriter_12_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_fused_attention.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
|
module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor
|
low
|
Critical
|
2,797,945,778
|
pytorch
|
DISABLED test_slice_scatter_reinplace_cuda (__main__.GPUTests)
|
Platforms: rocm, inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_slice_scatter_reinplace_cuda&suite=GPUTests&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35845342970).
Over the past 3 hours, it has been determined flaky in 9 workflow(s) with 12 failures and 9 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_slice_scatter_reinplace_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 7999, in test_slice_scatter_reinplace
assertGeneratedKernelCountEqual(self, 1)
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 727, in assertGeneratedKernelCountEqual
self.assertEqual(torch._inductor.metrics.generated_kernel_count, expected)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 1 but got 2.
Absolute difference: 1
Relative difference: 1.0
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_torchinductor.py GPUTests.test_slice_scatter_reinplace_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_torchinductor.py`
cc @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
|
triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor
|
low
|
Critical
|
2,797,945,833
|
pytorch
|
DISABLED test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True (__main__.TestFxGraphCache)
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35845055086).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 266, in test_remote_cache_load_function
self.assertEqual(global_stats.fx_graph, Stats(1, 3, 1))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Object comparison failed: _GlobalItemStats(num_put=2, num_get_hit=2, num_get_miss=2) != Stats(num_put=1, num_get_hit=3, num_get_miss=1)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
|
module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor
|
low
|
Critical
|
2,797,945,867
|
pytorch
|
DISABLED test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True (__main__.TestFxGraphCache)
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35845055086).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 266, in test_remote_cache_load_function
self.assertEqual(global_stats.fx_graph, Stats(1, 3, 1))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Object comparison failed: _GlobalItemStats(num_put=2, num_get_hit=2, num_get_miss=2) != Stats(num_put=1, num_get_hit=3, num_get_miss=1)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
|
module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor
|
low
|
Critical
|
2,797,945,908
|
pytorch
|
DISABLED test_mixed_mm (__main__.TestPatternMatcher)
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_mixed_mm&suite=TestPatternMatcher&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35845054943).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_mixed_mm`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_pattern_matcher.py", line 346, in test_mixed_mm
self._test_mixed_impl(fn, args, True, False)
File "/var/lib/jenkins/pytorch/test/inductor/test_pattern_matcher.py", line 316, in _test_mixed_impl
self.assertEqual("mixed_mm" in code, mixed_mm_expected)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Booleans mismatch: False is not True
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_pattern_matcher.py TestPatternMatcher.test_mixed_mm
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_pattern_matcher.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
|
module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor
|
low
|
Critical
|
2,797,946,135
|
pytorch
|
DISABLED test_reuse_kernel_cuda (__main__.AOTInductorTestABICompatibleGpu)
|
Platforms: rocm, inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_reuse_kernel_cuda&suite=AOTInductorTestABICompatibleGpu&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35845021672).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_reuse_kernel_cuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_torchinductor.py", line 12376, in new_test
return value(self)
File "/var/lib/jenkins/pytorch/test/inductor/test_aot_inductor.py", line 1824, in test_reuse_kernel
self.code_check_count(
File "/var/lib/jenkins/pytorch/test/inductor/test_aot_inductor_utils.py", line 245, in code_check_count
).run(src_code)
RuntimeError: Expected to find "triton_poi_fused_sin_0 = loadKernel(" but did not find it
Searched string:
#include <torch/csrc/inductor/aoti_runtime/interface.h>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
#include <torch/csrc/inductor/aoti_runtime/model.h>
// Definition of AOTI runtime interface functions
From CHECK-COUNT-1: triton_poi_fused_sin_0 = loadKernel(
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_aot_inductor.py AOTInductorTestABICompatibleGpu.test_reuse_kernel_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_aot_inductor.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
|
triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor
|
low
|
Critical
|
2,797,964,524
|
deno
|
compile --no-terminal results in panic
|
Version: Deno 2.1.6
OS: Windows
I made [a comment](https://github.com/denoland/deno/issues/13107#issuecomment-2597301636) explaining a workaround for an issue, but I think this might deserve to be its own issue.
The TLDR is that executables compiled with `--no-terminal` panic unless explicitly given file redirects. Here's an example stderr:
<details>
<summary>
stderr
</summary>
```
thread 'main' panicked at runtime\worker.rs:711:7:
Bootstrap exception: ReferenceError: null or invalid handle
at guessHandleType (ext:deno_node/internal_binding/util.ts:38:16)
at _guessStdinType (ext:deno_node/_process/streams.mjs:137:10)
at initStdin (ext:deno_node/_process/streams.mjs:162:38)
at Object.internals.__bootstrapNodeProcess (node:process:683:22)
at initialize (ext:deno_node/02_init.js:34:15)
at bootstrapMainRuntime (ext:runtime_main/js/99_main.js:892:7)
stack backtrace:
0: 0x7ff790b32885 - node_api_get_module_file_name
1: 0x7ff78fa1e649 - uv_mutex_destroy
2: 0x7ff790b11917 - node_api_get_module_file_name
3: 0x7ff790b359c1 - node_api_get_module_file_name
4: 0x7ff790b36936 - node_api_get_module_file_name
5: 0x7ff790b36396 - node_api_get_module_file_name
6: 0x7ff790b362ef - node_api_get_module_file_name
7: 0x7ff790b362d6 - node_api_get_module_file_name
8: 0x7ff79259fd84 - CrashForExceptionInNonABICompliantCodeRange
9: 0x7ff78f9763fa - uv_mutex_destroy
10: 0x7ff78f6b184d - <unknown>
11: 0x7ff78f79db95 - <unknown>
12: 0x7ff78f9bb8a5 - uv_mutex_destroy
13: 0x7ff78f80cc47 - uv_mutex_destroy
14: 0x7ff78f9a13c1 - uv_mutex_destroy
15: 0x7ff78f78fc60 - <unknown>
16: 0x7ff78f9bb987 - uv_mutex_destroy
17: 0x7ff79254d67c - CrashForExceptionInNonABICompliantCodeRange
18: 0x7ffd5c7b7c24 - BaseThreadInitThunk
19: 0x7ffd5d58d4d1 - RtlUserThreadStart
thread 'main' panicked at C:\a\deno\deno\runtime\tokio_util.rs:111:36:
called `Result::unwrap()` on an `Err` value: JoinError::Panic(Id(1), ...)
stack backtrace:
0: 0x7ff790b32885 - node_api_get_module_file_name
1: 0x7ff78fa1e649 - uv_mutex_destroy
2: 0x7ff790b11917 - node_api_get_module_file_name
3: 0x7ff790b359c1 - node_api_get_module_file_name
4: 0x7ff790b36936 - node_api_get_module_file_name
5: 0x7ff790b36396 - node_api_get_module_file_name
6: 0x7ff790b362ef - node_api_get_module_file_name
7: 0x7ff790b362d6 - node_api_get_module_file_name
8: 0x7ff79259fd84 - CrashForExceptionInNonABICompliantCodeRange
9: 0x7ff7925a01a0 - CrashForExceptionInNonABICompliantCodeRange
10: 0x7ff78f9a3630 - uv_mutex_destroy
11: 0x7ff78f78fc60 - <unknown>
12: 0x7ff78f9bb987 - uv_mutex_destroy
13: 0x7ff79254d67c - CrashForExceptionInNonABICompliantCodeRange
14: 0x7ffd5c7b7c24 - BaseThreadInitThunk
15: 0x7ffd5d58d4d1 - RtlUserThreadStart
```
</details>
|
compile,panic
|
low
|
Critical
|
2,797,971,304
|
PowerToys
|
Power Rename button ready for Enter key.
|
### Description of the new feature / enhancement
In older versions, before the facelift, PowerRename had the Rename button focused so you could rename your selection quickly by pressing Enter.
Now (last time I checked, about a year ago) it requires clicking the button with the mouse or pressing Tab multiple times.
Please (re)implement this quality-of-life feature.
Sorry if this has already been changed, but I don't want to uninstall and reinstall just to check against the older PowerToys version I'm using.
✌
### Scenario when this would be used?
All the time for power users for PowerRename.
Older version (v.0.29.3) behaved as mentioned.
### Supporting information
_No response_
|
Needs-Triage
|
low
|
Minor
|
2,798,005,351
|
deno
|
`Temporal.ZonedDateTime.prototype.getTimeZoneTransition` not implemented
|
Version: Deno 2.1.6
Despite being documented [here](https://docs.deno.com/api/web/~/Temporal.ZonedDateTime.prototype.getTimeZoneTransition), the `getTimeZoneTransition` function does not seem to actually be available in deno.
```
> Temporal.Now.zonedDateTimeISO().getTimeZoneTransition('next')
Uncaught TypeError: Temporal.Now.zonedDateTimeISO(...).getTimeZoneTransition is not a function
at <anonymous>:1:54
```
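Until the method is actually available, callers can feature-detect it before use. A minimal TypeScript sketch of that idea (the helper name and `null` fallback are illustrative assumptions, not Deno or Temporal API):

```typescript
// Hedged sketch: feature-detect getTimeZoneTransition before calling it,
// returning null on runtimes (like Deno 2.1.6 here) where it is missing.
// `zdt` is typed loosely since Temporal types may be absent entirely.
function nextTransitionOrNull(zdt: unknown): unknown {
  const candidate = (zdt as { getTimeZoneTransition?: (d: string) => unknown })
    .getTimeZoneTransition;
  if (typeof candidate === "function") {
    return candidate.call(zdt, "next");
  }
  return null; // caller must handle the "no transition info" case
}
```

This only papers over the missing function; the underlying gap between the documented and implemented API still needs fixing upstream.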
|
bug,upstream,v8
|
low
|
Critical
|
2,798,018,385
|
deno
|
Support NODE_EXTRA_CA_CERTS
|
Avoid special behavior as seen here
https://blog.disintegrator.dev/posts/http2-support-in-js-runtimes/
|
feat,tls
|
low
|
Minor
|
2,798,046,493
|
rust
|
16k-aligned statics crash rustc on Windows
|
```rust
#[repr(align(16384))]
struct HighAlignment;
static EXAMPLE: HighAlignment = const { HighAlignment };
fn main() {}
```
```
$ cargo build --target x86_64-pc-windows-gnu
...
error: could not compile `cringe` (bin "cringe"); 2 warnings emitted
Caused by:
process didn't exit successfully: `/home/purplesyringa/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/bin/rustc --crate-name cringe --edition=2021 src/main.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --diagnostic-width=211 --crate-type bin --emit=dep-info,link -C embed-bitcode=no -C debuginfo=2 --check-cfg 'cfg(docsrs,test)' --check-cfg 'cfg(feature, values())' -C metadata=2b980327f571f145 -C extra-filename=-97496ab9b6118996 --out-dir /home/purplesyringa/cringe/target/x86_64-pc-windows-gnu/debug/deps --target x86_64-pc-windows-gnu -C incremental=/home/purplesyringa/cringe/target/x86_64-pc-windows-gnu/debug/incremental -L dependency=/home/purplesyringa/cringe/target/x86_64-pc-windows-gnu/debug/deps -L dependency=/home/purplesyringa/cringe/target/debug/deps` (signal: 4, SIGILL: illegal instruction)
```
Also reproduces on other ABIs, also gives `STATUS_ILLEGAL_INSTRUCTION` when building on Windows. Also applies to thread locals. Something something `IMAGE_SCN_ALIGN_` stops at 8k and LLVM fails an assertion?
### Meta
`rustc --version --verbose`:
```
rustc 1.86.0-nightly (419b3e2d3 2025-01-15)
binary: rustc
commit-hash: 419b3e2d3e350822550eee0e82eeded4d324d584
commit-date: 2025-01-15
host: x86_64-unknown-linux-gnu
release: 1.86.0-nightly
LLVM version: 19.1.6
```
@rustbot label +I-crash +T-compiler +A-LLVM +E-needs-investigation +O-windows
|
I-crash,A-LLVM,O-windows,T-compiler,C-bug,A-repr,E-needs-investigation,A-align
|
low
|
Critical
|
2,798,069,989
|
PowerToys
|
Error when copying content to clipboard
|
### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
General
### Steps to reproduce
Step:
Open input panel via shortcut -> type d -> select date format -> press Enter to copy -> error prompt (actual copy was successful)


[PowerToysReport_2025-01-20-11-03-46.zip](https://github.com/user-attachments/files/18471825/PowerToysReport_2025-01-20-11-03-46.zip)
### ✔️ Expected Behavior
Copy successful without error prompt
### ❌ Actual Behavior
error prompt (actual copy was successful)
### Other Software
_No response_
|
Issue-Bug,Needs-Triage
|
low
|
Critical
|
2,798,120,326
|
PowerToys
|
Workspace won't properly launch steam games
|
### Microsoft PowerToys version
0.87.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Workspaces
### Steps to reproduce
Added VRChat to a workspace, but it doesn't properly launch the game through Steam quick access, so the game goes into offline mode.
### ✔️ Expected Behavior
Game launching with steam and in online mode.
### ❌ Actual Behavior
Game launching without steam and in offline mode.
### Other Software
_No response_
|
Issue-Bug,Needs-Triage,Product-Workspaces
|
low
|
Minor
|
2,798,133,774
|
ui
|
[bug]:Error: React.Children.only expected to receive a single React element child.
|
### Describe the bug
Error: React.Children.only expected to receive a single React element child.
The error is in the dropdown-menu.tsx file. I have changed the React.ElementRef... to React.ComponentRef...
I have "react": "^18.3.1", "typescript": "^5", nextjs 15
### Affected component/components
Dropdown Menu
### How to reproduce
1. Go to http://localhost:3001/dashboard
2. Click on profile picture
3. Get this error Unhandled Runtime Error
Error: React.Children.only expected to receive a single React element child.
### Codesandbox/StackBlitz link
_No response_
### Logs
```bash
hook.js:608 Error: React.Children.only expected to receive a single React element child.
The above error occurred in the <SlotClone> component. It was handled by the <ReactDevOverlay> error boundary. Error Component Stack
at _c6 (C:\Users\monic\OneDr…pdown-menu.tsx:82:6)
at div (<anonymous>)
at div (<anonymous>)
at _c4 (C:\Users\monic\OneDr…pdown-menu.tsx:62:6)
at div (<anonymous>)
at header (<anonymous>)
at div (<anonymous>)
at div (<anonymous>)
at layout [Server] (<anonymous>)
at ThemeProvider (C:\Users\monic\OneDr…emeProvider.tsx:7:3)
at body (<anonymous>)
at html (<anonymous>)
at RootLayout [Server] (<anonymous>)
```
### System Info
```bash
Using Google Chrome
```
### Before submitting
- [x] I've made research efforts and searched the documentation
- [x] I've searched for existing issues
|
bug
|
low
|
Critical
|
2,798,176,005
|
ollama
|
Why do I keep getting "@@@@" as responses?
|
### What is the issue?
I have attached a screenshot of what is happening. I have an Nvidia 980M with 4 GB VRAM, running the latest versions of Windows 10 and Ollama.

### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.7
|
bug
|
low
|
Major
|
2,798,197,635
|
ollama
|
Requesting this new multimodal model.
|
Please add the openbmb/MiniCPM-o-2_6 Model.
https://huggingface.co/openbmb/MiniCPM-o-2_6
|
model request
|
low
|
Major
|
2,798,211,358
|
excalidraw
|
Need website whitelisted
|
Need weather.gov whitelisted. I am using this as a teacher.
|
whitelist
|
low
|
Minor
|
2,798,213,913
|
angular
|
Input signal with default value is undefined in the template
|
### Which @angular/* package(s) are the source of the bug?
core
### Is this a regression?
No
### Description
Hello,
in my component I use input signal with a default value provided by a static method:
```
symbolProfile = input<SymbolProfile>(SymbolProfileMaker.create());
```
in the template I am obliged to check the undefined before using the value:
```
<p-accordion class="block mt-5">
<p-accordionTab header="{{ symbolProfile() && symbolProfile().companyName }} Overview">
<span class="accordion-description">{{ symbolProfile() && symbolProfile().description }}</span>
</p-accordionTab>
</p-accordion>
```
I would have thought that providing a default value would spare me the undefined check, but it seems the default value in the input signal is applied too late.
If I log the input signal value in the constructor I can see the default value.
Could you please tell me what I am doing wrong or why I need to check for undefined?
Thanks in advance.
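For reference, the expected semantics can be sketched without Angular at all — a minimal stand-in (not Angular's real implementation) where the default is visible from the very first read:

```typescript
// Minimal stand-in for an input signal with a default value: the getter
// returns the default immediately, before any bound value arrives.
function inputWithDefault<T>(defaultValue: T): (() => T) & { set(v: T): void } {
  let current = defaultValue;
  const read = () => current;
  return Object.assign(read, { set: (v: T) => { current = v; } });
}
```

If Angular's template can observe the input before the default is applied, that points at framework timing rather than the component code; common mitigations are `input.required<T>()` (Angular 17.1+) or optional chaining in the template, e.g. `symbolProfile()?.companyName`.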
### Please provide a link to a minimal reproduction of the bug
_No response_
### Please provide the exception or error you saw
```true
```
### Please provide the environment you discovered this bug in (run `ng version`)
```true
Angular CLI: 18.1.3
Node: 22.6.0
Package Manager: npm 10.8.2
OS: darwin arm64
Angular: 18.1.3
... animations, cdk, cli, common, compiler, compiler-cli, core
... forms, localize, platform-browser, platform-browser-dynamic
... router
Package Version
---------------------------------------------------------
@angular-devkit/architect 0.1801.3
@angular-devkit/build-angular 18.1.3
@angular-devkit/core 18.1.3
@angular-devkit/schematics 18.1.3
@schematics/angular 18.1.3
rxjs 7.8.1
typescript 5.5.4
zone.js 0.14.10
```
### Anything else?
_No response_
|
needs reproduction,area: core
|
low
|
Critical
|
2,798,230,005
|
vscode
|
python installation failing in VS code
|
Hi VS Code Team,
We are not able to install Python and extensions in VS Code. All users are facing the same issue. Could you please fix the issue as soon as possible?
Error: Failed to install 'ms-python.python'.
Thanks & Regards,
Ram
|
info-needed
|
low
|
Critical
|
2,798,239,484
|
material-ui
|
Codemod for migrating deprecated MUI APIs does not work with custom design systems
|
I am currently working with a custom design system that wraps MUI components, and I've encountered an issue when trying to migrate from deprecated MUI APIs using the codemod tool.
When I run the following command:
```
npx @mui/codemod@latest deprecations/all <path>
```
It does not work properly because my codebase imports components from `@my-ui-library`, which is a wrapper around MUI's components, instead of directly importing from `@mui/material`.
The codemod only works when I change my imports to `@mui/material`.
Any workaround? Thanks
**Environment**:
- MUI version: 6.4.0
|
new feature,package: codemod
|
low
|
Minor
|
2,798,248,064
|
flutter
|
Exception: Cannot read bytes from Blob. Is it still available?
|
### Steps to reproduce
In Flutter web, I get the error "Cannot read bytes from Blob. Is it still available?" at the line `await file.readAsBytes();`
when the file size is larger than 2 GB.
### Expected results
File bytes should be readable in Flutter web even when the file size is larger than 2 GB.
### Actual results
Small files work fine, but accessing large files (around 2 GB) throws the error above.
### Code sample
<details open><summary>Code sample</summary>
```dart
// Example: fragment of a StatefulWidget's State class
bool isDragging = false;
String statusMessage = 'Drop files here';
int totalBytesProcessed = 0;
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text('Large File Drop Example'),
),
body: Center(
child: DropTarget(
onDragEntered: (details) {
setState(() {
isDragging = true;
});
},
onDragExited: (details) {
setState(() {
isDragging = false;
});
},
onDragDone: (details) async {
await _handleFileDrop(details);
},
child: Container(
width: 400,
height: 200,
decoration: BoxDecoration(
border: Border.all(
color: isDragging ? Colors.blue : Colors.grey,
width: 2,
),
),
child: Center(
child: Text(
statusMessage,
style: TextStyle(fontSize: 16),
textAlign: TextAlign.center,
),
),
),
),
),
);
}
Future<void> _handleFileDrop(DropDoneDetails dropDoneDetails) async {
if (dropDoneDetails.files.isNotEmpty) {
final file = dropDoneDetails.files.first;
log('Processing file: ${file.name}');
setState(() {
statusMessage = 'Processing file: ${file.name}';
totalBytesProcessed = 0;
});
try {
final bytes = await file.readAsBytes();
totalBytesProcessed = bytes.length;
// await _processFileInChunks(file as html.File);
setState(() {
statusMessage =
'File processed successfully! Total bytes: $totalBytesProcessed';
log('File processed successfully! Total bytes: $totalBytesProcessed');
});
} catch (e) {
log('Error processing file: $e');
setState(() {
statusMessage = 'Error processing file: $e';
});
}
}
}
```
</details>
### Screenshots or Video
_No response_
### Logs
<details open><summary>Logs</summary>
```console
Restarted application in 227ms.
[log] Processing file: test 2.zip
[log] Error processing file: Exception: Cannot read bytes from Blob. Is it still available?
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
/Volumes/InsomniacsAppleSSD2/flutter3.24.5/bin/flutter doctor --verbose
[!] Flutter (Channel stable, 3.24.5, on macOS 14.6.1 23G93 darwin-arm64 (Rosetta), locale en-IN)
• Flutter version 3.24.5 on channel stable at /Volumes/InsomniacsAppleSSD2/flutter3.24.5
! The flutter binary is not on your path. Consider adding /Volumes/InsomniacsAppleSSD2/flutter3.24.5/bin to your path.
! The dart binary is not on your path. Consider adding /Volumes/InsomniacsAppleSSD2/flutter3.24.5/bin to your path.
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision dec2ee5c1f (10 weeks ago), 2024-11-13 11:13:06 -0800
• Engine revision a18df97ca5
• Dart version 3.5.4
• DevTools version 2.37.3
• If those were intentional, you can disregard the above warnings; however it is recommended to use "git" directly to perform update checks and upgrades.
[✗] Android toolchain - develop for Android devices
✗ Unable to locate Android SDK.
Install Android Studio from: https://developer.android.com/studio/index.html
On first launch it will assist you in installing the Android SDK components.
(or visit https://flutter.dev/to/macos-android-setup for detailed instructions).
If the Android SDK has been installed to a custom location, please use
`flutter config --android-sdk` to update to that location.
[✓] Xcode - develop for iOS and macOS (Xcode 15.3)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15E204a
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2023.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.9+0-17.0.9b1087.7-11185874)
```
</details>
|
waiting for customer response,in triage
|
low
|
Critical
|
2,798,268,375
|
flutter
|
Flutter crash on Android Pixel phone when App launched
|
### Steps to reproduce
1. Launch the app on a Pixel phone.
### Expected results
The app should start normally.
### Actual results
The app crashed on launch.
### Code sample
_No response_
### Screenshots or Video
_No response_
### Logs
<details open><summary>Logs</summary>
```console
#00 pc 000000000005e0bc /apex/com.android.runtime/lib64/bionic/libc.so (abort+156)
#01 pc 0000000000933d60 /apex/com.android.art/lib64/libart.so (_ZN3art7Runtime5AbortEPKc+348)
#02 pc 0000000000016100 /apex/com.android.art/lib64/libbase.so
#03 pc 000000000000c804 /system/lib64/liblog.so (__android_log_assert+292)
#04 pc 000000000059692c /system/lib64/libhwui.so
#05 pc 00000000005963fc /system/lib64/libhwui.so
#06 pc 00000000002386cc /system/lib64/libhwui.so
#07 pc 0000000000237ca4 /system/lib64/libhwui.so
#08 pc 000000000023754c /system/lib64/libhwui.so
#09 pc 0000000000237f64 /system/lib64/libhwui.so
#10 pc 00000000002cc9f4 /system/lib64/libhwui.so
#11 pc 00000000002cc544 /system/lib64/libhwui.so
#12 pc 000000000047804c /system/lib64/libhwui.so
#13 pc 00000000002ce168 /system/lib64/libhwui.so
#14 pc 00000000004ffe88 /system/lib64/libhwui.so
#15 pc 0000000000329740 /system/lib64/libhwui.so
#16 pc 000000000001786c /system/lib64/libutils.so (_ZN7android6Thread11_threadLoopEPv+252)
#17 pc 0000000000019e88 /system/lib64/libutils.so
#18 pc 00000000000705f8 /apex/com.android.runtime/lib64/bionic/libc.so
#19 pc 0000000000061874 /apex/com.android.runtime/lib64/bionic/libc.so
#20 pc 0000000000000000 <unknown>
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.22.2, on macOS 13.6.1 22G313 darwin-arm64, locale
ko-KR)
• Flutter version 3.22.2 on channel stable at
/Users/chungdan/fvm/versions/3.22.2
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 761747bfc5 (8 months ago), 2024-06-05 22:15:13 +0200
• Engine revision edd8546116
• Dart version 3.4.3
• DevTools version 2.34.3
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/chungdan/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /Applications/Android
Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build
17.0.10+0-17.0.10b1087.21-11572160)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.2)
• Xcode at /Users/chungdan/Downloads/Xcode.app/Contents/Developer
• Build 15C500b
• CocoaPods version 1.16.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2023.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build
17.0.10+0-17.0.10b1087.21-11572160)
[✓] Connected device (4 available)
• SM S901N (mobile) • R3CT108Z0GA •
android-arm64 • Android 13 (API 33)
• iPhone 15 Pro Max (mobile) • EBF9BFAE-3C04-4E7D-8CC1-5644F3E2961D •
ios • com.apple.CoreSimulator.SimRuntime.iOS-17-2 (simulator)
• Mac Designed for iPad (desktop) • mac-designed-for-ipad •
darwin • macOS 13.6.1 22G313 darwin-arm64
• Chrome (web) • chrome •
web-javascript • Google Chrome 131.0.6778.265
! Error: Browsing on the local area network for iPhone. Ensure the device is
unlocked and attached with a cable or associated with the same local area
network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code
-27)
! Error: iPhone is not available because it is unpaired. Pair with the
device in the Xcode Devices Window, and respond to any pairing prompts on
the device. (code -29)
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
|
waiting for customer response,in triage
|
low
|
Critical
|
2,798,291,858
|
pytorch
|
Indexed ^= (XOR in-place) operation doesn't work as expected on MPS backend
|
### 🐛 Describe the bug
The ^= (XOR in-place) operation produces incorrect results on the MPS backend. The behavior is inconsistent with other backends, such as CPU. Specifically, the operation appears to modify unintended values in the tensor.
```python
import torch
# On CPU
zeros = torch.zeros((10, 2), dtype=torch.int16, device="cpu")
zeros[:, 0] ^= 1
print(zeros) # Expected and correct output:
# tensor([[1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0]], dtype=torch.int16)
# On MPS
zeros = torch.zeros((10, 2), dtype=torch.int16, device="mps")
zeros[:, 0] ^= 1
print(zeros) # Incorrect output:
# tensor([[1, 1],
# [1, 1],
# [1, 1],
# [1, 1],
# [1, 1],
# [0, 0],
# [0, 0],
# [0, 0],
# [0, 0],
# [0, 0]], device='mps:0', dtype=torch.int16)
# Non-in-place workaround
zeros = torch.zeros((10, 2), dtype=torch.int16, device="mps")
zeros[:, 0] = zeros[:, 0] ^ 1
print(zeros) # Correct output:
# tensor([[1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0],
# [1, 0]], device='mps:0', dtype=torch.int16)
```
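For reference, the column-indexed XOR semantics that the CPU backend implements (and that MPS should match) can be sketched in plain Python, independent of torch — this mirrors the non-in-place workaround `zeros[:, 0] = zeros[:, 0] ^ 1` shown above:

```python
def xor_column(rows, col, value):
    # XOR `value` into column `col` only, leaving every other column untouched.
    # This is what `zeros[:, col] ^= value` is expected to do on any backend.
    return [
        [cell ^ value if j == col else cell for j, cell in enumerate(row)]
        for row in rows
    ]

rows = [[0, 0] for _ in range(10)]
result = xor_column(rows, 0, 1)
print(result[0])  # [1, 0]
```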
### Versions
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.4)
CMake version: Could not collect
Libc version: N/A
Python version: 3.12.3 | packaged by conda-forge | (main, Apr 15 2024, 18:35:20) [Clang 16.0.6 ] (64-bit runtime)
Python platform: macOS-15.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] onnx==1.17.0
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[conda] numpy 2.1.2 py312h801f5e3_0 conda-forge
[conda] pytorch 2.5.1 py3.12_0 pytorch
[conda] torchaudio 2.5.1 py312_cpu pytorch
[conda] torchvision 0.20.1 py312_cpu pytorch
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen
|
high priority,triaged,module: correctness (silent),module: mps
|
low
|
Critical
|
2,798,298,081
|
langchain
|
TimeWeightedVectorStoreRetriever does not support Chroma due to datetime metadata issue
|
### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
This is an example from LangChain (but using Chroma instead of Faiss)
[link](https://python.langchain.com/docs/how_to/time_weighted_vectorstore/)
```python
from datetime import datetime, timedelta
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_chroma import Chroma
# Low decay rate example
# Define your embedding model
embeddings_model = OpenAIEmbeddings()
# Initialize the Chroma vectorstore
vectorstore = Chroma(embedding_function=embeddings_model)
retriever = TimeWeightedVectorStoreRetriever(
vectorstore=vectorstore, decay_rate=0.0000000000000000000000001, k=1
)
# Add documents with a timestamp from yesterday
yesterday = datetime.now() - timedelta(days=1)
retriever.add_documents(
[Document(page_content="hello world", metadata={"last_accessed_at": yesterday})]
)
retriever.add_documents([Document(page_content="hello foo")])
# Retrieve documents
print("Low Decay Rate Results:")
print(retriever.invoke("hello world"))
# High decay rate example
# Reinitialize the Chroma vectorstore
vectorstore = Chroma(embedding_function=embeddings_model)
retriever = TimeWeightedVectorStoreRetriever(
vectorstore=vectorstore, decay_rate=0.999, k=1
)
# Add documents with a timestamp from yesterday
yesterday = datetime.now() - timedelta(days=1)
retriever.add_documents(
[Document(page_content="hello world", metadata={"last_accessed_at": yesterday})]
)
retriever.add_documents([Document(page_content="hello foo")])
# Retrieve documents
print("\nHigh Decay Rate Results:")
print(retriever.invoke("hello world"))
# Virtual time example
from langchain_core.utils import mock_now
# Mock the current time to tomorrow
tomorrow = datetime.now() + timedelta(days=1)
with mock_now(tomorrow):
print("\nVirtual Time Results:")
print(retriever.invoke("hello world"))
```
### Error Message and Stack Trace (if applicable)
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-21-ea5d6671c756> in <cell line: 14>()
12 # Add documents with a timestamp from yesterday
13 yesterday = datetime.now() - timedelta(days=1)
---> 14 retriever.add_documents(
15 [Document(page_content="hello world", metadata={"last_accessed_at": yesterday})]
16 )
/usr/local/lib/python3.10/dist-packages/langchain/retrievers/time_weighted_retriever.py in add_documents(self, documents, **kwargs)
162 doc.metadata["buffer_idx"] = len(self.memory_stream) + i
163 self.memory_stream.extend(dup_docs)
--> 164 return self.vectorstore.add_documents(dup_docs, **kwargs)
165
166 async def aadd_documents(
/usr/local/lib/python3.10/dist-packages/langchain_core/vectorstores/base.py in add_documents(self, documents, **kwargs)
285 texts = [doc.page_content for doc in documents]
286 metadatas = [doc.metadata for doc in documents]
--> 287 return self.add_texts(texts, metadatas, **kwargs)
288 msg = (
289 f"`add_documents` and `add_texts` has not been implemented "
/usr/local/lib/python3.10/dist-packages/langchain_chroma/vectorstores.py in add_texts(self, texts, metadatas, ids, **kwargs)
564 "langchain_community.vectorstores.utils.filter_complex_metadata."
565 )
--> 566 raise ValueError(e.args[0] + "\n\n" + msg)
567 else:
568 raise e
ValueError: Expected metadata value to be a str, int, float or bool, got 2025-01-19 06:02:08.017004 which is a datetime in upsert.
Try filtering complex metadata from the document using langchain_community.vectorstores.utils.filter_complex_metadata.
### Description
When using `TimeWeightedVectorStoreRetriever` with the `Chroma` vector store, an error occurs when attempting to add documents with `datetime` metadata. The error indicates that `Chroma` does not support `datetime` objects in metadata, which is required by `TimeWeightedVectorStoreRetriever` for its time-weighted retrieval functionality.
### Expected Behavior
The `TimeWeightedVectorStoreRetriever` should successfully add documents to the `Chroma` vector store, even with `datetime` metadata, and allow time-weighted retrieval.
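One possible workaround (my assumption, not an official LangChain fix) is to convert `datetime` metadata values to POSIX timestamps before the documents reach Chroma, since Chroma only accepts `str`, `int`, `float`, or `bool` values. The helper below (`sanitize_metadata` is a hypothetical name) sketches the conversion; note that a full fix would also need to convert the timestamp back to `datetime` when `TimeWeightedVectorStoreRetriever` reads `last_accessed_at`:

```python
from datetime import datetime, timezone

def sanitize_metadata(metadata: dict) -> dict:
    # Replace datetime values with POSIX timestamps (floats), which satisfy
    # Chroma's str/int/float/bool metadata restriction.
    # Hypothetical helper, not part of LangChain.
    return {
        key: value.timestamp() if isinstance(value, datetime) else value
        for key, value in metadata.items()
    }

meta = sanitize_metadata(
    {"last_accessed_at": datetime(2025, 1, 19, tzinfo=timezone.utc), "source": "demo"}
)
print(meta)
```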
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Sun Nov 10 10:07:59 UTC 2024
> Python Version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.3.30
> langchain: 0.3.14
> langchain_community: 0.3.14
> langsmith: 0.2.3
> langchain_chroma: 0.2.0
> langchain_experimental: 0.3.4
> langchain_openai: 0.3.1
> langchain_text_splitters: 0.3.3
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.10
> async-timeout: 4.0.3
> chromadb: 0.6.3
> dataclasses-json: 0.6.7
> fastapi: 0.115.6
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 1.26.4
> openai: 1.59.8
> orjson: 3.10.12
> packaging: 24.2
> pydantic: 2.10.3
> pydantic-settings: 2.7.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.36
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
|
Ɑ: vector store,🤖:bug
|
low
|
Critical
|
2,798,323,447
|
godot
|
Setting a Window's borderless to true causes weird stretching when force_native is true
|
### Tested versions
Reproducible in: 4.3 stable
### System information
Godot v4.3.stable - Windows 10.0.22631 - GLES3 (Compatibility) - Intel(R) Iris(R) Xe Graphics (Intel Corporation; 30.0.100.9836) - 11th Gen Intel(R) Core(TM) i5-1135G7 @ 2.40GHz (8 Threads)
### Issue description
When toggling the borderless option on a Window node with force_native: true, the window contents appear to stretch incorrectly. The contents stretch upward into the window title bar area, creating a distorted appearance.
Note:
I only tested on Windows 11. It would be helpful if others could confirm this behavior on different operating systems.
https://github.com/user-attachments/assets/2506243a-640b-4d37-b862-2f688adf952b
### Steps to reproduce
1. Create a Window node with force_native: true and visible: false.
2. Set the Window node's visible = true in the _ready() function.
3. Toggle the Window node's borderless flag on and off.
4. The Window node's contents are stretched.
### Minimal reproduction project (MRP)
How to Reproduce:
[window_borderless_bugreport.zip](https://github.com/user-attachments/files/18473215/window_borderless_bugreport.zip)
|
bug,needs testing,topic:gui
|
low
|
Critical
|
2,798,347,087
|
rust
|
Tracking Issue for `unchecked_disjoint_bitor`
|
Feature gate: `#![feature(disjoint_bitor)]`
This is a tracking issue for the `unchecked_disjoint_bitor` method on integer types (and the associated intrinsic implementing it).
This is a method for cases where `a | b` and `a + b` return the same value, named after the [`disjoint` flag in LLVM](https://llvm.org/docs/LangRef.html#or-instruction).
ACP: https://github.com/rust-lang/libs-team/issues/373
### Public API
```rust
impl u8/.../u128/usize {
pub const unsafe fn unchecked_disjoint_bitor(self, other: Self) -> Self;
}
```
### Steps / History
<!--
For larger features, more steps might be involved.
If the feature is changed later, please add those PRs here as well.
-->
- [ ] Implementation: #135760
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
<!--
Once the feature has gone through a few release cycles and there are no
unresolved questions left, the feature might be ready for stabilization.
If this feature didn't go through the RFC process, a final comment period
(FCP) is always needed before stabilization. This works as follows:
A library API team member can kick off the stabilization process, at which point
the rfcbot will ask all the team members to verify they agree with
stabilization. Once enough members agree and there are no concerns, the final
comment period begins: this issue will be marked as such and will be listed
in the next This Week in Rust newsletter. If no blocking concerns are raised in
that period of 10 days, a stabilization PR can be opened by anyone.
-->
### Unresolved Questions
<!--
Include any open questions that need to be answered before the feature can be
stabilised. If multiple (unrelated) big questions come up, it can be a good idea
to open a separate issue for each, to make it easier to keep track of the
discussions.
It's useful to link any relevant discussions and conclusions (whether on GitHub,
Zulip, or the internals forum) here.
-->
- None yet.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
|
T-libs-api,C-tracking-issue
|
low
|
Minor
|
2,798,368,603
|
flutter
|
Custom shader building/testing inconsistant
|
### Steps to reproduce
1. Create flutter app through Visual Studio Code: Empty Application
2. Add shader code that imports sibiling shader code.
3. In pubspec.yaml import the shader code.
4. Try debug launch.
### Expected results
Shader code imports and compiles without issue, consistently across platforms.
If there is a compile error, print the error and cancel the build/test.
### Actual results
It shows different outcomes.
[On Windows machine]
- Building to Android emulator hangs up on `Flutter: Running Gradle task 'assembleDebug'...`
[On Mac machine]
- Debug to MacOS or IOS builds and launches emulator.
The build/test works on both platforms if `common.frag` and `simple.frag` are combined as below.
```glsl
#include <flutter/runtime_effect.glsl>
precision mediump float;
uniform vec2 iResolution;
out vec4 fragColor;
void main(){
vec2 uv = FlutterFragCoord().xy / iResolution.xy;
fragColor = vec4(uv, 0., 1.);
}
```
I also think Flutter isn't reliable about printing shader errors.
Sometimes it prints shader errors during the build, which cancels it, but sometimes it hangs with no feedback.
When it hangs, I can't make changes to the shader code, even if I stop the process, until I force-quit Java and any other related instances.
It could be a problem with my environment, but I found an old issue that was closed ([#127737](https://github.com/flutter/flutter/issues/127737)), so I thought I would give it another shot with much simpler code and steps to reproduce.
### Code sample
<details open><summary>Code sample</summary>
- lib
  - main.dart
- shaders
  - common.frag
  - simple.frag
```glsl
//common.frag
#include <flutter/runtime_effect.glsl>
precision mediump float;
uniform vec2 iResolution;
out vec4 fragColor;
```
```glsl
//simple.frag
#include <./common.frag>
void main(){
vec2 uv = FlutterFragCoord().xy / iResolution.xy;
fragColor = vec4(uv, 0., 1.);
}
```
```yaml
# pubspec.yaml
flutter:
uses-material-design: true
shaders:
- shaders/simple.frag
```
</details>
### Screenshots or Video
_No response_
### Logs
<details open><summary>Logs</summary>
```console
[ +151 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[ +3 ms] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[ ] Artifact Instance of 'LegacyCanvasKitRemover' is not required, skipping update.
[ +2 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[ +76 ms] executing: C:\Users\Username\AppData\Local\Android\sdk\platform-tools\adb.exe devices -l
[ +62 ms] List of devices attached
emulator-5554 device product:sdk_gphone64_x86_64 model:sdk_gphone64_x86_64 device:emu64xa
transport_id:3
[ +8 ms] C:\Users\Username\AppData\Local\Android\sdk\platform-tools\adb.exe -s emulator-5554 shell getprop
[ +56 ms] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[ +5 ms] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[ +115 ms] Skipping pub get: version match.
[ +117 ms] Generating
C:\Users\Username\Desktop\work\temp\TheSeries\git\android\app\src\main\java\io\flutter\plugins\GeneratedPlugin
Registrant.java
[ +60 ms] ro.hardware = ranchu
[ +78 ms] No packages with native assets. Skipping native assets compilation.
[ +3 ms] Initializing file store
[ +6 ms] Skipping target: gen_localizations
[ +4 ms] gen_dart_plugin_registrant: Starting due to {InvalidatedReasonKind.inputChanged: The following
inputs have updated contents:
C:\Users\Username\Desktop\work\temp\TheSeries\git\.dart_tool\package_config_subset}
[ +79 ms] gen_dart_plugin_registrant: Complete
[ +1 ms] Skipping target: _composite
[ +1 ms] complete
[ +5 ms] Launching lib\main.dart on sdk gphone64 x86 64 in debug mode...
[ +3 ms] C:\Users\Username\dev\flutter\bin\cache\dart-sdk\bin\dartaotruntime.exe
C:\Users\Username\dev\flutter\bin\cache\dart-sdk\bin\snapshots\frontend_server_aot.dart.snapshot --sdk-root
C:\Users\Username\dev\flutter\bin\cache\artifacts\engine\common\flutter_patched_sdk/ --incremental
--target=flutter --experimental-emit-debug-metadata --output-dill
C:\Users\Username\AppData\Local\Temp\flutter_tools.82b8d426\flutter_tool.2148e57c\app.dill --packages
C:\Users\Username\Desktop\work\temp\TheSeries\git\.dart_tool\package_config.json -Ddart.vm.profile=false
-Ddart.vm.product=false --enable-asserts --track-widget-creation --filesystem-scheme org-dartlang-root
--initialize-from-dill build\cache.dill.track.dill --verbosity=error
--enable-experiment=alternative-invalidation-strategy
[ +8 ms] executing: C:\Users\Username\AppData\Local\Android\sdk\build-tools\35.0.0\aapt dump xmltree
C:\Users\Username\Desktop\work\temp\TheSeries\git\build\app\outputs\flutter-apk\app-debug.apk
AndroidManifest.xml
[ +34 ms] Exit code 0 from: C:\Users\Username\AppData\Local\Android\sdk\build-tools\35.0.0\aapt dump xmltree
C:\Users\Username\Desktop\work\temp\TheSeries\git\build\app\outputs\flutter-apk\app-debug.apk
AndroidManifest.xml
[ +1 ms] N: android=http://schemas.android.com/apk/res/android
E: manifest (line=2)
A: android:versionCode(0x0101021b)=(type 0x10)0x1
A: android:versionName(0x0101021c)="0.1.0" (Raw: "0.1.0")
A: android:compileSdkVersion(0x01010572)=(type 0x10)0x23
A: android:compileSdkVersionCodename(0x01010573)="15" (Raw: "15")
A: package="com.example.git" (Raw: "com.example.git")
A: platformBuildVersionCode=(type 0x10)0x23
A: platformBuildVersionName=(type 0x10)0xf
E: uses-sdk (line=7)
A: android:minSdkVersion(0x0101020c)=(type 0x10)0x15
A: android:targetSdkVersion(0x01010270)=(type 0x10)0x23
E: uses-permission (line=15)
A: android:name(0x01010003)="android.permission.INTERNET" (Raw:
"android.permission.INTERNET")
E: queries (line=23)
E: intent (line=24)
E: action (line=25)
A: android:name(0x01010003)="android.intent.action.PROCESS_TEXT" (Raw:
"android.intent.action.PROCESS_TEXT")
E: data (line=27)
A: android:mimeType(0x01010026)="text/plain" (Raw: "text/plain")
E: permission (line=31)
A: android:name(0x01010003)="com.example.git.DYNAMIC_RECEIVER_NOT_EXPORTED_PERMISSION" (Raw:
"com.example.git.DYNAMIC_RECEIVER_NOT_EXPORTED_PERMISSION")
A: android:protectionLevel(0x01010009)=(type 0x11)0x2
E: uses-permission (line=35)
A: android:name(0x01010003)="com.example.git.DYNAMIC_RECEIVER_NOT_EXPORTED_PERMISSION" (Raw:
"com.example.git.DYNAMIC_RECEIVER_NOT_EXPORTED_PERMISSION")
E: application (line=37)
A: android:label(0x01010001)="git" (Raw: "git")
A: android:icon(0x01010002)=@0x7f0a0000
A: android:name(0x01010003)="android.app.Application" (Raw: "android.app.Application")
A: android:debuggable(0x0101000f)=(type 0x12)0xffffffff
A: android:extractNativeLibs(0x010104ea)=(type 0x12)0xffffffff
A: android:appComponentFactory(0x0101057a)="androidx.core.app.CoreComponentFactory" (Raw:
"androidx.core.app.CoreComponentFactory")
E: activity (line=44)
A: android:theme(0x01010000)=@0x7f0c0000
A: android:name(0x01010003)="com.example.git.MainActivity" (Raw:
"com.example.git.MainActivity")
A: android:exported(0x01010010)=(type 0x12)0xffffffff
A: android:taskAffinity(0x01010012)="" (Raw: "")
A: android:launchMode(0x0101001d)=(type 0x10)0x1
A: android:configChanges(0x0101001f)=(type 0x11)0x40003fb4
A: android:windowSoftInputMode(0x0101022b)=(type 0x11)0x10
A: android:hardwareAccelerated(0x010102d3)=(type 0x12)0xffffffff
E: meta-data (line=60)
A: android:name(0x01010003)="io.flutter.embedding.android.NormalTheme" (Raw:
"io.flutter.embedding.android.NormalTheme")
A: android:resource(0x01010025)=@0x7f0c0001
E: intent-filter (line=64)
E: action (line=65)
A: android:name(0x01010003)="android.intent.action.MAIN" (Raw:
"android.intent.action.MAIN")
E: category (line=67)
A: android:name(0x01010003)="android.intent.category.LAUNCHER" (Raw:
"android.intent.category.LAUNCHER")
E: meta-data (line=74)
A: android:name(0x01010003)="flutterEmbedding" (Raw: "flutterEmbedding")
A: android:value(0x01010024)=(type 0x10)0x2
E: uses-library (line=78)
A: android:name(0x01010003)="androidx.window.extensions" (Raw:
"androidx.window.extensions")
A: android:required(0x0101028e)=(type 0x12)0x0
E: uses-library (line=81)
A: android:name(0x01010003)="androidx.window.sidecar" (Raw: "androidx.window.sidecar")
A: android:required(0x0101028e)=(type 0x12)0x0
E: provider (line=85)
A: android:name(0x01010003)="androidx.startup.InitializationProvider" (Raw:
"androidx.startup.InitializationProvider")
A: android:exported(0x01010010)=(type 0x12)0x0
A: android:authorities(0x01010018)="com.example.git.androidx-startup" (Raw:
"com.example.git.androidx-startup")
E: meta-data (line=89)
A: android:name(0x01010003)="androidx.lifecycle.ProcessLifecycleInitializer" (Raw:
"androidx.lifecycle.ProcessLifecycleInitializer")
A: android:value(0x01010024)="androidx.startup" (Raw: "androidx.startup")
E: meta-data (line=92)
A: android:name(0x01010003)="androidx.profileinstaller.ProfileInstallerInitializer"
(Raw: "androidx.profileinstaller.ProfileInstallerInitializer")
A: android:value(0x01010024)="androidx.startup" (Raw: "androidx.startup")
E: receiver (line=97)
A: android:name(0x01010003)="androidx.profileinstaller.ProfileInstallReceiver" (Raw:
"androidx.profileinstaller.ProfileInstallReceiver")
A: android:permission(0x01010006)="android.permission.DUMP" (Raw:
"android.permission.DUMP")
A: android:enabled(0x0101000e)=(type 0x12)0xffffffff
A: android:exported(0x01010010)=(type 0x12)0xffffffff
A: android:directBootAware(0x01010505)=(type 0x12)0x0
E: intent-filter (line=103)
E: action (line=104)
A: android:name(0x01010003)="androidx.profileinstaller.action.INSTALL_PROFILE" (Raw:
"androidx.profileinstaller.action.INSTALL_PROFILE")
E: intent-filter (line=106)
E: action (line=107)
A: android:name(0x01010003)="androidx.profileinstaller.action.SKIP_FILE" (Raw:
"androidx.profileinstaller.action.SKIP_FILE")
E: intent-filter (line=109)
E: action (line=110)
A: android:name(0x01010003)="androidx.profileinstaller.action.SAVE_PROFILE" (Raw:
"androidx.profileinstaller.action.SAVE_PROFILE")
E: intent-filter (line=112)
E: action (line=113)
A: android:name(0x01010003)="androidx.profileinstaller.action.BENCHMARK_OPERATION"
(Raw: "androidx.profileinstaller.action.BENCHMARK_OPERATION")
[ +10 ms] executing: C:\Users\Username\AppData\Local\Android\sdk\platform-tools\adb.exe -s emulator-5554
shell -x logcat -v time -t 1
[ +11 ms] <- compile package:git/main.dart
[ +71 ms] --------- beginning of main
01-20 06:48:22.221 I/BistoHotwordHelper( 1403): (REDACTED) getHotwordActive::active query: %s,
watch: %s, devices connected: %s
[ +10 ms] executing: C:\Users\Username\AppData\Local\Android\sdk\platform-tools\adb.exe version
[ +39 ms] Android Debug Bridge version 1.0.41
Version 35.0.2-12147458
Installed as C:\Users\Username\AppData\Local\Android\sdk\platform-tools\adb.exe
Running on Windows 10.0.22631
[ +1 ms] executing: C:\Users\Username\AppData\Local\Android\sdk\platform-tools\adb.exe start-server
[ +47 ms] Building APK
[ +8 ms] executing: C:\Program Files\Android\Android Studio\jbr\bin\java -version
[ +120 ms] Exit code 0 from: C:\Program Files\Android\Android Studio\jbr\bin\java -version
[ ] openjdk version "17.0.10" 2024-01-16
OpenJDK Runtime Environment (build 17.0.10+0--11609105)
OpenJDK 64-Bit Server VM (build 17.0.10+0--11609105, mixed mode)
[ +10 ms] executing: C:\Program Files\Android\Android Studio\jbr\bin\java --version
[ +150 ms] Exit code 0 from: C:\Program Files\Android\Android Studio\jbr\bin\java --version
[ ] openjdk 17.0.10 2024-01-16
OpenJDK Runtime Environment (build 17.0.10+0--11609105)
OpenJDK 64-Bit Server VM (build 17.0.10+0--11609105, mixed mode)
[ +9 ms] CMake project not found, skipping support Android 15 16k page size migration.
[ +34 ms] Using gradle from C:\Users\Username\Desktop\work\temp\TheSeries\git\android\gradlew.bat.
[ +1 ms] Running Gradle task 'assembleDebug'...
[ +4 ms] executing: [C:\Users\Username\Desktop\work\temp\TheSeries\git\android/]
C:\Users\Username\Desktop\work\temp\TheSeries\git\android\gradlew.bat --full-stacktrace --info -Pverbose=true
-Ptarget-platform=android-x64 -Ptarget=C:\Users\Username\Desktop\work\temp\TheSeries\git\lib\main.dart
-Pbase-application-name=android.app.Application -Pdart-obfuscation=false -Ptrack-widget-creation=true
-Ptree-shake-icons=false -Pfilesystem-scheme=org-dartlang-root assembleDebug
[ +633 ms] Initialized native services in: C:\Users\Username\.gradle\native
[ ] Initialized jansi services in: C:\Users\Username\.gradle\native
[ +109 ms] Received JVM installation metadata from 'C:\Program Files\Android\Android Studio\jbr':
{JAVA_HOME=C:\Program Files\Android\Android Studio\jbr, JAVA_VERSION=17.0.10, JAVA_VENDOR=JetBrains s.r.o.,
RUNTIME_NAME=OpenJDK Runtime Environment, RUNTIME_VERSION=17.0.10+0--11609105, VM_NAME=OpenJDK 64-Bit Server
VM, VM_VERSION=17.0.10+0--11609105, VM_VENDOR=JetBrains s.r.o., OS_ARCH=amd64}
[ +498 ms] The client will now receive all logging from the daemon (pid: 24552). The daemon log file:
C:\Users\Username\.gradle\daemon\8.3\daemon-24552.out.log
[ +1 ms] Starting 3rd build in daemon [uptime: 14 mins 2.701 secs, performance: 100%, GC rate: 0.00/s, heap
usage: 0% of 4 GiB, non-heap usage: 5% of 2 GiB]
[ ] Using 20 worker leases.
[ ] Now considering [C:\Users\Username\Desktop\work\temp\TheSeries\git\android,
C:\Users\Username\dev\flutter\packages\flutter_tools\gradle] as hierarchies to watch
[ ] Watching 2 directory hierarchies to track changes
[ ] Watching the file system is configured to be enabled if available
[ ] File system watching is active
[ ] Starting Build
[ ] Now considering [C:\Users\Username\dev\flutter\packages\flutter_tools\gradle,
C:\Users\Username\Desktop\work\temp\TheSeries\git\android] as hierarchies to watch
[ ] Watching 2 directory hierarchies to track changes
[ +103 ms] > Configure project :gradle
[ ] Evaluating project ':gradle' using build file
'C:\Users\Username\dev\flutter\packages\flutter_tools\gradle\build.gradle.kts'.
[ ] The configuration :gradle:classpath is both resolvable and consumable. This is considered a legacy
configuration and it will eventually only be possible to be one of these.
[ ] The configuration :gradle:classpath is both consumable and declarable. This combination is
incorrect, only one of these flags should be set.
[ ] The configuration :gradle:classpath is both resolvable and consumable. This is considered a legacy
configuration and it will eventually only be possible to be one of these.
[ ] The configuration :gradle:classpath is both consumable and declarable. This combination is
incorrect, only one of these flags should be set.
[ ] Caching disabled for Kotlin DSL accessors for project ':gradle' because:
[ ] Build cache is disabled
[ ] Skipping Kotlin DSL accessors for project ':gradle' as it is up-to-date.
[ ] The configuration detachedConfiguration1 is both resolvable and consumable. This is considered a
legacy configuration and it will eventually only be possible to be one of these.
[ ] The configuration detachedConfiguration1 is both consumable and declarable. This combination is
incorrect, only one of these flags should be set.
[ ] The configuration detachedConfiguration1 is both resolvable and consumable. This is considered a
legacy configuration and it will eventually only be possible to be one of these.
[ ] The configuration detachedConfiguration1 is both consumable and declarable. This combination is
incorrect, only one of these flags should be set.
[ ] The configuration detachedConfiguration2 is both resolvable and consumable. This is considered a
legacy configuration and it will eventually only be possible to be one of these.
[ ] The configuration detachedConfiguration2 is both consumable and declarable. This combination is
incorrect, only one of these flags should be set.
[ ] The configuration detachedConfiguration2 is both resolvable and consumable. This is considered a
legacy configuration and it will eventually only be possible to be one of these.
[ ] The configuration detachedConfiguration2 is both consumable and declarable. This combination is
incorrect, only one of these flags should be set.
[ ] The configuration classpath is both resolvable and consumable. This is considered a legacy
configuration and it will eventually only be possible to be one of these.
[ ] The configuration classpath is both consumable and declarable. This combination is incorrect, only
one of these flags should be set.
[ ] The configuration :gradle:mainSourceElements is both consumable and declarable. This combination
is incorrect, only one of these flags should be set.
[ ] Resolve mutations for :gradle:compileJava (Thread[Execution worker,5,main]) started.
[ ] :gradle:compileJava (Thread[Execution worker,5,main]) started.
[ ] > Task :gradle:compileJava NO-SOURCE
[ ] Skipping task ':gradle:compileJava' as it has no source files and no previous output files.
[ ] Resolve mutations for :gradle:compileGroovy (Thread[Execution worker,5,main]) started.
[ ] :gradle:compileGroovy (Thread[included builds,5,main]) started.
[ ] > Task :gradle:compileGroovy UP-TO-DATE
[ ] Caching disabled for task ':gradle:compileGroovy' because:
[ ] Build cache is disabled
[ ] Skipping task ':gradle:compileGroovy' as it is up-to-date.
[ ] Resolve mutations for :gradle:pluginDescriptors (Thread[included builds,5,main]) started.
[ ] :gradle:pluginDescriptors (Thread[Execution worker Thread 10,5,main]) started.
[ ] > Task :gradle:pluginDescriptors UP-TO-DATE
[ ] Caching disabled for task ':gradle:pluginDescriptors' because:
[ ] Build cache is disabled
[ ] Skipping task ':gradle:pluginDescriptors' as it is up-to-date.
[ ] Resolve mutations for :gradle:processResources (Thread[Execution worker Thread 10,5,main])
started.
[ ] :gradle:processResources (Thread[Execution worker Thread 10,5,main]) started.
[ ] > Task :gradle:processResources UP-TO-DATE
[ ] Caching disabled for task ':gradle:processResources' because:
[ ] Build cache is disabled
[ ] Skipping task ':gradle:processResources' as it is up-to-date.
[ ] Resolve mutations for :gradle:classes (Thread[Execution worker Thread 10,5,main]) started.
[ ] :gradle:classes (Thread[Execution worker Thread 10,5,main]) started.
[ ] > Task :gradle:classes UP-TO-DATE
[ ] Skipping task ':gradle:classes' as it has no actions.
[ ] Resolve mutations for :gradle:jar (Thread[Execution worker Thread 10,5,main]) started.
[ ] :gradle:jar (Thread[Execution worker Thread 10,5,main]) started.
[ ] > Task :gradle:jar UP-TO-DATE
[ ] Caching disabled for task ':gradle:jar' because:
[ ] Build cache is disabled
[ ] Skipping task ':gradle:jar' as it is up-to-date.
[ ] The configuration classpath is both resolvable and consumable. This is considered a legacy
configuration and it will eventually only be possible to be one of these.
[ ] The configuration classpath is both consumable and declarable. This combination is incorrect, only
one of these flags should be set.
[ +65 ms] Settings evaluated using settings file
'C:\Users\Username\Desktop\work\temp\TheSeries\git\android\settings.gradle'.
[ +1 ms] Projects loaded. Root project using build file
'C:\Users\Username\Desktop\work\temp\TheSeries\git\android\build.gradle'.
[ ] Included projects: [root project 'android', project ':app']
[ +294 ms] > Configure project :app
[ ] Evaluating project ':app' using build file
'C:\Users\Username\Desktop\work\temp\TheSeries\git\android\app\build.gradle'.
[ ] Using default execution profile
[ ] Using Kotlin Gradle Plugin gradle76 variant
[ ] The configuration classpath is both resolvable and consumable. This is considered a legacy
configuration and it will eventually only be possible to be one of these.
[ ] The configuration classpath is both consumable and declarable. This combination is incorrect, only
one of these flags should be set.
[ ] The configuration classpath is both resolvable and consumable. This is considered a legacy
configuration and it will eventually only be possible to be one of these.
[ ] The configuration classpath is both consumable and declarable. This combination is incorrect, only
one of these flags should be set.
[ ] Parsed shrinker version: 8.1.56
[ ] > Configure project :
[ ] Evaluating root project 'android' using build file
'C:\Users\Username\Desktop\work\temp\TheSeries\git\android\build.gradle'.
[ ] All projects evaluated.
[ ] The configuration :classpath is both resolvable and consumable. This is considered a legacy
configuration and it will eventually only be possible to be one of these.
[ ] The configuration :classpath is both consumable and declarable. This combination is incorrect,
only one of these flags should be set.
[ ] The configuration :classpath is both resolvable and consumable. This is considered a legacy
configuration and it will eventually only be possible to be one of these.
[ ] The configuration :classpath is both consumable and declarable. This combination is incorrect,
only one of these flags should be set.
[ ] The configuration :classpath is both resolvable and consumable. This is considered a legacy
configuration and it will eventually only be possible to be one of these.
[ ] The configuration :classpath is both consumable and declarable. This combination is incorrect,
only one of these flags should be set.
[ ] The configuration :classpath is both resolvable and consumable. This is considered a legacy
configuration and it will eventually only be possible to be one of these.
[ ] The configuration :classpath is both consumable and declarable. This combination is incorrect,
only one of these flags should be set.
[ ] The configuration :classpath is both resolvable and consumable. This is considered a legacy
configuration and it will eventually only be possible to be one of these.
[ ] The configuration :classpath is both consumable and declarable. This combination is incorrect,
only one of these flags should be set.
[ ] The configuration :classpath is both resolvable and consumable. This is considered a legacy
configuration and it will eventually only be possible to be one of these.
[ ] The configuration :classpath is both consumable and declarable. This combination is incorrect,
only one of these flags should be set.
[ ] The configuration :app:classpath is both resolvable and consumable. This is considered a legacy
configuration and it will eventually only be possible to be one of these.
[ ] The configuration :app:classpath is both consumable and declarable. This combination is incorrect,
only one of these flags should be set.
[ ] The configuration :app:classpath is both resolvable and consumable. This is considered a legacy
configuration and it will eventually only be possible to be one of these.
[ ] The configuration :app:classpath is both consumable and declarable. This combination is incorrect,
only one of these flags should be set.
[ ] The configuration :app:classpath is both resolvable and consumable. This is considered a legacy
configuration and it will eventually only be possible to be one of these.
[ ] The configuration :app:classpath is both consumable and declarable. This combination is incorrect,
only one of these flags should be set.
[ ] The configuration :app:classpath is both resolvable and consumable. This is considered a legacy
configuration and it will eventually only be possible to be one of these.
[ ] The configuration :app:classpath is both consumable and declarable. This combination is incorrect,
only one of these flags should be set.
[ ] The configuration :app:classpath is both resolvable and consumable. This is considered a legacy
configuration and it will eventually only be possible to be one of these.
[ ] The configuration :app:classpath is both consumable and declarable. This combination is incorrect,
only one of these flags should be set.
[ ] The configuration :app:classpath is both resolvable and consumable. This is considered a legacy
configuration and it will eventually only be possible to be one of these.
[ ] The configuration :app:classpath is both consumable and declarable. This combination is incorrect,
only one of these flags should be set.
[ ] Task name matched 'assembleDebug'
[ ] Selected primary task 'assembleDebug' from project :
[ ] WARNING: We recommend using a newer Android Gradle plugin to use compileSdk = 35
[ ] This Android Gradle plugin (8.1.0) was tested up to compileSdk = 33 (and compileSdkPreview =
"UpsideDownCakePrivacySandbox").
[ ] You are strongly encouraged to update your project to use a newer
[ ] Android Gradle plugin that has been tested with compileSdk = 35.
[ ] If you are already using the latest version of the Android Gradle plugin,
[ ] you may need to wait until a newer version with support for compileSdk = 35 is available.
[ ] To suppress this warning, add/update
[ ] android.suppressUnsupportedCompileSdk=35
[ ] to this project's gradle.properties.
[ ] Tasks to be executed: [task ':app:preBuild', task ':app:preDebugBuild', task
':app:mergeDebugNativeDebugMetadata', task ':app:compileFlutterBuildDebug', task
':app:packJniLibsflutterBuildDebug', task ':app:checkDebugAarMetadata', task ':app:cleanMergeDebugAssets',
task ':app:mergeDebugShaders', task ':app:compileDebugShaders', task ':app:generateDebugAssets', task
':app:mergeDebugAssets', task ':app:copyFlutterAssetsDebug', task ':app:generateDebugResValues', task
':app:mapDebugSourceSetPaths', task ':app:generateDebugResources', task ':app:mergeDebugResources', task
':app:packageDebugResources', task ':app:parseDebugLocalResources', task
':app:createDebugCompatibleScreenManifests', task ':app:extractDeepLinksDebug', task
':app:processDebugMainManifest', task ':app:processDebugManifest', task
':app:processDebugManifestForPackage', task ':app:processDebugResources', task ':app:compileDebugKotlin',
task ':app:javaPreCompileDebug', task ':app:compileDebugJavaWithJavac', task ':app:compressDebugAssets', task
':app:processDebugJavaRes', task ':app:mergeDebugJavaResource', task ':app:checkDebugDuplicateClasses', task
':app:desugarDebugFileDependencies', task ':app:mergeExtDexDebug', task ':app:mergeLibDexDebug', task
':app:dexBuilderDebug', task ':app:mergeProjectDexDebug', task ':app:mergeDebugJniLibFolders', task
':app:mergeDebugNativeLibs', task ':app:stripDebugDebugSymbols', task ':app:validateSigningDebug', task
':app:writeDebugAppMetadata', task ':app:writeDebugSigningConfigVersions', task ':app:packageDebug', task
':app:createDebugApkListingFileRedirect', task ':app:assembleDebug']
[ +2 ms] Tasks that were excluded: []
[ ] Resolve mutations for :app:preBuild (Thread[Execution worker Thread 17,5,main]) started.
[ ] :app:preBuild (Thread[Execution worker Thread 17,5,main]) started.
[ ] > Task :app:preBuild UP-TO-DATE
[ ] Skipping task ':app:preBuild' as it has no actions.
[ ] Resolve mutations for :app:preDebugBuild (Thread[Execution worker Thread 17,5,main]) started.
[ ] :app:preDebugBuild (Thread[Execution worker Thread 17,5,main]) started.
[ ] > Task :app:preDebugBuild UP-TO-DATE
[ ] Skipping task ':app:preDebugBuild' as it has no actions.
[ ] Resolve mutations for :app:mergeDebugNativeDebugMetadata (Thread[Execution worker Thread
17,5,main]) started.
[ ] :app:mergeDebugNativeDebugMetadata (Thread[Execution worker Thread 17,5,main]) started.
[ ] > Task :app:mergeDebugNativeDebugMetadata NO-SOURCE
[ ] Skipping task ':app:mergeDebugNativeDebugMetadata' as it has no source files and no previous
output files.
[ ] Resolve mutations for :app:compileFlutterBuildDebug (Thread[Execution worker Thread 17,5,main])
started.
[ +60 ms] :app:compileFlutterBuildDebug (Thread[Execution worker Thread 6,5,main]) started.
[+1915 ms] > Task :app:compileFlutterBuildDebug
[ ] Caching disabled for task ':app:compileFlutterBuildDebug' because:
[ ] Build cache is disabled
[ ] Task ':app:compileFlutterBuildDebug' is not up-to-date because:
[ ] Task has failed previously.
[ ] Starting process 'command 'C:\Users\Username\dev\flutter\bin\flutter.bat''. Working directory:
C:\Users\Username\Desktop\work\temp\TheSeries\git Command: C:\Users\Username\dev\flutter\bin\flutter.bat
--verbose assemble --no-version-check --depfile
C:\Users\Username\Desktop\work\temp\TheSeries\git\build\app\intermediates\flutter\debug/flutter_build.d
--output C:\Users\Username\Desktop\work\temp\TheSeries\git\build\app\intermediates\flutter\debug
-dTargetFile=C:\Users\Username\Desktop\work\temp\TheSeries\git\lib\main.dart -dTargetPlatform=android
-dBuildMode=debug -dTrackWidgetCreation=true -dFlavor= -dAndroidArchs=android-x64 -dMinSdkVersion=21
debug_android_application
[ ] Successfully started process 'command 'C:\Users\Username\dev\flutter\bin\flutter.bat''
[ ] [ +57 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[ ] [ +2 ms] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[ ] [ ] Artifact Instance of 'LegacyCanvasKitRemover' is not required, skipping update.
[ ] [ +2 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[ ] [ +78 ms] Artifact Instance of 'MaterialFonts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'GradleWrapper' is not required, skipping update.
[ ] [ +2 ms] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[ ] [ ] Artifact Instance of 'LegacyCanvasKitRemover' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FlutterSdk' is not required, skipping update.
[ ] [ ] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'IosUsbArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'FontSubsetArtifacts' is not required, skipping update.
[ ] [ ] Artifact Instance of 'PubDependencies' is not required, skipping update.
[ ] [ +60 ms] Initializing file store
[ ] [ +12 ms] Done initializing file store
[ ] [ +47 ms] Skipping target: gen_localizations
[ ] [ +6 ms] Skipping target: gen_dart_plugin_registrant
[ ] [ +552 ms] Skipping target: kernel_snapshot_program
[ ] [ +3 ms] Skipping target: native_assets
[ ] [ ] Skipping target: kernel_snapshot_native_assets
[ ] [ ] Skipping target: kernel_snapshot
[ ] [ +280 ms] debug_android_application: Starting due to {InvalidatedReasonKind.inputChanged: The
following inputs have updated contents: C:\Users\Username\Desktop\work\temp\TheSeries\git\pubspec.yaml}
[ ] [ +131 ms] shaderc command:
[C:\Users\Username\dev\flutter\bin\cache\artifacts\engine\windows-x64\impellerc.exe, --sksl,
--runtime-stage-gles, --runtime-stage-vulkan, --iplr,
--sl=C:\Users\Username\Desktop\work\temp\TheSeries\git\build\app\intermediates\flutter\debug\flutter_assets\sh
aders/simple.frag,
--spirv=C:\Users\Username\Desktop\work\temp\TheSeries\git\build\app\intermediates\flutter\debug\flutter_assets
\shaders/simple.frag.spirv, --input=C:\Users\Username\Desktop\work\temp\TheSeries\git\shaders\simple.frag,
--input-type=frag, --include=C:\Users\Username\Desktop\work\temp\TheSeries\git\shaders,
--include=C:\Users\Username\dev\flutter\bin\cache\artifacts\engine\windows-x64\shader_lib]
[ +1 ms] [ +2 ms] shaderc command:
[C:\Users\Username\dev\flutter\bin\cache\artifacts\engine\windows-x64\impellerc.exe, --sksl,
--runtime-stage-gles, --runtime-stage-vulkan, --iplr,
--sl=C:\Users\Username\Desktop\work\temp\TheSeries\git\build\app\intermediates\flutter\debug\flutter_assets\sh
aders/ink_sparkle.frag,
--spirv=C:\Users\Username\Desktop\work\temp\TheSeries\git\build\app\intermediates\flutter\debug\flutter_assets
\shaders/ink_sparkle.frag.spirv,
--input=C:\Users\Username\dev\flutter\packages\flutter\lib\src\material\shaders\ink_sparkle.frag,
--input-type=frag, --include=C:\Users\Username\dev\flutter\packages\flutter\lib\src\material\shaders,
--include=C:\Users\Username\dev\flutter\bin\cache\artifacts\engine\windows-x64\shader_lib]
```
it hangs after this.
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```
Flutter doctor -v
[√] Flutter (Channel stable, 3.27.2, on Microsoft Windows [Version 10.0.22631.4602], locale ko-KR)
• Flutter version 3.27.2 on channel stable at C:\Users\Username\dev\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 68415ad1d9 (7 days ago), 2025-01-13 10:22:03 -0800
• Engine revision e672b006cb
• Dart version 3.6.1
• DevTools version 2.40.2
[√] Windows Version (Installed version of Windows is version 10 or higher)
[!] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at C:\Users\Username\AppData\Local\Android\sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.10+0--11609105)
! Some Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[X] Visual Studio - develop Windows apps
X Visual Studio not installed; this is necessary to develop Windows apps.
Download at https://visualstudio.microsoft.com/downloads/.
Please install the "Desktop development with C++" workload, including all of its default components
[√] Android Studio (version 2024.1)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0--11609105)
[√] VS Code (version 1.96.4)
• VS Code at C:\Users\Username\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.102.0
[√] Connected device (4 available)
• sdk gphone64 x86 64 (mobile) • emulator-5554 • android-x64 • Android 15 (API 35) (emulator)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version
10.0.22631.4602]
• Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.265
• Edge (web) • edge • web-javascript • Microsoft Edge 130.0.2849.46
[√] Network resources
• All expected network resources are available.
```
</details>
|
waiting for customer response,in triage
|
low
|
Critical
|
2,798,379,406
|
react-native
|
Crash with SIGSEGV on Android
|
### Description
Recently, Firebase Crashlytics reported the following crash:
Crashed: null pointer dereference #1
SIGSEGV 0x0000000000000008
### Steps to reproduce
I don't know; the crash was only reported through Firebase Crashlytics.
### React Native Version
0.73.1
### Affected Platforms
Runtime - Android
### Output of `npx react-native info`
```text
System:
OS: macOS 14.3.1
CPU: (8) arm64 Apple M1
Memory: 115.33 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 23.4.0
path: /opt/homebrew/bin/node
Yarn:
version: 3.6.4
path: /opt/homebrew/bin/yarn
npm:
version: 10.2.4
path: /usr/local/bin/npm
Watchman:
version: 2024.12.02.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.16.2
path: /Users/sotatek/.rbenv/shims/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.2
- iOS 17.2
- macOS 14.2
- tvOS 17.2
- watchOS 10.2
Android SDK:
API Levels:
- "31"
- "33"
- "33"
- "33"
- "34"
Build Tools:
- 30.0.3
- 33.0.1
- 34.0.0
System Images:
- android-29 | Google APIs ARM 64 v8a
- android-29 | Google Play ARM 64 v8a
- android-34 | Google APIs ARM 64 v8a
- android-UpsideDownCakePrivacySandbox | Google Play ARM 64 v8a
Android NDK: Not Found
IDEs:
Android Studio: 2023.1 AI-231.9392.1.2311.11330709
Xcode:
version: 15.1/15C65
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.10
path: /usr/bin/javac
Ruby:
version: 3.3.6
path: /Users/sotatek/.rbenv/shims/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.2.0
wanted: 18.2.0
react-native:
installed: 0.73.1
wanted: 0.73.1
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: false
iOS:
hermesEnabled: true
newArchEnabled: false
```
### Stacktrace or Logs
```text
Crashed: null pointer dereference #1
SIGSEGV 0x0000000000000008
0 libfbjni.so (Missing BuildId 957a087554a7dd659c148d0560674b789d2844d0)
1 libfbjni.so (Missing BuildId 957a087554a7dd659c148d0560674b789d2844d0)
2 (Missing)
3 libart.so (Missing BuildId c35c9ebf7bb06435e4b31977d87bd5d5)
4 libart.so (Missing BuildId c35c9ebf7bb06435e4b31977d87bd5d5)
5 libart.so (Missing BuildId c35c9ebf7bb06435e4b31977d87bd5d5)
6 libart.so (Missing BuildId c35c9ebf7bb06435e4b31977d87bd5d5)
7 libart.so (Missing BuildId c35c9ebf7bb06435e4b31977d87bd5d5)
8 libart.so (Missing BuildId c35c9ebf7bb06435e4b31977d87bd5d5)
9 libart.so (Missing BuildId c35c9ebf7bb06435e4b31977d87bd5d5)
10 libart.so (Missing BuildId c35c9ebf7bb06435e4b31977d87bd5d5)
11 libc.so (Missing BuildId 6d463132a910e8cedcfbd9e51b09adce)
12 libc.so (Missing BuildId 6d463132a910e8cedcfbd9e51b09adce)
13 libc.so (Missing BuildId 6d463132a910e8cedcfbd9e51b09adce)
14 libc.so (Missing BuildId 6d463132a910e8cedcfbd9e51b09adce)
15 libart.so (Missing BuildId c35c9ebf7bb06435e4b31977d87bd5d5)
```
### Reproducer
We don't know how to reproduce it.
### Screenshots and Videos
_No response_
|
Platform: Android,Needs: Author Feedback,Needs: Repro,Type: Unsupported Version
|
low
|
Critical
|
2,798,414,768
|
neovim
|
vim.snippet.expand: insertion at wrong place when using virtualedit + tab indentation
|
### Problem
Insertion starts at the wrong place when expanding a snippet when:
1. tabs are used for indentation, and
2. `:set virtualedit=all` is in effect.
Note: this is a different issue from https://github.com/neovim/neovim/issues/30953, which does not refer to indentation / virtualedit but rather the order of text edits.
Original issue: https://github.com/Saghen/blink.cmp/issues/889
### Steps to reproduce
:set virtualedit=all
:set list
Then expand a snippet to the right of a few tab symbols. See the following short video:
https://github.com/user-attachments/assets/ea755013-0814-4584-a585-b1fa7d8ed407
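The steps above can be condensed into a minimal command sequence (the snippet body below is a hypothetical example; any multi-line snippet expanded after literal tab indentation should do):

```vim
" start from nvim --clean in an empty buffer
:set virtualedit=all list noexpandtab
" indent the current line with a few literal <Tab> characters, place the
" cursor after them, then expand a snippet there
:lua vim.snippet.expand("if ${1:cond} then\n\t${2:body}\nend")
" observed: the inserted text and the $1 cursor position land to the left
" of the tab indentation instead of at the cursor's virtual column
```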
### Expected behavior
The snippet itself is expanded at the correct location. The cursor should be placed at the correct location in the expanded snippet as well.
### Nvim version (nvim -v)
NVIM v0.11.0-dev-1594+g5f527f24f0
### Vim (not Nvim) behaves the same?
No; Vim does not have Lua-based snippet support at all.
### Operating system/version
macOS
### Terminal name/version
kitty 0.38.1
### $TERM environment variable
xterm-kitty
### Installation
build from repo
|
snippet
|
low
|
Minor
|
2,798,433,570
|
react-native
|
A problem occurred evaluating settings 'android'. > Parameter specified as non-null is null: method kotlin.text.Regex.replace, parameter input
|
### Description
A problem occurred evaluating settings 'android'. > Parameter specified as non-null is null: method kotlin.text.Regex.replace, parameter input
settings.gradle file
```
pluginManagement { includeBuild("../node_modules/@react-native/gradle-plugin") }
plugins { id("com.facebook.react.settings") }
extensions.configure(com.facebook.react.ReactSettingsExtension){ ex -> ex.autolinkLibrariesFromCommand() }
rootProject.name = 'Velo'
include ':app'
include ':@react-native-firebase_analytics'
project(':@react-native-firebase_analytics').projectDir = new File(rootProject.projectDir, '../node_modules/@react-native-firebase/analytics/android')
```
### Steps to reproduce
Upgraded React Native from 0.71.7 to 0.75.1 using the React Native Upgrade Helper.
Deleted node_modules, package-lock.json, and the .gradle and build folders.
Running ./gradlew clean then fails with the error above.
### React Native Version
0.75.1
### Affected Platforms
Runtime - Android
### Output of `npx react-native info`
```text
System:
OS: Windows 11 10.0.26100
CPU: (8) x64 11th Gen Intel(R) Core(TM) i5-1155G7 @ 2.50GHz
Memory: 1.56 GB / 11.79 GB
Binaries:
Node:
version: 20.11.0
path: C:\Program Files\nodejs\node.EXE
Yarn: Not Found
npm:
version: 7.24.2
path: C:\Program Files\nodejs\npm.CMD
Watchman:
version: 20250113.015429.0
path: C:\ProgramData\chocolatey\bin\watchman.EXE
SDKs:
Android SDK: Not Found
Windows SDK: Not Found
IDEs:
Android Studio: AI-241.19072.14.2412.12360217
Visual Studio: Not Found
Languages:
Java: 17.0.10
Ruby: Not Found
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.75.4
wanted: ^0.75.1
react-native-windows: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: Not found
newArchEnabled: Not found
```
### Stacktrace or Logs
```text
* Exception is:
org.gradle.api.GradleScriptException: A problem occurred evaluating settings 'android'.
at org.gradle.groovy.scripts.internal.DefaultScriptRunnerFactory$ScriptRunnerImpl.run(DefaultScriptRunnerFactory.java:93)
at org.gradle.configuration.DefaultScriptPluginFactory$ScriptPluginImpl.lambda$apply$0(DefaultScriptPluginFactory.java:137)
at org.gradle.configuration.DefaultScriptTarget.addConfiguration(DefaultScriptTarget.java:74)
at org.gradle.configuration.DefaultScriptPluginFactory$ScriptPluginImpl.apply(DefaultScriptPluginFactory.java:140)
at org.gradle.configuration.BuildOperationScriptPlugin$1.run(BuildOperationScriptPlugin.java:68)
at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:29)
at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:26)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.run(DefaultBuildOperationRunner.java:47)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:68)
at org.gradle.configuration.BuildOperationScriptPlugin.lambda$apply$0(BuildOperationScriptPlugin.java:65)
at org.gradle.internal.code.DefaultUserCodeApplicationContext.apply(DefaultUserCodeApplicationContext.java:43)
at org.gradle.configuration.BuildOperationScriptPlugin.apply(BuildOperationScriptPlugin.java:65)
at org.gradle.initialization.ScriptEvaluatingSettingsProcessor.applySettingsScript(ScriptEvaluatingSettingsProcessor.java:75)
at org.gradle.initialization.ScriptEvaluatingSettingsProcessor.process(ScriptEvaluatingSettingsProcessor.java:68)
at org.gradle.initialization.SettingsEvaluatedCallbackFiringSettingsProcessor.process(SettingsEvaluatedCallbackFiringSettingsProcessor.java:34)
at org.gradle.initialization.RootBuildCacheControllerSettingsProcessor.process(RootBuildCacheControllerSettingsProcessor.java:46)
at org.gradle.initialization.BuildOperationSettingsProcessor$2.call(BuildOperationSettingsProcessor.java:49)
at org.gradle.initialization.BuildOperationSettingsProcessor$2.call(BuildOperationSettingsProcessor.java:46)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:200)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:195)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
at org.gradle.initialization.BuildOperationSettingsProcessor.process(BuildOperationSettingsProcessor.java:46)
at org.gradle.initialization.DefaultSettingsLoader.findSettingsAndLoadIfAppropriate(DefaultSettingsLoader.java:143)
at org.gradle.initialization.DefaultSettingsLoader.findAndLoadSettings(DefaultSettingsLoader.java:63)
at org.gradle.initialization.SettingsAttachingSettingsLoader.findAndLoadSettings(SettingsAttachingSettingsLoader.java:33)
at org.gradle.internal.composite.CommandLineIncludedBuildSettingsLoader.findAndLoadSettings(CommandLineIncludedBuildSettingsLoader.java:35)
at org.gradle.internal.composite.ChildBuildRegisteringSettingsLoader.findAndLoadSettings(ChildBuildRegisteringSettingsLoader.java:44)
at org.gradle.internal.composite.CompositeBuildSettingsLoader.findAndLoadSettings(CompositeBuildSettingsLoader.java:35)
at org.gradle.initialization.InitScriptHandlingSettingsLoader.findAndLoadSettings(InitScriptHandlingSettingsLoader.java:33)
at org.gradle.api.internal.initialization.CacheConfigurationsHandlingSettingsLoader.findAndLoadSettings(CacheConfigurationsHandlingSettingsLoader.java:36)
at org.gradle.initialization.GradlePropertiesHandlingSettingsLoader.findAndLoadSettings(GradlePropertiesHandlingSettingsLoader.java:38)
at org.gradle.initialization.DefaultSettingsPreparer.prepareSettings(DefaultSettingsPreparer.java:31)
at org.gradle.initialization.BuildOperationFiringSettingsPreparer$LoadBuild.doLoadBuild(BuildOperationFiringSettingsPreparer.java:71)
at org.gradle.initialization.BuildOperationFiringSettingsPreparer$LoadBuild.run(BuildOperationFiringSettingsPreparer.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:29)
at org.gradle.internal.operations.DefaultBuildOperationRunner$1.execute(DefaultBuildOperationRunner.java:26)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.run(DefaultBuildOperationRunner.java:47)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:68)
at org.gradle.initialization.BuildOperationFiringSettingsPreparer.prepareSettings(BuildOperationFiringSettingsPreparer.java:54)
at org.gradle.initialization.VintageBuildModelController.lambda$prepareSettings$1(VintageBuildModelController.java:80)
at org.gradle.internal.model.StateTransitionController.lambda$doTransition$14(StateTransitionController.java:255)
at org.gradle.internal.model.StateTransitionController.doTransition(StateTransitionController.java:266)
at org.gradle.internal.model.StateTransitionController.doTransition(StateTransitionController.java:254)
at org.gradle.internal.model.StateTransitionController.lambda$transitionIfNotPreviously$11(StateTransitionController.java:213)
at org.gradle.internal.work.DefaultSynchronizer.withLock(DefaultSynchronizer.java:34)
at org.gradle.internal.model.StateTransitionController.transitionIfNotPreviously(StateTransitionController.java:209)
at org.gradle.initialization.VintageBuildModelController.prepareSettings(VintageBuildModelController.java:80)
at org.gradle.initialization.VintageBuildModelController.prepareToScheduleTasks(VintageBuildModelController.java:70)
at org.gradle.internal.build.DefaultBuildLifecycleController.lambda$prepareToScheduleTasks$6(DefaultBuildLifecycleController.java:175)
at org.gradle.internal.model.StateTransitionController.lambda$doTransition$14(StateTransitionController.java:255)
at org.gradle.internal.model.StateTransitionController.doTransition(StateTransitionController.java:266)
at org.gradle.internal.model.StateTransitionController.doTransition(StateTransitionController.java:254)
at org.gradle.internal.model.StateTransitionController.lambda$maybeTransition$9(StateTransitionController.java:190)
at org.gradle.internal.work.DefaultSynchronizer.withLock(DefaultSynchronizer.java:34)
at org.gradle.internal.model.StateTransitionController.maybeTransition(StateTransitionController.java:186)
at org.gradle.internal.build.DefaultBuildLifecycleController.prepareToScheduleTasks(DefaultBuildLifecycleController.java:173)
at org.gradle.internal.buildtree.DefaultBuildTreeWorkPreparer.scheduleRequestedTasks(DefaultBuildTreeWorkPreparer.java:36)
at org.gradle.configurationcache.VintageBuildTreeWorkController$scheduleAndRunRequestedTasks$1.apply(VintageBuildTreeWorkController.kt:36)
at org.gradle.configurationcache.VintageBuildTreeWorkController$scheduleAndRunRequestedTasks$1.apply(VintageBuildTreeWorkController.kt:35)
at org.gradle.composite.internal.DefaultIncludedBuildTaskGraph.withNewWorkGraph(DefaultIncludedBuildTaskGraph.java:112)
at org.gradle.configurationcache.VintageBuildTreeWorkController.scheduleAndRunRequestedTasks(VintageBuildTreeWorkController.kt:35)
at org.gradle.internal.buildtree.DefaultBuildTreeLifecycleController.lambda$scheduleAndRunTasks$1(DefaultBuildTreeLifecycleController.java:77)
at org.gradle.internal.buildtree.DefaultBuildTreeLifecycleController.lambda$runBuild$4(DefaultBuildTreeLifecycleController.java:120)
at org.gradle.internal.model.StateTransitionController.lambda$transition$6(StateTransitionController.java:169)
at org.gradle.internal.model.StateTransitionController.doTransition(StateTransitionController.java:266)
at org.gradle.internal.model.StateTransitionController.lambda$transition$7(StateTransitionController.java:169)
at org.gradle.internal.work.DefaultSynchronizer.withLock(DefaultSynchronizer.java:44)
at org.gradle.internal.model.StateTransitionController.transition(StateTransitionController.java:169)
at org.gradle.internal.buildtree.DefaultBuildTreeLifecycleController.runBuild(DefaultBuildTreeLifecycleController.java:117)
at org.gradle.internal.buildtree.DefaultBuildTreeLifecycleController.scheduleAndRunTasks(DefaultBuildTreeLifecycleController.java:77)
at org.gradle.internal.buildtree.DefaultBuildTreeLifecycleController.scheduleAndRunTasks(DefaultBuildTreeLifecycleController.java:72)
at org.gradle.tooling.internal.provider.ExecuteBuildActionRunner.run(ExecuteBuildActionRunner.java:31)
at org.gradle.launcher.exec.ChainingBuildActionRunner.run(ChainingBuildActionRunner.java:35)
at org.gradle.internal.buildtree.ProblemReportingBuildActionRunner.run(ProblemReportingBuildActionRunner.java:49)
at org.gradle.launcher.exec.BuildOutcomeReportingBuildActionRunner.run(BuildOutcomeReportingBuildActionRunner.java:65)
at org.gradle.tooling.internal.provider.FileSystemWatchingBuildActionRunner.run(FileSystemWatchingBuildActionRunner.java:140)
at org.gradle.launcher.exec.BuildCompletionNotifyingBuildActionRunner.run(BuildCompletionNotifyingBuildActionRunner.java:41)
at org.gradle.launcher.exec.RootBuildLifecycleBuildActionExecutor.lambda$execute$0(RootBuildLifecycleBuildActionExecutor.java:40)
at org.gradle.composite.internal.DefaultRootBuildState.run(DefaultRootBuildState.java:123)
at org.gradle.launcher.exec.RootBuildLifecycleBuildActionExecutor.execute(RootBuildLifecycleBuildActionExecutor.java:40)
at org.gradle.internal.buildtree.InitDeprecationLoggingActionExecutor.execute(InitDeprecationLoggingActionExecutor.java:66)
at org.gradle.internal.buildtree.InitProblems.execute(InitProblems.java:36)
at org.gradle.internal.buildtree.DefaultBuildTreeContext.execute(DefaultBuildTreeContext.java:40)
at org.gradle.launcher.exec.BuildTreeLifecycleBuildActionExecutor.lambda$execute$0(BuildTreeLifecycleBuildActionExecutor.java:71)
at org.gradle.internal.buildtree.BuildTreeState.run(BuildTreeState.java:60)
at org.gradle.launcher.exec.BuildTreeLifecycleBuildActionExecutor.execute(BuildTreeLifecycleBuildActionExecutor.java:71)
at org.gradle.launcher.exec.RunAsBuildOperationBuildActionExecutor$3.call(RunAsBuildOperationBuildActionExecutor.java:61)
at org.gradle.launcher.exec.RunAsBuildOperationBuildActionExecutor$3.call(RunAsBuildOperationBuildActionExecutor.java:57)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:200)
at org.gradle.internal.operations.DefaultBuildOperationRunner$CallableBuildOperationWorker.execute(DefaultBuildOperationRunner.java:195)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:66)
at org.gradle.internal.operations.DefaultBuildOperationRunner$2.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:157)
at org.gradle.internal.operations.DefaultBuildOperationRunner.execute(DefaultBuildOperationRunner.java:59)
at org.gradle.internal.operations.DefaultBuildOperationRunner.call(DefaultBuildOperationRunner.java:53)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:73)
at org.gradle.launcher.exec.RunAsBuildOperationBuildActionExecutor.execute(RunAsBuildOperationBuildActionExecutor.java:57)
at org.gradle.launcher.exec.RunAsWorkerThreadBuildActionExecutor.lambda$execute$0(RunAsWorkerThreadBuildActionExecutor.java:36)
at org.gradle.internal.work.DefaultWorkerLeaseService.withLocks(DefaultWorkerLeaseService.java:264)
at org.gradle.internal.work.DefaultWorkerLeaseService.runAsWorkerThread(DefaultWorkerLeaseService.java:128)
at org.gradle.launcher.exec.RunAsWorkerThreadBuildActionExecutor.execute(RunAsWorkerThreadBuildActionExecutor.java:36)
at org.gradle.tooling.internal.provider.continuous.ContinuousBuildActionExecutor.execute(ContinuousBuildActionExecutor.java:110)
at org.gradle.tooling.internal.provider.SubscribableBuildActionExecutor.execute(SubscribableBuildActionExecutor.java:64)
at org.gradle.internal.session.DefaultBuildSessionContext.execute(DefaultBuildSessionContext.java:46)
at org.gradle.tooling.internal.provider.BuildSessionLifecycleBuildActionExecuter$ActionImpl.apply(BuildSessionLifecycleBuildActionExecuter.java:92)
at org.gradle.tooling.internal.provider.BuildSessionLifecycleBuildActionExecuter$ActionImpl.apply(BuildSessionLifecycleBuildActionExecuter.java:80)
at org.gradle.internal.session.BuildSessionState.run(BuildSessionState.java:71)
at org.gradle.tooling.internal.provider.BuildSessionLifecycleBuildActionExecuter.execute(BuildSessionLifecycleBuildActionExecuter.java:62)
at org.gradle.tooling.internal.provider.BuildSessionLifecycleBuildActionExecuter.execute(BuildSessionLifecycleBuildActionExecuter.java:41)
at org.gradle.tooling.internal.provider.StartParamsValidatingActionExecuter.execute(StartParamsValidatingActionExecuter.java:64)
at org.gradle.tooling.internal.provider.StartParamsValidatingActionExecuter.execute(StartParamsValidatingActionExecuter.java:32)
at org.gradle.tooling.internal.provider.SessionFailureReportingActionExecuter.execute(SessionFailureReportingActionExecuter.java:51)
at org.gradle.tooling.internal.provider.SessionFailureReportingActionExecuter.execute(SessionFailureReportingActionExecuter.java:39)
at org.gradle.tooling.internal.provider.SetupLoggingActionExecuter.execute(SetupLoggingActionExecuter.java:47)
at org.gradle.tooling.internal.provider.SetupLoggingActionExecuter.execute(SetupLoggingActionExecuter.java:31)
at org.gradle.launcher.daemon.server.exec.ExecuteBuild.doBuild(ExecuteBuild.java:65)
at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:37)
at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)
at org.gradle.launcher.daemon.server.exec.WatchForDisconnection.execute(WatchForDisconnection.java:39)
at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)
at org.gradle.launcher.daemon.server.exec.ResetDeprecationLogger.execute(ResetDeprecationLogger.java:29)
at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)
at org.gradle.launcher.daemon.server.exec.RequestStopIfSingleUsedDaemon.execute(RequestStopIfSingleUsedDaemon.java:35)
at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)
at org.gradle.launcher.daemon.server.exec.ForwardClientInput$2.create(ForwardClientInput.java:78)
at org.gradle.launcher.daemon.server.exec.ForwardClientInput$2.create(ForwardClientInput.java:75)
at org.gradle.util.internal.Swapper.swap(Swapper.java:38)
at org.gradle.launcher.daemon.server.exec.ForwardClientInput.execute(ForwardClientInput.java:75)
at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)
at org.gradle.launcher.daemon.server.exec.LogAndCheckHealth.execute(LogAndCheckHealth.java:64)
at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)
at org.gradle.launcher.daemon.server.exec.LogToClient.doBuild(LogToClient.java:63)
at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:37)
at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)
at org.gradle.launcher.daemon.server.exec.EstablishBuildEnvironment.doBuild(EstablishBuildEnvironment.java:84)
at org.gradle.launcher.daemon.server.exec.BuildCommandOnly.execute(BuildCommandOnly.java:37)
at org.gradle.launcher.daemon.server.api.DaemonCommandExecution.proceed(DaemonCommandExecution.java:104)
at org.gradle.launcher.daemon.server.exec.StartBuildOrRespondWithBusy$1.run(StartBuildOrRespondWithBusy.java:52)
at org.gradle.launcher.daemon.server.DaemonStateCoordinator$1.run(DaemonStateCoordinator.java:297)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
at org.gradle.internal.concurrent.AbstractManagedExecutor$1.run(AbstractManagedExecutor.java:47)
Caused by: java.lang.NullPointerException: Parameter specified as non-null is null: method kotlin.text.Regex.replace, parameter input
at kotlin.text.Regex.replace(Regex.kt)
at com.facebook.react.model.ModelAutolinkingDependenciesJson.getNameCleansed(ModelAutolinkingDependenciesJson.kt:17)
at com.facebook.react.ReactSettingsExtension$Companion.getLibrariesToAutolink$settings_plugin(ReactSettingsExtension.kt:204)
at com.facebook.react.ReactSettingsExtension.autolinkLibrariesFromCommand(ReactSettingsExtension.kt:73)
at com.facebook.react.ReactSettingsExtension.autolinkLibrariesFromCommand$default(ReactSettingsExtension.kt:48)
at com.facebook.react.ReactSettingsExtension.autolinkLibrariesFromCommand(ReactSettingsExtension.kt)
at com.facebook.react.ReactSettingsExtension$autolinkLibrariesFromCommand.call(Unknown Source)
at settings_dsdt5v14ydjmeidqy1yu57cqy$_run_closure1.doCall$original(C:\Users\RiyaPremarajan\kuber\velocity-app\android\settings.gradle:3)
at settings_dsdt5v14ydjmeidqy1yu57cqy$_run_closure1.doCall(C:\Users\RiyaPremarajan\kuber\velocity-app\android\settings.gradle)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at jdk.proxy1/jdk.proxy1.$Proxy115.execute(Unknown Source)
at org.gradle.internal.extensibility.ExtensionsStorage$ExtensionHolder.configure(ExtensionsStorage.java:177)
at org.gradle.internal.extensibility.ExtensionsStorage.configureExtension(ExtensionsStorage.java:70)
at org.gradle.internal.extensibility.DefaultConvention.configure(DefaultConvention.java:202)
at org.gradle.internal.extensibility.DefaultConvention.configure(DefaultConvention.java:197)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at settings_dsdt5v14ydjmeidqy1yu57cqy.run(C:\Users\RiyaPremarajan\kuber\velocity-app\android\settings.gradle:3)
at org.gradle.groovy.scripts.internal.DefaultScriptRunnerFactory$ScriptRunnerImpl.run(DefaultScriptRunnerFactory.java:91)
... 153 more
```
### Reproducer
Cannot be shared due to confidentiality
### Screenshots and Videos
_No response_
|
Platform: Android,API: Settings,Needs: Author Feedback,Needs: Repro,Newer Patch Available
|
low
|
Critical
|
2,798,476,989
|
rust
|
Using a bare trait as a field type in a struct gives subpar suggestion
|
### Code
```Rust
trait Trait {}
struct Foo {
a: Trait,
b: u32,
}
```
### Current output
```Shell
error[E0782]: expected a type, found a trait
--> src/lib.rs:4:8
|
4 | a: Trait,
| ^^^^^
|
help: you can add the `dyn` keyword if you want a trait object
|
4 | a: dyn Trait,
| +++
For more information about this error, try `rustc --explain E0782`.
error: could not compile `trait-in-struct` (lib) due to 1 previous error
```
### Desired output
```Shell
error[E0782]: expected a type, found a trait
--> src/lib.rs:4:8
|
4 | a: Trait,
| ^^^^^
|
help: you might be missing a type parameter
|
3 | struct Foo<T: Trait> {
| +++
4 | a: T,
| ^
For more information about this error, try `rustc --explain E0782`.
error: could not compile `playground` (lib) due to 1 previous error
```
### Rationale and extra context
Currently this just suggests using `dyn`, which only works if the field is the last field in the struct (so it is allowed to be unsized). However, the suggestion is also given when the field is not in the last position, so applying the fix fails.
For most cases I think it makes more sense to either add a generic parameter or `Box` the value.
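To illustrate the rationale (my own sketch, reusing the issue's `Trait`): both shapes a better suggestion would point at compile, while a bare `dyn` field in a non-final position cannot:

```rust
trait Trait {}

struct Unit;
impl Trait for Unit {}

// Option 1: the missing type parameter the desired output suggests.
struct Generic<T: Trait> {
    a: T,
    b: u32,
}

// Option 2: box the trait object so the field is Sized in any position.
struct Boxed {
    a: Box<dyn Trait>,
    b: u32,
}

// By contrast, `struct Broken { a: dyn Trait, b: u32 }` is rejected,
// because only the last field of a struct may be unsized.

fn main() {
    let g = Generic { a: Unit, b: 1 };
    let x = Boxed { a: Box::new(Unit), b: 2 };
    assert_eq!(g.b + x.b, 3);
}
```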
### Other cases
```Rust
```
### Rust Version
```Shell
rustc 1.86.0-nightly (8361aef0d 2025-01-14)
binary: rustc
commit-hash: 8361aef0d7c29b1501a316a208ed84cd8a2ae5da
commit-date: 2025-01-14
host: x86_64-unknown-linux-gnu
release: 1.86.0-nightly
LLVM version: 19.1.6
```
### Anything else?
_No response_
|
A-diagnostics,T-compiler
|
low
|
Critical
|
2,798,521,655
|
tauri
|
[bug] run `pnpm tauri ios dev` failed, command ["xcodebuild"] exited with code 65
|
### Describe the bug
Running `pnpm tauri ios dev` fails with the following output:
````
error: failed to run custom build command for `tauri v2.2.3`
Caused by:
process didn't exit successfully: `/Users/echo/XcodeProject/Personal/tauri-app/src-tauri/target/debug/build/tauri-037514aa3ee3496c/build-script-build` (exit status: 101)
--- stdout
cargo:rustc-check-cfg=cfg(custom_protocol)
cargo:rustc-check-cfg=cfg(dev)
cargo:rustc-cfg=dev
cargo:dev=true
cargo:rustc-check-cfg=cfg(desktop)
cargo:rustc-check-cfg=cfg(mobile)
cargo:rustc-cfg=mobile
--- stderr
thread 'main' panicked at /Users/echo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/swift-rs-1.0.7/src-rs/build.rs:42:56:
called `Result::unwrap()` on an `Err` value: Error("expected ident", line: 1, column: 2)
stack backtrace:
0: rust_begin_unwind
at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/std/src/panicking.rs:665:5
1: core::panicking::panic_fmt
at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/core/src/panicking.rs:74:14
2: core::result::unwrap_failed
at /rustc/90b35a6239c3d8bdabc530a6a0816f7ff89a0aaf/library/core/src/result.rs:1700:5
3: core::result::Result<T,E>::unwrap
at /Users/echo/.rustup/toolchains/1.83.0-aarch64-apple-darwin/lib/rustlib/src/rust/library/core/src/result.rs:1104:23
4: swift_rs::build::SwiftEnv::new
at /Users/echo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/swift-rs-1.0.7/src-rs/build.rs:42:9
5: swift_rs::build::SwiftLinker::link
at /Users/echo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/swift-rs-1.0.7/src-rs/build.rs:215:25
6: tauri_utils::build::link_swift_library
at /Users/echo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tauri-utils-2.1.1/src/build.rs:25:3
7: tauri_utils::build::link_apple_library
at /Users/echo/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tauri-utils-2.1.1/src/build.rs:11:5
8: build_script_build::main
at ./build.rs:328:7
9: core::ops::function::FnOnce::call_once
at /Users/echo/.rustup/toolchains/1.83.0-aarch64-apple-darwin/lib/rustlib/src/rust/library/core/src/ops/function.rs:250:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
warning: build failed, waiting for other jobs to finish...
Failed to run `cargo build`: command ["cargo", "build", "--package", "tauri-app", "--manifest-path", "/Users/echo/XcodeProject/Personal/tauri-app/src-tauri/Cargo.toml", "--target", "aarch64-apple-ios-sim", "--features", "tauri/rustls-tls", "--lib", "--no-default-features"] exited with code 101
Error Failed to run `cargo build`: command ["cargo", "build", "--package", "tauri-app", "--manifest-path", "/Users/echo/XcodeProject/Personal/tauri-app/src-tauri/Cargo.toml", "--target", "aarch64-apple-ios-sim", "--features", "tauri/rustls-tls", "--lib", "--no-default-features"] exited with code 101
ELIFECYCLE Command failed with exit code 1.
Command PhaseScriptExecution failed with a nonzero exit code
note: Run script build phase 'Build Rust Code' will be run during every build because the option to run the script phase "Based on dependency analysis" is unchecked. (in target 'tauri-app_iOS' from project 'tauri-app')
** BUILD FAILED **
The following build commands failed:
PhaseScriptExecution Build\ Rust\ Code /Users/echo/Library/Developer/Xcode/DerivedData/tauri-app-anhhqqpuuguyqcdgrusyrcxkfzrl/Build/Intermediates.noindex/tauri-app.build/debug-iphonesimulator/tauri-app_iOS.build/Script-4CFF21EFA1FFE4D2AB9D6998.sh (in target 'tauri-app_iOS' from project 'tauri-app')
Building workspace tauri-app with scheme tauri-app_iOS and configuration debug
(2 failures)
command ["xcodebuild"] exited with code 65
Error command ["xcodebuild"] exited with code 65
ELIFECYCLE Command failed with exit code 1.
````
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
pnpm tauri info
> tauri-app@0.1.0 tauri /Users/echo/XcodeProject/Personal/tauri-app
> tauri "info"
[✔] Environment
- OS: Mac OS 15.0.0 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.83.0 (90b35a623 2024-11-26)
✔ cargo: 1.83.0 (5ffbef321 2024-10-29)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: 1.83.0-aarch64-apple-darwin (default)
- node: 23.6.0
- pnpm: 9.15.4
- yarn: 1.22.22
- npm: 10.9.2
[-] Packages
- tauri 🦀: 2.2.3
- tauri-build 🦀: 2.0.5
- wry 🦀: 0.48.1
- tao 🦀: 0.31.1
- @tauri-apps/api : 2.2.0
- @tauri-apps/cli : 2.2.5
[-] Plugins
- tauri-plugin-opener 🦀: 2.2.5
- @tauri-apps/plugin-opener : 2.2.5
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: Vue.js
- bundler: Vite
[-] iOS
- Developer Teams: CLOSELI TECHNOLOGY HOLDING LIMITED (ID: X4A7JZGSRG)
```
### Stack trace
```text
```
### Additional context
_No response_
|
type: bug,status: needs triage,platform: iOS
|
low
|
Critical
|
2,798,525,235
|
godot
|
4.4beta1 - Jolt Physics/Physics Interpolation - Skeleton Ragdoll diverges mesh from Physical Bones
|
### Tested versions
Reproducible in 4.4-beta1, no other versions as Jolt is new in 4.4
### System information
Godot v4.4.beta1.mono - Windows 11 (build 26100) - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4080 (NVIDIA; 32.0.15.6590) - Intel(R) Core(TM) i7-14700K (28 threads)
### Issue description

Happy to provide an MRP if necessary.
Works correctly when using GodotPhysics instead of Jolt.
### Steps to reproduce
https://docs.godotengine.org/en/stable/tutorials/physics/ragdoll_system.html
Follow this tutorial
Enable Jolt Physics in Project Settings
Enable Physics Interpolation
Observe the mesh glitching out and away from the physical bones of the skeleton, after activating the physical bone simulation
### Minimal reproduction project (MRP)
Will create if necessary
|
bug,topic:physics,needs testing,topic:3d
|
low
|
Major
|
2,798,533,109
|
transformers
|
Auto-resume from checkpoint throws error if last checkpoint is incomplete
|
### System Info
- `transformers` version: 4.45.2
- Platform: Linux-5.14.0-284.73.1.el9_2.x86_64-x86_64-with-glibc2.34
- Python version: 3.11.9
- Huggingface_hub version: 0.26.3
- Safetensors version: 0.4.5
- Accelerate version: 1.0.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: FSDP
- Using GPU in script?: yes
- GPU type: NVIDIA A100-SXM4-80GB
### Who can help?
Trainer: @muellerzr @SunMarc
Currently, the [_save_checkpoint()](https://github.com/huggingface/transformers/blob/b2f2977533445c4f62bf58e10b1360e6856e78ce/src/transformers/trainer.py#L3197) method saves the model, optimizer (optionally) and finally the Trainer state.
The [resume_from_checkpoint()](https://github.com/huggingface/transformers/blob/b2f2977533445c4f62bf58e10b1360e6856e78ce/src/transformers/trainer.py#L2070) function gets the checkpoint directory from the `get_last_checkpoint` function and loads the model and trainer state.
If training stops (or ends abruptly) in the middle of checkpointing, the checkpoint directory (checkpoint-xx) is created but some of its files are missing. Auto-resume still picks that directory, and loading it can then throw an error. For example, if the trainer state was not yet written, the `TrainerState.load_from_json` call raises a FileNotFoundError and training cannot resume. We then have to manually delete the last directory so that the second-to-last one is used (a PytorchJob, for instance, will automatically restart the pod on failure, but because of this issue it cannot resume without manual intervention).
We expect resume from checkpoint to pick the correct/complete checkpoint directory instead of throwing an error.
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run accelerate launch (this is a sample command with small run time and higher checkpointing time):
```
accelerate launch --use_fsdp --fsdp_auto_wrap_policy=TRANSFORMER_BASED_WRAP --fsdp_forward_prefetch=false --fsdp_offload_params=false --fsdp_sharding_strategy=FULL_SHARD --fsdp_state_dict_type=FULL_STATE_DICT --fsdp_cpu_ram_efficient_loading=true --fsdp_sync_module_states=true --rdzv_backend=static --same_network --num_processes=4 --num_machines=${WORLD_SIZE} --mixed_precision=no --dynamo_backend=no --machine_rank=${RANK} --main_process_ip=${MASTER_ADDR} --main_process_port=${MASTER_PORT} -m tuning.sft_trainer --model_name_or_path bigscience/bloom-560m --training_data_path input.json --output_dir output_dir --packing false --response_template '\n### Response:' --dataset_text_field output --num_train_epochs 6.0 --max_seq_length 4096 --per_device_train_batch_size 30 --save_strategy epoch --logging_steps 1 --learning_rate 1e-5 --use_flash_attn false --validation_data_path validation.json --metric_for_best_model "loss" --load_best_model_at_end True --logging_strategy "steps" --per_device_eval_batch_size 10 --evaluation_strategy "epoch"
```
When logs show that it is writing the checkpoint, end the process with Ctrl-C. Then, again run the same command where it will try to resume from checkpoint. It will throw an error such as `FileNotFoundError: [Errno 2] No such file or directory: 'output_dir/checkpoint-25/trainer_state.json'` depending on which file is missing.
### Expected behavior
If the last checkpoint is incomplete or not written fully, we expect training to resume from the checkpoint before instead of throwing an error.
I have raised a [PR](https://github.com/huggingface/transformers/pull/35580) with a fix, which checks if model files and trainer state are available before choosing the directory for resuming from checkpoint.
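The idea in the linked PR can be sketched as a standalone helper (the function name here is hypothetical, and the merged behavior may check more files): scan `checkpoint-*` directories from newest to oldest and skip any that are missing `trainer_state.json`.

```python
import os
import re


def get_last_complete_checkpoint(output_dir):
    """Return the newest checkpoint-* directory that contains
    trainer_state.json, skipping directories left incomplete by an
    interrupted save; return None if no complete checkpoint exists."""
    pattern = re.compile(r"^checkpoint-(\d+)$")
    steps = []
    for name in os.listdir(output_dir):
        m = pattern.match(name)
        if m and os.path.isdir(os.path.join(output_dir, name)):
            steps.append((int(m.group(1)), name))
    # Newest step first; fall back to older checkpoints until one is complete.
    for _, name in sorted(steps, reverse=True):
        candidate = os.path.join(output_dir, name)
        if os.path.isfile(os.path.join(candidate, "trainer_state.json")):
            return candidate
    return None
```

With this, an interrupted `checkpoint-25` missing its trainer state would be skipped in favor of the previous complete checkpoint, so auto-resume needs no manual cleanup.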
|
bug
|
low
|
Critical
|
2,798,542,501
|
ollama
|
Network issues with pulling model from ollama
|
### What is the issue?
When I try to pull a model from Ollama through a proxy, whether a large model or a small one (~2 GB), the download keeps restarting: progress goes from e.g. 120 MB up to 160 MB, drops back to 100 MB, repeats, and finally fails with "max retries exceeded". This issue has persisted for about a week and it worked well before. I have already tried different networks, but it still does not work. :(
Thanks for your assistance!
### OS
macOS
### GPU
Apple
### CPU
Apple
### Ollama version
0.5.7
|
bug
|
low
|
Major
|
2,798,636,562
|
pytorch
|
Some FlexAttention learned bias bugs/limitations
|
### 🐛 Describe the bug
## Ex 1
```Python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask, create_mask

torch.set_default_device('cuda')
flex_attention = torch.compile(flex_attention, dynamic=False)

result = torch.randn((), requires_grad=True)

def score_mod(score, b, h, q, kv):
    return score * result

S = 8192
torch.manual_seed(0)
q, k, v = [torch.randn(1, 1, S, 64, dtype=torch.float16, requires_grad=True) for _ in range(3)]
flex_attention(q, k, v, score_mod=score_mod).sum().backward()
```
```Shell
File "/home/chilli/local/pytorch/torch/fx/interpreter.py", line 230, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_inductor/graph.py", line 1147, in call_function
raise LoweringException(e, target, args, kwargs).with_traceback(
File "/home/chilli/local/pytorch/torch/_inductor/graph.py", line 1137, in call_function
out = lowerings[target](*args, **kwargs) # type: ignore[index]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_inductor/lowering.py", line 452, in wrapped
out = decomp_fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_inductor/kernel/flex_attention.py", line 2226, in flex_attention_backward
joint_outputs = process_joint_outputs(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_inductor/kernel/flex_attention.py", line 2103, in process_joint_outputs
grads_out = [get_out(x) for x in other_grads]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_inductor/kernel/flex_attention.py", line 2103, in <listcomp>
grads_out = [get_out(x) for x in other_grads]
^^^^^^^^^^
File "/home/chilli/local/pytorch/torch/_inductor/kernel/flex_attention.py", line 2100, in get_out
assert buf.name is not None
^^^^^^^^^^^^^^^^^^^^
torch._inductor.exc.LoweringException: AssertionError:
target: flex_attention_backward
args[0]: TensorBox(StorageBox(
InputBuffer(name='primals_1', layout=FixedLayout('cuda:0', torch.float16, size=[1, 1, 8192, 64], stride=[524288, 524288, 64, 1]))
))
args[1]: TensorBox(StorageBox(
InputBuffer(name='primals_2', layout=FixedLayout('cuda:0', torch.float16, size=[1, 1, 8192, 64], stride=[524288, 524288, 64, 1]))
))
args[2]: TensorBox(StorageBox(
```
## Ex 2
```Python
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask, create_mask

torch.set_default_device('cuda')
flex_attention = torch.compile(flex_attention, dynamic=False)

result = torch.randn((1,), requires_grad=True)

def score_mod(score, b, h, q, kv):
    return score * result[score.new_zeros((), dtype=torch.int)]

S = 8192
torch.manual_seed(0)
q, k, v = [torch.randn(1, 1, S, 64, dtype=torch.float16, requires_grad=True) for _ in range(3)]
flex_attention(q, k, v, score_mod=score_mod).sum().backward()
```
```Shell
Traceback (most recent call last):
File "/home/chilli/.conda/envs/py311/lib/python3.11/site-packages/triton/language/core.py", line 35, in wrapper
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/chilli/.conda/envs/py311/lib/python3.11/site-packages/triton/language/core.py", line 1268, in broadcast_to
return semantic.broadcast_impl_shape(input, shape, _builder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/chilli/.conda/envs/py311/lib/python3.11/site-packages/triton/language/semantic.py", line 732, in broadcast_impl_shape
raise ValueError(f"Cannot broadcast, rank mismatch: {src_shape}, {shape}")
ValueError: Cannot broadcast, rank mismatch: [1], [64, 64]
The above exception was the direct cause of the following exception:
triton.compiler.errors.CompilationError: at 96:33:
if CHECK_BLOCK_BOUNDARY:
grad_scores = tl.where(offs_n2[None, :] < KV_LEN, grad_scores, 0.0)
# ~~~~~~~~~~~~~~~~~~~ Apply other buffer grad writes ~~~~~~~~~~~~~
if WRITE_DQ:
scatter_mask = offs_m2[:, None] < Q_LEN and offs_n2[None, :] < KV_LEN
tmp12 = tl.full([1], 0, tl.int32)
tmp13 = (ds)
tmp14 = (pre_mod_scores)
tmp15 = tmp13 * tmp14
tmp16 = tmp15.to(tl.float32)
tl.atomic_add(in_ptr17 + tl.broadcast_to(tmp12, tmp16.shape), tmp16, scatter_mask, sem='relaxed')
^
The above exception was the direct cause of the following exception:
triton.compiler.errors.CompilationError: at 79:17:
dq = bwd_dq_block_mn(
arg_Q, arg_K, arg_V, arg_LSE, arg_DELTA, arg_DO, arg_DQ, arg_DV, arg_KV_NUM_BLKS, arg_KV_IDX, arg_Q_NUM_BLKS, arg_Q_IDX, arg_FULL_KV_NUM_BLKS, arg_FULL_KV_IDX, arg_FULL_Q_NUM_BLKS, arg_FULL_Q_IDX, in_ptr16, in_ptr17, out_ptr0,
dq, q, kT_ptrs, vT_ptrs, do, Di, lse, Q_LEN, KV_LEN,
off_z, off_hq, offs_m2, offs_n2,
stride_kn, stride_kd, stride_vn, stride_vd,
kv_indices, sparse_kv_num_blocks,
MATMUL_PRECISION, RCP_LN2,
IS_FULL_BLOCKS, CHECK_BLOCK_BOUNDARY=True,
)
```
### Versions
N/A
cc @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @drisspg @yanboliang @BoyuanFeng
|
triaged,oncall: pt2,module: higher order operators,module: pt2-dispatcher,module: flex attention
|
low
|
Critical
|
2,798,699,087
|
langchain
|
[ChatLiteLLM] litellm.UnsupportedParamsError: VertexAI doesn't support tool_choice=any. Supported tool_choice values=['auto', 'required', json object]
|
### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [x] I am sure that this is a bug in LangChain rather than my code.
- [x] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from typing import Optional

from pydantic import BaseModel, Field
from langchain_community.chat_models import ChatLiteLLM
from langchain_google_genai import ChatGoogleGenerativeAI


class Joke(BaseModel):
    """Joke to tell user."""

    setup: str = Field(description="The setup of the joke")
    punchline: str = Field(description="The punchline to the joke")
    rating: Optional[int] = Field(
        default=None, description="How funny the joke is, from 1 to 10"
    )


llm_litellm = ChatLiteLLM(
    model="gemini/gemini-2.0-flash-exp",
    api_key=os.getenv("API_KEY"),
)
structured_litellm = llm_litellm.with_structured_output(Joke, include_raw=True)

llm_gemini = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash-exp",
    api_key=os.getenv("API_KEY"),
)
structured_gemini = llm_gemini.with_structured_output(Joke, include_raw=True)

try:
    response_litellm = structured_litellm.invoke("Tell me a joke about cats")
    print(response_litellm)
except Exception as e:
    print(f"An error occurred for litellm: {str(e)}")
print()

try:
    response_gemini = structured_gemini.invoke("Tell me a joke about cats")
    print(response_gemini)
except Exception as e:
    print(f"An error occurred for gemini: {str(e)}")
print()
```
### Error Message and Stack Trace (if applicable)
An error occured for litellm: litellm.UnsupportedParamsError: VertexAI doesn't support tool_choice=any. Supported tool_choice values=['auto', 'required', json object]. To drop it from the call, set `litellm.drop_params = True.
{'raw': AIMessage(content='', additional_kwargs={'function_call': {'name': 'Joke', 'arguments': '{"punchline": "They\'re always feline good!", "setup": "Why don\'t cats play poker in the wild?"}'}}, response_metadata={'prompt_feedback': {'block_reason': 0, 'safety_ratings': []}, 'finish_reason': 'STOP', 'safety_ratings': [{'category': 'HARM_CATEGORY_HATE_SPEECH', 'probability': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_DANGEROUS_CONTENT', 'probability': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_HARASSMENT', 'probability': 'NEGLIGIBLE', 'blocked': False}, {'category': 'HARM_CATEGORY_SEXUALLY_EXPLICIT', 'probability': 'NEGLIGIBLE', 'blocked': False}]}, id='run-1adbaf9b-ea9f-47f4-bbce-2fd6e93b9a65-0', tool_calls=[{'name': 'Joke', 'args': {'punchline': "They're always feline good!", 'setup': "Why don't cats play poker in the wild?"}, 'id': 'ad13c4ea-ec15-45e6-9c87-91487ed657c5', 'type': 'tool_call'}], usage_metadata={'input_tokens': 91, 'output_tokens': 22, 'total_tokens': 113, 'input_token_details': {'cache_read': 0}}), 'parsed': Joke(setup="Why don't cats play poker in the wild?", punchline="They're always feline good!", rating=None), 'parsing_error': None}
### Description
As one can see, it works with ChatGoogleGenerativeAI but it does not work with ChatLiteLLM.
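The error message itself suggests setting `litellm.drop_params = True`. Short of changing library internals, a caller-side shim can rewrite the offending parameter before the request is built. The sketch below is hypothetical (the helper name and constant are mine, not LangChain/LiteLLM API): it maps the unsupported string `tool_choice="any"` to `"required"`, which the error lists as supported and which has the closest semantics.

```python
# tool_choice string values the error message reports Vertex AI accepts
SUPPORTED_TOOL_CHOICE = {"auto", "required"}


def sanitize_tool_choice(params):
    """Return a copy of the request params with an unsupported string
    tool_choice (e.g. 'any') coerced to 'required'; dict values (a forced
    function spec) are passed through unchanged."""
    out = dict(params)
    choice = out.get("tool_choice")
    if isinstance(choice, str) and choice not in SUPPORTED_TOOL_CHOICE:
        out["tool_choice"] = "required"
    return out
```

A proper fix would be for the ChatLiteLLM integration to emit a provider-supported `tool_choice` in the first place; the shim just illustrates the transformation involved.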
### System Info
System Information
------------------
> OS: Linux
> OS Version: #202405300957~1732141768~22.04~f2697e1 SMP PREEMPT_DYNAMIC Wed N
> Python Version: 3.12.7 (main, Oct 16 2024, 04:37:19) [Clang 18.1.8 ]
Package Information
-------------------
> langchain_core: 0.3.29
> langchain: 0.3.14
> langchain_community: 0.3.14
> langsmith: 0.2.10
> langchain_anthropic: 0.3.1
> langchain_aws: 0.2.10
> langchain_fireworks: 0.2.6
> langchain_google_genai: 2.0.8
> langchain_openai: 0.3.0
> langchain_text_splitters: 0.3.5
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.11.11
> anthropic: 0.42.0
> async-timeout: Installed. No version info available.
> boto3: 1.35.97
> dataclasses-json: 0.6.7
> defusedxml: 0.7.1
> filetype: 1.2.0
> fireworks-ai: 0.15.11
> google-generativeai: 0.8.3
> httpx: 0.27.2
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langsmith-pyo3: Installed. No version info available.
> numpy: 2.2.1
> openai: 1.59.6
> orjson: 3.10.14
> packaging: 24.2
> pydantic: 2.10.5
> pydantic-settings: 2.7.1
> PyYAML: 6.0.2
> requests: 2.32.3
> requests-toolbelt: 1.0.0
> SQLAlchemy: 2.0.37
> tenacity: 9.0.0
> tiktoken: 0.8.0
> typing-extensions: 4.12.2
> zstandard: Installed. No version info available.
|
🤖:bug
|
low
|
Critical
|
2,798,705,450
|
vscode
|
Git - "Not Committed Yet" does not bring value
|
If I change a line, the Git blame decoration changes to "Not Committed Yet".
This just adds noise, and it might conflict with ghost-text inline completions.
My suggestion is to simply not show anything on the modified line.

|
git,under-discussion
|
low
|
Minor
|
2,798,707,941
|
go
|
x/pkgsite: package removal request for (case sensitive) https://pkg.go.dev/github.com/ibm/sarama
|
### What is the path of the package that you would like to have removed?
github.com/ibm/sarama (case sensitive)
### Are you the owner of this package?
Yes I am the owner of this package under the IBM org.
### What is the reason that you could not retract this package instead?
This is a follow-on from https://github.com/golang/go/issues/71256 where under https://go.dev/cl/642600 I added ibm/sarama --> IBM/sarama under the `knownAlternatives` mechanism in pkgsite, which corrects things at the fetch level, but it looks like the historically fetched data needs to be purged
pkgsite seems to accidentally have an invalid (and outdated) lowercase entry at https://pkg.go.dev/github.com/ibm/sarama, whereas the correct, up-to-date module path is https://pkg.go.dev/github.com/IBM/sarama.
Please can you remove the invalid lowercase entry at https://pkg.go.dev/github.com/ibm/sarama (whilst retaining/keeping the uppercase IBM entry at https://pkg.go.dev/github.com/IBM/sarama)
|
pkgsite,pkgsite/package-removal
|
low
|
Minor
|
2,798,724,154
|
vscode
|
Terminal selection/scrolling breaks when resizing
|
1. Have lots of output in terminal
2. Make a selection
3. Resize terminal
🐛 Selection moves with the resizing
Also:
1. Have lots of output in terminal, make sure you are scrolled all the way down
2. Shrink the terminal; it still appears as if you're scrolled all the way down
3. Scroll down in the terminal
🐛 Scroll position will jump and allow you to scroll down
https://github.com/user-attachments/assets/1a4d6004-fc8c-4a83-969c-3c5ed622b2bb
|
bug,upstream,upstream-issue-linked,terminal-layout
|
low
|
Minor
|
2,798,732,162
|
pytorch
|
DISABLED test_reorder_peak_memory_lpmf (__main__.TestOperatorReorderForPeakMemory)
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_reorder_peak_memory_lpmf&suite=TestOperatorReorderForPeakMemory&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35856927699).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_reorder_peak_memory_lpmf`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
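Steps 3-4 above can be reproduced locally once the raw log has been downloaded; `job.log` below is a placeholder file created just for illustration:

```shell
# Stand-in for the raw CI log downloaded from the workflow page.
printf 'collecting tests ...\nFAILED test_reorder_peak_memory_lpmf - RuntimeError\n' > job.log
# -n prints line numbers so each rerun's traceback can be located quickly.
grep -n "test_reorder_peak_memory_lpmf" job.log
```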
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_memory.py", line 114, in test_reorder_peak_memory_lpmf
.run(code)
RuntimeError: Expected to find "buf0 = " but did not find it
Searched string:
extern_kernels.mm(primals_2, primals_3, out=buf2)
del primals_3
buf1 = empty_strided_cuda((2048, 12), (12, 1), torch.float32)
# Topologically Sorted Source Nodes: [t1], Original ATen: [aten.mm]
extern_kernels.mm(primals_2, buf0, out=buf1)
del buf0
buf3 = empty_strided_cuda((2048, 1), (1, 1), torch.float32)
# Topologically Sorted Source Nodes: [t3], Original ATen: [aten.mm]
extern_kernels.mm(reinterpret_tensor(buf1, (2048, 10), (12, 1), 0), primals_4, out=buf3)
buf6 = empty_strided_cuda((), (), torch.float32)
# Topologically Sorted Source Nodes: [sum_1], Original ATen: [aten.sum]
stream0 = get_raw_stream(0)
triton_red_fused_sum_1.run(buf3, buf6, 1, 2048, grid=grid(1), stream=stream0)
del buf3
buf5 = empty_strided_cuda((2048, 12), (12, 1), torch.float32)
# Topologically Sorted Source Nodes: [t4], Original ATen: [aten.mm]
extern_kernels.mm(buf2, buf4, out=buf5)
del buf4
buf7 = empty_strided_cuda((3, ), (1, ), torch.float32)
# Topologically Sorted Source Nodes: [sum_2], Original ATen: [aten.sum]
stream0 = get_raw_stream(0)
triton_red_fused_sum_2.run(buf5, buf7, 3, 6827, grid=grid(3), stream=stream0)
del buf5
buf9 = buf6; del buf6 # reuse
# Topologically Sorted Source Nodes: [sum_2, add], Original ATen: [aten.sum, aten.add]
stream0 = get_raw_stream(0)
triton_per_fused_add_sum_3.run(buf9, buf7, 1, 3, grid=grid(1), stream=stream0)
del buf7
return (buf9, primals_2, reinterpret_tensor(buf2, (1, 2048), (1, 1), 0), reinterpret_tensor(primals_5, (10, 1), (1, 10), 0), reinterpret_tensor(buf1, (10, 2048), (1, 12), 0), reinterpret_tensor(primals_4, (1, 10), (1, 1), 0), )
def benchmark_compiled_module(times=10, repeat=10):
from torch._dynamo.testing import rand_strided
from torch._inductor.utils import print_performance
primals_1 = rand_strided((1, 10), (10, 1), device='cuda:0', dtype=torch.float32)
primals_2 = rand_strided((2048, 1), (1, 1), device='cuda:0', dtype=torch.float32)
primals_3 = rand_strided((1, 1), (1, 1), device='cuda:0', dtype=torch.float32)
primals_4 = rand_strided((10, 1), (1, 1), device='cuda:0', dtype=torch.float32)
primals_5 = rand_strided((1, 10), (10, 1), device='cuda:0', dtype=torch.float32)
fn = lambda: call([primals_1, primals_2, primals_3, primals_4, primals_5])
return print_performance(fn, times=times, repeat=repeat)
if __name__ == "__main__":
from torch._inductor.wrapper_benchmark import compiled_module_main
compiled_module_main('None', benchmark_compiled_module)
From CHECK: buf0 =
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_memory.py TestOperatorReorderForPeakMemory.test_reorder_peak_memory_lpmf
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_memory.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
|
module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor
|
low
|
Critical
|
2,798,732,275
|
pytorch
|
DISABLED test_aoti (__main__.TestMemoryPlanning)
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aoti&suite=TestMemoryPlanning&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35856927508).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aoti`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_memory_planning.py", line 113, in test_aoti
).run(
RuntimeError: Expected to find "int64_t int_array_2[] = {24L + align(12L*s0), };" but did not find it
Searched string:
Auto-tuning code written to /tmp/tmp92c6h0z4/tmp0ptwdcmx.py
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
Output code:
From CHECK: int64_t int_array_2[] = {24L + align(12L*s0), };
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_memory_planning.py TestMemoryPlanning.test_aoti
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_memory_planning.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
|
module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor
|
low
|
Critical
|
2,798,732,377
|
pytorch
|
DISABLED test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_False (__main__.TestFxGraphCache)
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_False&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35856926954).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 4 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_False`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 266, in test_remote_cache_load_function
self.assertEqual(global_stats.fx_graph, Stats(1, 3, 1))
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Object comparison failed: _GlobalItemStats(num_put=2, num_get_hit=2, num_get_miss=2) != Stats(num_put=1, num_get_hit=3, num_get_miss=1)
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
|
module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor
|
low
|
Critical
|
2,798,733,364
|
PowerToys
|
Move app windows with powertoys
|
### Description of the new feature / enhancement
Just as it is possible to move windows around with Win + (Up, Down, Right, Left), could it be possible to do that from within PowerToys Run? The difference would be that there would be no suggestions for snapping two programs side by side. Open PowerToys Run and type "right" - now your window is on the right side of the screen.
I'm mostly looking for these options: middle, right half, left half, upper half, bottom half, the 4 corners, maximize window, divide screen into thirds.
### Scenario when this would be used?
Opening new programs and moving them around quickly. For example:
In the morning you open your PC to start working and you have to open VS Code, a browser, and Teams. I want to move them around quickly, so I type "left third", "middle third", "right third". Now my app windows are moved to their desired spots.
### Supporting information
[Raycast window manager](https://www.raycast.com/core-features/window-management)
|
Needs-Triage
|
low
|
Minor
|
2,798,740,877
|
pytorch
|
Significant precision error from torch.compile
|
### 🐛 Describe the bug
When wrapping torch.compile around a forward region of the Flux-dev DiT model (with both `reduce-overhead` and `max-autotune-no-cudagraphs`), the speed-up is accompanied by significant precision error. This happens even when wrapping the smallest op, as shown below. After enabling `CUDA_LAUNCH_BLOCKING=1`, the precision error is gone.
It would be troublesome to provide a minimal reproducer as this is an ongoing project with large model-block dependencies, but I can try if needed.


### Profile trace of pure cuda graph showing perf. benefits but also incurring error
Even though `reduce-overhead` is used, Triton kernel fusion (the purple region) still cuts in, which might be causing the error.

### Versions
Collecting environment information...
PyTorch version: 2.7.0.dev20250109+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.0
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-1017-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.5.82
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
Nvidia driver version: 550.127.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.2.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7R13 Processor
CPU family: 25
Model: 1
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
Stepping: 1
BogoMIPS: 5299.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save vaes vpclmulqdq rdpid
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 48 MiB (96 instances)
L3 cache: 384 MiB (12 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cudnn-frontend==1.5.1
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] nvtx==0.2.5
[pip3] onnx==1.16.0
[pip3] optree==0.13.1
[pip3] pynvjitlink==0.2.3
[pip3] pytorch-triton==3.2.0+git0d4682f0
[pip3] torch==2.7.0.dev20250109+cu124
[pip3] torch-tensorrt==2.5.0a0
[pip3] torchaudio==2.6.0.dev20250109+cu124
[pip3] torchvision==0.22.0.dev20250109+cu124
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @mcarilli @eellison @BoyuanFeng
|
high priority,triaged,bug,oncall: pt2,module: inductor
|
low
|
Critical
|
2,798,767,628
|
ollama
|
Ollama is working fine with CLI / powershell but goes in loop on API request.
|
### What is the issue?
Ollama runs models fine when a message is sent through the CLI/PowerShell. But when Cline or Roo Cline is used, or a prompt is sent through the API, Ollama gets stuck in a loop repeating its previous response.
### OS
Windows
### GPU
Nvidia
### CPU
Intel
### Ollama version
0.5.7
|
bug
|
low
|
Major
|
2,798,781,885
|
transformers
|
Ascend:Training not loaded into NPU
|
### System Info
-- CANN version: 8.0.RC1
-- PyTorch version: 2.1.0
-- torch_npu: 2.1.0.post6
-- Python version: 3.9.21
-- Training card (NPU): 910B2
-- transformers: 4.29.1
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
The code is as follows:
`import torch
import torch_npu
from transformers import (
Trainer,
TrainingArguments,
EvalPrediction,
AutoModelForSequenceClassification,
AutoTokenizer
)
from datasets import Dataset
import evaluate
import numpy as np
import pandas as pd
import os
MODEL_PATH = os.environ.get('MODEL_PATH', '/data/ts/ascend/model')
DATASET_PATH = os.environ.get('DATASET_PATH', '/data/ts/ascend/datasets')
BUS_MODEL_PATH = os.environ.get('BUS_MODEL_PATH', '/data/ts/ascend/bus_models')
def train(body):
# model_name = body['model_name']
# pretrain_model = body['pretrain_model']
# dataset = body['dataset']
# train_path = body['train_path']
# token_name = body['token_name']
# batch_size = body['batch_size']
# num_train_epochs = body['num_train_epochs']
# basic_mode_type = body['basic_mode_type']
# choose_dataset_size = body['choose_dataset_size']/100
model_name = 'test'#body['model_name']
pretrain_model = 'acge_text_embedding'#body['pretrain_model']
dataset = 'ray-dataset/test.csv'
batch_size = 4
num_train_epochs = 4
basic_mode_type = '1'
choose_dataset_size = 10/100
# Derive the dataset file name
dataset_name =dataset.split('/')[-1]
dataset_path = os.path.join(DATASET_PATH, dataset_name)
# os.system(f"mc mirror --overwrite --remove oss/ray-dataset/{dataset_name} {dataset_path}")
# Label preprocessing
df = pd.read_csv(dataset_path)
df,num_labels,label_to_id,id_to_label = convert_label(df)
# df.to_csv(dataset_path,index=False)
# Only CSV is supported for now
raw_datasets = Dataset.from_pandas(df)
# raw_train, raw_val = raw_datasets.split([1-choose_dataset_size, choose_dataset_size])
# Load the model
model_info = os.path.join(MODEL_PATH, pretrain_model)
os.makedirs(MODEL_PATH, exist_ok=True)
# os.system(f"mc mirror --overwrite --remove oss/ray-model/{pretrain_model} {model_info}")
tokenizer = AutoTokenizer.from_pretrained(model_info)
model = AutoModelForSequenceClassification.from_pretrained(model_info,num_labels=num_labels, id2label=id_to_label, label2id=label_to_id, ignore_mismatched_sizes=True).to('npu:3')
# Tokenize
def preprocess_function(examples):
result = tokenizer(examples["text"], padding="max_length", truncation=True, max_length=512)
result["label"] = [int(l) for l in examples["label"]]  # assume each label is a list; take the first element
# result["label"] = examples["label"]
input_ids = torch.tensor(result.input_ids, device='npu:3')
token_type_ids = torch.tensor(result.token_type_ids, device='npu:3')
attention_mask = torch.tensor(result.attention_mask, device='npu:3')
label = torch.tensor(result.label, device='npu:3')
result2 = {}
result2['label'] = label
result2['input_ids'] = input_ids
result2['token_type_ids'] = token_type_ids
result2['attention_mask'] = attention_mask
return result2
dataset_train = raw_datasets.map(preprocess_function, batched=True)
dataset_val = raw_datasets.map(preprocess_function, batched=True)
dataset_train = dataset_train.remove_columns(['text'])
output_info = os.path.join(BUS_MODEL_PATH, model_name)
training_args = TrainingArguments(
output_dir=output_info,
num_train_epochs=num_train_epochs,
learning_rate=2e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
evaluation_strategy="steps",
eval_steps=100,
save_strategy="epoch",
logging_strategy="steps",
logging_steps=50
)
trainer = Trainer(
model=model,
train_dataset=dataset_train,
args=training_args,
)
print('开始训练')
result = trainer.train()
print('训练完成')
bus_model_info = os.path.join(MODEL_PATH, model_name)
checkpoint_path = result.checkpoint.path
os.system(
f"cp -r {checkpoint_path}/checkpoint/ {bus_model_info}")
os.system(
f"mc cp -r {checkpoint_path}/checkpoint/ oss/ray-npu/{model_name}")
return result
def convert_label(df):
# 创建标签到ID的映射
label_to_id = {}
id_to_label = {}
id_counter = 0
# 遍历所有标签,创建映射
for label in df['label'].unique():
if label not in label_to_id:
label_to_id[label] = str(id_counter)
id_to_label[str(id_counter)] = label
id_counter += 1
# 将DataFrame中的标签替换为ID
df['label'] = df['label'].map(label_to_id)
return df,id_counter,label_to_id,id_to_label
if __name__ == "__main__":
train(None)
```
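The label-to-ID mapping built by `convert_label` above can be illustrated in isolation (a pure-Python sketch without pandas; `build_label_maps` is an illustrative name, not part of the original code):

```python
def build_label_maps(labels):
    # Assign a string ID to each unique label in order of first appearance,
    # mirroring the mapping logic of convert_label() above.
    label_to_id, id_to_label = {}, {}
    for label in labels:
        if label not in label_to_id:
            new_id = str(len(label_to_id))
            label_to_id[label] = new_id
            id_to_label[new_id] = label
    return label_to_id, id_to_label

label_to_id, id_to_label = build_label_maps(["pos", "neg", "pos", "neutral"])
print(label_to_id)  # {'pos': '0', 'neg': '1', 'neutral': '2'}
```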
The analysis:
1. CPU usage:
before training

after training

2. Data and model are moved to the NPU:


3. The operators executed during training never reach the NPU:

### Expected behavior
Inference runs normally on the NPU, but the training operators are still executed on the CPU.
|
bug
|
low
|
Minor
|
2,798,797,834
|
angular
|
Show warning when `withEnabledBlockingInitialNavigation` Is used with `provideClientHydration`
|
### Which @angular/* package(s) are relevant/related to the feature request?
router, platform-browser
### Description
The `withEnabledBlockingInitialNavigation` option was designed specifically for use cases involving deferred hydration to prevent UI flickers during initial navigation. However, when used with `provideClientHydration`, which supports standard hydration, the issue of flickering does not arise. To avoid confusion and misuse, a warning should be displayed in the console when these two configurations are used together, as they are not intended to work in tandem.
### Proposed solution
Issue a warning when these two options are used together.
|
help wanted,area: core,core: hydration
|
low
|
Minor
|
2,798,810,676
|
PowerToys
|
The Layout of Virtual Desktops can not be changed
|
### Microsoft PowerToys version
0.87.1
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
FancyZones
### Steps to reproduce
1. Open PowerToys and enable the Fancyzones module.
2. Add a new virtual desktop (VD) and note that its layout is the same as the first VD's.
3. Open the PowerToys settings and change the monitor layout to a different one.
4. The layout in the main VD is changed as expected.
### ✔️ Expected Behavior
The layout of the second VD is changed to the new layout
### ❌ Actual Behavior
The layout of the second VD is not changed; it still shows the previous layout.
### Other Software
_No response_
|
Issue-Bug,Needs-Triage
|
low
|
Minor
|
2,798,847,281
|
react-native
|
Images are cut off/not rendered correctly on new architecture
|
### Description
Images are cut off after enabling the New Architecture in Expo SDK 52. The issue reproduces intermittently: sometimes the images render correctly, sometimes not. It also reproduces in production builds of the apps, and occurs regardless of whether development happens on Windows or macOS. The issue can be reproduced in a bare react-native app as well, so it appears to be caused by the New Architecture.
The issue still reproduces after adding

```json
"resolutions": {
  "@react-native/assets-registry": "0.76.1"
}
```

to package.json.
### Steps to reproduce
1. Initialize a new application
2. Import some local images using "require" and use them in the <Image/> tag
3. If they are rendered correctly, try to reload the app and observe the issue.
### React Native Version
0.76.6
### Affected Platforms
Build - Windows, Build - MacOS, Build - Linux, Runtime - iOS, Runtime - Android
### Areas
Fabric - The New Renderer
### Output of `npx react-native info`
```text
System:
OS: macOS 15.1.1
Shell: 5.9 - /bin/zsh
Binaries:
Node: 22.7.0 - /opt/homebrew/bin/node
Yarn: 1.22.19 - ~/.nvm/versions/node/v16.18.1/bin/yarn
npm: 10.8.2 - /opt/homebrew/bin/npm
Managers:
CocoaPods: 1.15.2 - /opt/homebrew/bin/pod
IDEs:
Xcode: /undefined - /usr/bin/xcodebuild
npmPackages:
expo: ^52.0.25 => 52.0.25
react: ^18.3.1 => 18.3.1
react-native: 0.76.6 => 0.76.6
npmGlobalPackages:
eas-cli: 13.2.1
Expo Workflow: managed
```
### Stacktrace or Logs
```text
No crash for this case.
```
### Reproducer
https://github.com/vl4di99/rn-new-arch-image-bug
### Screenshots and Videos

|
Component: Image,Needs: Triage :mag:,Type: New Architecture
|
low
|
Critical
|
2,798,884,722
|
pytorch
|
Nested tensor support for pointwise matrix multiplication of nested tensor and normal tensor
|
### 🚀 The feature, motivation and pitch
I am using nested tensors (jagged layout) for my input data, and I need to apply rotary positional embeddings to qkv vectors.
At the moment I cannot see how to do this efficiently. I've landed on the slow list comprehension below, where I slice the regular tensor and multiply the slices with the elements of the nested tensor.
```
def rotate_half(x):
    # x1, x2 = x[..., : x.shape[-1] // 2], x[..., x.shape[-1] // 2 :]  # old implementation
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)


# @torch.jit.script  # TODO: I don't think this is supported for torchscript with nested tensors
# def _apply_rotary_pos_emb_torchscript(qkv, cos, sin):
def _apply_rotary_pos_emb(qkv, cos, sin):
    # qkv shape: (B, j1, 3, n_heads, head_dim); cos & sin shape: (1, j1.max(), 1, head_dim)
    if qkv.is_nested:
        cos = cos.squeeze(0)
        sin = sin.squeeze(0)
        # slow list comprehension
        result_list = [(t * cos[:t.shape[0]]) + (rotate_half(t) * sin[:t.shape[0]])
                       for t in qkv.unbind()]
        # Reassemble the list of tensors back into a nested tensor
        return torch.nested.as_nested_tensor(result_list)
    return (qkv * cos) + (rotate_half(qkv) * sin)
```
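The `rotate_half` logic can be checked on a plain list (a pure-Python sketch of the same chunk-and-concatenate idea, no torch required; `rotate_half_list` is an illustrative name):

```python
def rotate_half_list(x):
    # Split the last axis in half and swap the halves with negation:
    # [x1, x2] -> [-x2, x1], mirroring rotate_half() above for a 1-D list.
    half = len(x) // 2
    x1, x2 = x[:half], x[half:]
    return [-v for v in x2] + x1

print(rotate_half_list([1, 2, 3, 4]))  # [-3, -4, 1, 2]
```

Applying it twice negates the input, which is the usual sanity check for this rotation.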
### Alternatives
You could convert the cos and sin tensors to nested tensors of the same shape as qkv and multiply those, but this also does not seem optimal, since it requires copying the cos and sin tensors once per batch element.
There might be some way of applying rotary positional embeddings to nested tensors that I haven't thought of. If so, please let me know!
### Additional context
I am working on a project using protein sequences as input data. Sequence lengths vary widely: the minimum is probably 32 tokens, and the maximum is whatever I set the max length to be, probably 4096 tokens. I am using layout=torch.jagged at the moment, as this seems to be the best format for the purpose.
It's the perfect project for nested tensors, but so far FlashAttention, rotary positional embeddings, and loss calculations are proving difficult to implement efficiently.
cc @cpuhrsch @jbschlosser @bhosmer @drisspg @soulitzer @davidberard98 @YuqingJ
|
triaged,module: nestedtensor
|
low
|
Major
|
2,798,888,068
|
react-native
|
React native flickering issue in Screen when Scrollview Apply React native 0.76 version React navigation 7 version
|
### Description
I'm using React Native **0.76.6** and React Navigation 7.
The issue occurs when I use a header from React Navigation together with a ScrollView from React Native.
### Steps to reproduce
1. Design a simple screen that renders some data.
2. Use a React Navigation stack.
3. Inside the screen, use a ScrollView from React Native.
4. Place this screen inside the stack navigator.
5. In the screen options, set **headerShown** to true.
### React Native Version
0.76.6
### Affected Platforms
Runtime - iOS
### Output of `npx react-native info`
```text
System:
OS: macOS 15.1.1
CPU: (12) x64 Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
Memory: 31.86 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.11.0
path: ~/.nvm/versions/node/v20.11.0/bin/node
Yarn:
version: 1.22.22
path: ~/Desktop/SmartMonster/node_modules/.bin/yarn
npm:
version: 10.2.4
path: ~/.nvm/versions/node/v20.11.0/bin/npm
Watchman:
version: 2024.11.11.00
path: /usr/local/bin/watchman
Managers:
CocoaPods:
version: 1.16.2
path: /usr/local/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 24.2
- iOS 18.2
- macOS 15.2
- tvOS 18.2
- visionOS 2.2
- watchOS 11.2
Android SDK: Not Found
IDEs:
Android Studio: 2024.2 AI-242.23339.11.2421.12550806
Xcode:
version: 16.2/16C5032a
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.13
path: /usr/bin/javac
Ruby:
version: 2.6.10
path: /usr/bin/ruby
npmPackages:
"@react-native-community/cli":
installed: 15.1.3
wanted: ^15.1.3
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.76.6
wanted: 0.76.6
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: true
```
### Stacktrace or Logs
```text
N/A
```
### Reproducer
N/A
### Screenshots and Videos
https://github.com/user-attachments/assets/dbab4bb1-687e-4cc9-bd75-d1f9d833e9bb
|
Component: ScrollView,Needs: Author Feedback,Needs: Repro
|
low
|
Major
|
2,798,938,814
|
vscode
|
Huge CPU usage and crashes
|
Type: <b>Bug</b>
For a few weeks I have been experiencing huge CPU usage (a process called "node" using several cores) when running simulations from the VS Code terminal. Even after stopping the simulation, the CPU usage does not drop until VS Code crashes entirely. The whole computer is unusable in the meantime.
When I disabled GitHub Copilot, this behavior did not show up; I did not test all other extensions.
VS Code version: Code 1.96.4 (cd4ee3b1c348a13bafd8f9ad8060705f6d4b9cba, 2025-01-16T00:16:19.038Z)
OS version: Linux x64 6.8.0-51-generic
Modes:
Remote OS version: Linux x64 6.8.0-51-generic
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|13th Gen Intel(R) Core(TM) i7-1355U (12 x 400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off<br>webnn: disabled_off|
|Load (avg)|4, 4, 3|
|Memory (System)|31.00GB (16.89GB free)|
|Process Argv|--crash-reporter-id c3224d8b-099b-44c6-a73d-f3c305ef4832|
|Screen Reader|no|
|VM|0%|
|DESKTOP_SESSION|ubuntu|
|XDG_CURRENT_DESKTOP|Unity|
|XDG_SESSION_DESKTOP|ubuntu|
|XDG_SESSION_TYPE|x11|
|Item|Value|
|---|---|
|Remote|Dev Container|
|OS|Linux x64 6.8.0-51-generic|
|CPUs|13th Gen Intel(R) Core(TM) i7-1355U (12 x 1900)|
|Memory (System)|31.00GB (16.87GB free)|
|VM|0%|
</details><details><summary>Extensions (5)</summary>
Extension|Author (truncated)|Version
---|---|---
remote-containers|ms-|0.394.0
copilot|Git|1.257.0
copilot-chat|Git|0.23.2
terminal-keeper|ngu|1.1.53
clang-format|xav|1.9.0
(1 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
a9j8j154:30646983
962ge761:30959799
pythonnoceb:30805159
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
dvdeprecation:31068756
dwnewjupyter:31046869
nativerepl2:31139839
pythonrstrctxt:31112756
nativeloc1:31192215
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
stablechunks:31184530
6074i472:31201624
dwoutputs:31217127
```
</details>
<!-- generated by issue reporter -->
|
triage-needed,stale
|
low
|
Critical
|
2,798,956,075
|
godot
|
[4.4beta1] Enabling SDFGI -> Desired set (1) not used by shader
|
### Tested versions
Reproducible in:
v4.4.beta1.official [d33da79d3]
v4.4.dev7.official [46c8f8c5c]
v4.4.dev6.official [1f47e4c4e]
v4.4.dev5.official [9e6098432]
v4.4.dev4.official [36e6207bb]
Not reproducible in:
v4.4.dev3.official [f4af8201b]
v4.4.dev2.official [97ef3c837]
v4.4.dev1.official [28a72fa43]
### System information
Godot v4.4.beta1 - Windows 11 (build 26100) - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2070 SUPER (NVIDIA; 32.0.15.6590) - AMD Ryzen 7 3800X 8-Core Processor (16 threads)
### Issue description
Enabling SDFGI throws errors (and crashes the game when run outside the editor):
Console output:
```
ERROR: Desired set (1) not used by shader.
at: (servers/rendering/rendering_device.cpp:3427)
ERROR: Condition "rid.is_null()" is true. Returning: rid
at: _allocate_from_uniforms (servers/rendering/renderer_rd/uniform_set_cache_rd.h:131)
ERROR: Parameter "uniform_set" is null.
at: draw_list_bind_uniform_set (servers/rendering/rendering_device.cpp:4513)
```
### Steps to reproduce
1) Create new project
2) Create new scene
3) Add WorldEnvironment -> new Environment and enable SDFGI
4) Add Camera3D.
5) Run.
### Minimal reproduction project (MRP)
[sdfgi.zip](https://github.com/user-attachments/files/18476657/sdfgi.zip)
|
bug,topic:rendering,confirmed,crash,regression
|
low
|
Critical
|
2,798,963,204
|
excalidraw
|
Feature Request: Subfolders within Collections
|
As users working in a team
We would like to be able to create subfolders within Collections
So that we can easily manage a growing portfolio of whiteboards.
---------------------------------------------------------------------------------------------------
The Collections feature is great and we have created a Collection for each of our microservices.
We are now rolling out Excalidraw to the wider team, who will generate more and more whiteboards. It would be great to have the ability to create subfolders within Collections (e.g. Archived / Processed / WIP) to allow a hierarchical storage structure.
Is this something that could be considered?
|
Excalidraw+
|
low
|
Minor
|
2,798,971,528
|
react
|
Bug: Properties are not passed to Custom Elements that extend built-in elements
|
React version: 19.0.0
## Steps To Reproduce
1. Define a Custom Element `x-custom-link` that extends the `a` element
2. Render the custom element using the `is` attribute: `<a is="x-custom-link"></a>`
3. Pass a property to the element
Link to code example: https://codesandbox.io/p/sandbox/musing-shadow-ttt8hc
## The current behavior
Properties are correctly passed to autonomous custom elements, but not to those that extend built-in elements.
## The expected behavior
Properties are passed to custom elements that extend built-in elements as well.
|
Status: Unconfirmed
|
medium
|
Critical
|
2,798,985,222
|
flutter
|
When will Flutter mark newly added classes or methods in the documentation with the Flutter version in which they became available?
|
<img width="1491" alt="Image" src="https://github.com/user-attachments/assets/e0cb7854-e303-467d-af25-a4f47eff098a" />
like this:
<img width="774" alt="Image" src="https://github.com/user-attachments/assets/f738bfae-3936-43b0-9006-9e6487340991" />
|
waiting for customer response,in triage
|
low
|
Minor
|
2,798,988,448
|
godot
|
[4.4beta1] Godot instantly crash on start with signal 11 no backtrace
|
### Tested versions
- Reproducible in : 4.4.beta1
- Not Reproducible in : 4.3.stable
### System information
Godot v4.4.beta1 - Windows 10 (build 19045) - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1070 (NVIDIA; 32.0.15.6603) - Intel(R) Core(TM) i5-7600K CPU @ 3.80GHz (4 threads)
### Issue description
**I used to be able to launch 4.4beta1 before.** I changed nothing in my workflow, and today it just won't start.
Trying to launch 4.4beta1 does nothing: I double-click, see the mouse cursor change to "loading", then nothing.
I tried removing every Godot related stuff in %AppData%, launching 4.4beta1 has been able to recreate all the folders so it at least went that far.
I tried skipping the Project Manager view by opening a .godot project directly with "Open With" but it didn't work.
Of course I restarted the computer just in case too.
I tried the console version to see if it wrote anything; most of the time it only writes this and stops immediately:
```
PS D:\Downloads\Godot_v4.4-beta1_win64.exe> .\Godot_v4.4-beta1_win64_console.exe -d -v
WorkerThreadPool: 4 threads, 1 max low-priority.
Godot Engine v4.4.beta1.official.d33da79d3 - https://godotengine.org
TextServer: Added interface "Dummy"
TextServer: Added interface "ICU / HarfBuzz / Graphite (Built-in)"
Native OpenGL API detected: 3.3: NVIDIA - NVIDIA GeForce GTX 1070
NVAPI: Init OK!
NVAPI: Disabled OpenGL threaded optimization successfully
NVAPI: Disabled G-SYNC for windowed mode successfully
```
But once in a while, it outputs a crash report that just looks like this:
```
PS D:\Downloads\Godot_v4.4-beta1_win64.exe> .\Godot_v4.4-beta1_win64_console.exe -d -v
WorkerThreadPool: 4 threads, 1 max low-priority.
Godot Engine v4.4.beta1.official.d33da79d3 - https://godotengine.org
TextServer: Added interface "Dummy"
TextServer: Added interface "ICU / HarfBuzz / Graphite (Built-in)"
Native OpenGL API detected: 3.3: NVIDIA - NVIDIA GeForce GTX 1070
NVAPI: Init OK!
NVAPI: Disabled OpenGL threaded optimization successfully
NVAPI: Disabled G-SYNC for windowed mode successfully
================================================================
CrashHandlerException: Program crashed with signal 11
Engine version: Godot Engine v4.4.beta1.official (d33da79d3f8fe84be2521d25b9ba8e440cf25a88)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
```
And then it stops, so there is no backtrace.
### Steps to reproduce
Download 4.4beta1
Try to launch it
### Minimal reproduction project (MRP)
N/A
|
bug,crash
|
low
|
Critical
|
2,799,022,857
|
kubernetes
|
ListWatch vs WatchList: Memory Usage in Large-Scale Clusters
|
The API server's StreamWatcher continuously decodes event streams from the server in its receive() method, with each event requiring deserialization into an object via proto.Unmarshal. When a large number of watch clients are active, these deserialization operations can result in significant memory usage.
Kubernetes 1.32 introduced the Watch-List feature to reduce memory overhead caused by frequent List operations. However, it doesn't address the issue of watch requests continuously decoding event streams, which still leads to high memory consumption.
In large-scale clusters, how can kube-apiserver effectively manage memory usage when multiple watch clients are simultaneously deserializing objects using proto.Unmarshal?
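The per-client decoding cost described above can be sketched abstractly (a pure-Python toy where `json` stands in for `proto.Unmarshal`; the client count is illustrative):

```python
import json

# One serialized watch event, broadcast to many watch clients.
event = json.dumps({"type": "MODIFIED", "object": {"kind": "Pod", "name": "p1"}})
clients = 1000

# Each watch client decodes its own copy of every event, so allocations
# scale with clients x events instead of being shared across clients.
decoded = [json.loads(event) for _ in range(clients)]

assert len(decoded) == clients
assert decoded[0] == decoded[1]      # same logical content...
assert decoded[0] is not decoded[1]  # ...but an independent allocation each time
```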
related issue: https://github.com/kubernetes/kubernetes/issues/127980
|
sig/scalability,help wanted,needs-triage
|
low
|
Major
|
2,799,054,463
|
pytorch
|
[ARM] - test_quantized_module.py test_lstm_api fails on Aarch64
|
### 🐛 Describe the bug
We are seeing test_lstm_api in test_quantized_module.py fail on AArch64. It is currently not enabled in CI; we would like to enable it.
This happens due to the change of input dimensions here: https://github.com/pytorch/pytorch/blob/92b9da1fc2b0a834f54f4d97fd4a2402f47bce07/test/quantization/core/test_quantized_module.py#L1758
which causes a cache miss, and the implementation falls back to default_lowp_kind.
```
FAIL: test_lstm_api (__main__.TestDynamicQuantizedModule)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2979, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_quantized.py", line 171, in test_fn
for qengine in supported_qengines:
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/hypothesis/core.py", line 1145, in wrapped_test
raise the_error_hypothesis_found
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_quantized.py", line 174, in test_fn
qfunction(*args, **kwargs)
File "/var/lib/jenkins/workspace/test/quantization/core/test_quantized_module.py", line 1760, in test_lstm_api
self.check_eager_serialization(cell_dq, ref_dq, [x])
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_quantization.py", line 674, in check_eager_serialization
check_outputs(ref_out, load_out)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_quantization.py", line 667, in check_outputs
self.assertEqual(ref_out[0], load_out[0])
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3885, in assertEqual
raise error_metas.pop()[0].to_error(
AssertionError: Tensor-likes are not close!
Mismatched elements: 1400 / 1400 (100.0%)
Greatest absolute difference: 1.1401878595352173 at index (8, 18, 6) (up to 1e-05 allowed)
Greatest relative difference: 5944.72802734375 at index (4, 4, 6) (up to 1.3e-06 allowed)
To execute this test, run the following from the base repo dir:
python test/quantization/core/test_quantized_module.py TestDynamicQuantizedModule.test_lstm_api
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
----------------------------------------------------------------------
Ran 46 tests in 45.840s
FAILED (failures=1, skipped=4)
```
Fixed in https://github.com/pytorch/pytorch/pull/135058
### Versions
jenkins@73bf36410487:~/workspace$ python3 collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (aarch64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.16 | packaged by conda-forge | (main, Dec 5 2024, 14:08:42) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-aws-aarch64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: ARM
Model: 1
Thread(s) per core: 1
Core(s) per cluster: 48
Socket(s): -
Cluster(s): 1
Stepping: r1p1
BogoMIPS: 2100.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
L1d cache: 3 MiB (48 instances)
L1i cache: 3 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 32 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; CSV2, BHB
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.13.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.22.4
[pip3] onnx==1.17.0
[pip3] onnxscript==0.1.0.dev20240817
[pip3] optree==0.13.0
[pip3] torch==2.5.1
[conda] No relevant packages
cc @jerryzh168 @jianyuh @raghuramank100 @jamesr66a @vkuzo @jgong5 @Xia-Weiwen @leslie-fang-intel @msaroufim @malfet @snadampal @milpuz01
|
oncall: quantization,module: arm
|
low
|
Critical
|
2,799,064,065
|
deno
|
Improve signature for JSON.parse
|
Version: Deno 2.1.5
Current signature for `JSON.parse`:
```ts
JSON.parse(text: string, reviver?: (this: any, key: string, value: any) => any): any
```
1. The `reviver`'s `this` parameter should be optional
2. The `reviver` should have an optional [`context`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse#context) parameter
|
types
|
low
|
Minor
|
2,799,123,078
|
node
|
Possible Null Pointer Dereference in `TLSWrap::PskClientCallback`
|
### Version
20.18.0
### Platform
```text
```
### Subsystem
crypto
### What steps will reproduce the bug?
The problem is in this part of the code: https://github.com/nodejs/node/blob/da5f7aca6ac1fac2b7840dc11c0ef8e740cfc414/src/crypto/crypto_tls.cc#L1559C1-L1564C58
After creating the `Utf8Value` object, the code checks its length but does not check for `nullptr`. As a result, `nullptr` can be dereferenced in the `memcpy` call.
### How often does it reproduce? Is there a required condition?
Condition - `identity_buf` stores `nullptr`
### What is the expected behavior? Why is that the expected behavior?
Return 0, for a consistent API. For example:
```c++
if (*identity_buf == nullptr || identity_buf.length() > max_identity_len)
return 0;
```
### What do you see instead?
-
### Additional information
Found by Linux Verification Center (linuxtesting.org) with SVACE.
Reporter: Burkov Egor ([eburkov@rvision.ru](mailto:eburkov@rvision.ru)).
Organization: R-Vision ([support@rvision.ru](mailto:support@rvision.ru)).
|
tls
|
low
|
Critical
|
2,799,123,506
|
deno
|
Receiving error when trying to run Sanity based task via npm in node.js interoperability project
|
Version: Deno 2.1.6
The script being run from `package.json`
``` json
"typegen": "sanity schema extract --path=src/sanity/extract.json && sanity typegen generate",
```
The output from running the command with `deno task typegen`
```
λ deno task typegen
Task typegen sanity schema extract --path=src/sanity/extract.json && sanity typegen generate
ReferenceError: require is not defined
at file:///Users/maclong/Developer/quantum/charles-fox-jewellers/node_modules/.deno/yargs@17.7.2/node_modules/yargs/yargs:3:69
at loadESMFromCJS (node:module:777:21)
at Module._compile (node:module:722:12)
at loadMaybeCjs (node:module:770:10)
at Object.Module._extensions..js (node:module:761:12)
at Module.load (node:module:662:32)
at Function.Module._load (node:module:534:12)
at Module.require (node:module:681:19)
at require (node:module:812:16)
at Object.<anonymous> (file:///Users/maclong/Developer/quantum/charles-fox-jewellers/node_modules/.deno/sanity@3.70.0/node_modules/sanity/lib/_chunks-cjs/_internal.js:21:383)
Unhandled rejection: ReferenceError: require is not defined
at file:///Users/maclong/Developer/quantum/charles-fox-jewellers/node_modules/.deno/yargs@17.7.2/node_modules/yargs/yargs:3:69
at loadESMFromCJS (node:module:777:21)
at Module._compile (node:module:722:12)
at loadMaybeCjs (node:module:770:10)
at Object.Module._extensions..js (node:module:761:12)
at Module.load (node:module:662:32)
at Function.Module._load (node:module:534:12)
at Module.require (node:module:681:19)
at require (node:module:812:16)
at Object.<anonymous> (file:///Users/maclong/Developer/quantum/charles-fox-jewellers/node_modules/.deno/sanity@3.70.0/node_modules/sanity/lib/_chunks-cjs/_internal.js:21:383)
ReferenceError: require is not defined
at loadESMFromCJS (node:module:777:21)
at Module._compile (node:module:722:12)
at loadMaybeCjs (node:module:770:10)
at Object.Module._extensions..js (node:module:761:12)
at Module.load (node:module:662:32)
at Function.Module._load (node:module:534:12)
at Module.require (node:module:681:19)
at require (node:module:812:16)
at Object.<anonymous> (file:///Users/maclong/Developer/quantum/charles-fox-jewellers/node_modules/.deno/sanity@3.70.0/node_modules/sanity/lib/_chunks-cjs/_internal.js:21:383)
at Object.<anonymous> (file:///Users/maclong/Developer/quantum/charles-fox-jewellers/node_modules/.deno/sanity@3.70.0/node_modules/sanity/lib/_chunks-cjs/_internal.js:3635:4)
Unhandled rejection: ReferenceError: require is not defined
at loadESMFromCJS (node:module:777:21)
at Module._compile (node:module:722:12)
at loadMaybeCjs (node:module:770:10)
at Object.Module._extensions..js (node:module:761:12)
at Module.load (node:module:662:32)
at Function.Module._load (node:module:534:12)
at Module.require (node:module:681:19)
at require (node:module:812:16)
at Object.<anonymous> (file:///Users/maclong/Developer/quantum/charles-fox-jewellers/node_modules/.deno/sanity@3.70.0/node_modules/sanity/lib/_chunks-cjs/_internal.js:21:383)
at Object.<anonymous> (file:///Users/maclong/Developer/quantum/charles-fox-jewellers/node_modules/.deno/sanity@3.70.0/node_modules/sanity/lib/_chunks-cjs/_internal.js:3635:4)
```
> [!NOTE]
> I have temporarily used `pnpm` to run this script, so it isn't affecting my workflow right now; however, it would be good to eventually be able to run only Deno and still maintain legacy Node projects.
|
node compat,node resolution
|
low
|
Critical
|
2,799,135,469
|
kubernetes
|
Encrypt ETCD with service account or namespace specific keys
|
### What would you like to be added?
Right now it is possible to encrypt some resources in etcd by configuring a KVM v2 provider. However, only one "active" key is allowed for write operations as described in the section https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/#developing-a-kms-plugin-gRPC-server-notes-kms-v2: `The API server considers the key_id returned from the Status procedure call to be authoritative. Thus, a change to this value signals to the API server that the remote KEK has changed, and data encrypted with the old KEK should be marked stale when a no-op write is performed (as described below). `.
This request is about adding the ability to KVM (or define a new provider type, like KVM v3 or completely new acronym) to have more that one keys that could be used to encrypt secrets in database. As my current requirement is to have a different key for each namespace or service account, I would propose the following:
* Encryption: right now the behaviour is `The response must include the ciphertext, the key_id for the KEK used, and, optionally, any metadata that the KMS plugin needs to aid in future DecryptRequest calls (via the annotations field).`. The proposal would be to add one additional mandatory field that would represent the service account or the namespace name that the request is targeting (i.e. who tries to create the secret or where the secret will be created).
* Decryption: No change is strictly needed, since the encryption response can use the annotations to propagate the additional field. However, to be consistent, we could also change Decryption to carry this additional field explicitly.
* Status endpoint: the Status endpoint could take one additional parameter (the service account or namespace name) and return the key_id that corresponds to it.
My understanding is that the API server would need to be modified in order to support this functionality.
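The proposed Status-endpoint change can be illustrated with a toy model: each namespace gets its own independent KEK, and a status lookup returns the authoritative key_id for that namespace. This is purely a sketch; `NamespaceKeyStore` and the method shapes are hypothetical, not the real KMS v2 gRPC API.

```python
import os


class NamespaceKeyStore:
    """Toy model of a per-namespace KEK lookup (hypothetical API)."""

    def __init__(self):
        self._keys = {}  # namespace -> (key_id, key bytes)

    def status(self, namespace: str) -> str:
        """Return the authoritative key_id for this namespace,
        creating an independent random KEK on first use."""
        if namespace not in self._keys:
            self._keys[namespace] = (f"{namespace}-kek-1", os.urandom(32))
        return self._keys[namespace][0]


store = NamespaceKeyStore()
assert store.status("team-a") == "team-a-kek-1"   # stable per namespace
assert store.status("team-a") == store.status("team-a")
assert store.status("team-b") != store.status("team-a")
```

Because each namespace's KEK is generated independently rather than derived from a shared seed, leaking one key (or the seed) does not expose the others, which is the core of the request.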
PS: As I am not familiar with the enhancement creation procedure, let me know if I should ask this question somewhere else, e.g. https://github.com/kubernetes/community/blob/master/sig-list.md.
### Why is this needed?
Use case: Different customers are using the same Kubernetes cluster and each one has their own namespace. When they create a secret in their namespace, this secret would be encrypted with a different key, so even if the ETCD data are leaked, the attacker would need one key for each namespace to decrypt them.
Right now with KMS v2 (according to the documentation: `With KMS v2, a new DEK is generated per encryption: the API server uses a key derivation function to generate single use data encryption keys from a secret seed combined with some random data. The seed is rotated whenever the KEK is rotated (see the Understanding key_id and Key Rotation section below for more details).`) each record is encrypted with a different key, but all keys are derived from a single key, so the attacker would need only to find the initial seed key and know the derivation function (which I assume is described in Kubernetes source code).
|
kind/feature,needs-triage,sig/etcd
|
low
|
Minor
|
2,799,155,907
|
godot
|
Circular references in GDScript code may lead to leaks, UB and segmentation faults
|
### Tested versions
Current master (discovered during https://github.com/godotengine/godot/pull/100694 / https://github.com/godotengine/godot/pull/100619).
### System information
All systems are affected.
### Issue description
The GDScript parser has circular ownership (cyclic references). Through it, objects (at least `CowData`, potentially more) are accessed while they are in the midst of destructing themselves. This may lead to leaks, segmentation faults, and undefined behavior (UB).
The cyclic reference should be sanitized, because it partially relies on UB (accessing half-destructed objects) and can break seemingly innocent refactors. I ran into it when working on https://github.com/godotengine/godot/pull/100619. I managed to avoid leaks in that particular PR by papering over the issue, specifically by improving `CowData`'s robustness against misuse through https://github.com/godotengine/godot/pull/100694.
### Steps to reproduce
The objects involved in the cyclic reference are logged [here](https://github.com/godotengine/godot/actions/runs/12435300159/job/34720869143).
It is difficult for me to identify the problematic code further because I am not familiar with the GDScript parsing code.
It may be possible to identify what code accesses `CowData` while it is destructing by storing in it a boolean `is_destructing`, and if it is accessed while the boolean is true, it shall log the stack trace.
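The detection idea above can be sketched outside Godot (Python here for brevity; the real change would live in the C++ `CowData`): flag the object when destruction begins, and log a stack trace for any access made while the flag is set. The class and method names below are illustrative only.

```python
import traceback


class DebugCowData:
    """Toy stand-in for CowData: flags itself while destructing so that
    accesses made mid-destruction can be caught and traced."""

    def __init__(self, values):
        self._values = list(values)
        self.is_destructing = False
        self.violations = 0

    def get(self, i):
        if self.is_destructing:
            self.violations += 1
            traceback.print_stack()  # would identify the offending caller
        return self._values[i]

    def destruct(self, peeker=None):
        """Explicit teardown; `peeker` models code that touches the
        object while it is being torn down."""
        self.is_destructing = True
        if peeker is not None:
            peeker(self)
        self._values = None


d = DebugCowData([42])
assert d.get(0) == 42 and d.violations == 0
d.destruct(peeker=lambda obj: obj.get(0))
assert d.violations == 1  # the mid-destruction access was caught
```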
### Minimal reproduction project (MRP)
N/A
|
bug,discussion,topic:core
|
low
|
Minor
|
2,799,163,757
|
godot
|
[4.4] Shadows break rendering when 3D MSAA is enabled
|
### Tested versions
This happens since version **4.4 dev4+**, in 4.4 dev3 it does not happen
### System information
Godot v4.4.beta1 - Windows 10 (build 19045) - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated Radeon RX 560 Series (Advanced Micro Devices, Inc.; 31.0.14001.45012) - Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz (4 threads)
### Issue description
Recently I noticed a problem when enabling shadows. It is only present in the **Forward+** renderer; it does not happen in **Compatibility** or **Mobile**:
https://github.com/user-attachments/assets/5b250255-c510-415b-873c-7a73d4ead631
In some projects the shadow works sometimes, like in this example from GDQuest:
https://github.com/user-attachments/assets/ebbc349b-3aa7-4348-9b4f-2beaf80d6ab4
I also tested this with OmniLight3D, but it only happens when the light source is moving:
https://github.com/user-attachments/assets/0dce122f-11cf-41d0-bec2-77358913000a
I have a suspicion that this only happens on **AMD** graphics cards.
Similar problem that was solved earlier - #90006
### Steps to reproduce
Enable shadow on DirectionalLight3D in PSSM 4 split mode, and enable 3D MSAA.
### Minimal reproduction project (MRP)
N/A
|
bug,topic:rendering,topic:thirdparty,regression,topic:3d
|
low
|
Major
|
2,799,189,123
|
pytorch
|
DISABLED test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_grad_False (__main__.TestFxGraphCache)
|
Platforms: rocm
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_grad_False&suite=TestFxGraphCache&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/35863944744).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_grad_False`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/pytorch/test/inductor/test_codecache.py", line 146, in test_cache_load_function
self.assertEqual(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 4036, in assertEqual
raise error_metas.pop()[0].to_error( # type: ignore[index]
AssertionError: Scalars are not equal!
Expected 7 but got 14.
Absolute difference: 7
Relative difference: 1.0
To execute this test, run the following from the base repo dir:
PYTORCH_TEST_WITH_ROCM=1 python test/inductor/test_codecache.py TestFxGraphCache.test_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_grad_False
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `inductor/test_codecache.py`
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @clee2000 @wdvr @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang @aakhundov
|
module: rocm,triaged,module: flaky-tests,skipped,oncall: pt2,module: inductor
|
low
|
Critical
|
2,799,207,293
|
godot
|
Camera2D Jittering After Setting Position Smoothing Speed
|
### Tested versions
System: MacOS
Godot version: 4.3.stable
### System information
Godot v4.3.stable - macOS 15.3.0 - Vulkan (Forward+) - integrated Apple M1 Pro - Apple M1 Pro (10 Threads)
### Issue description

My camera is attached to a character2d node with this setting. It works well without any jittering by default.
However, jittering happens after I set `position_smoothing_speed` from code in-game. It even happens when the default speed is 4 and I set it to 4 again by code. I tried both setting the value of `position_smoothing_speed` directly and using the setter. The speed-setting function is as simple as this:
```gdscript
func smoothing(state: bool) -> void:
    match state:
        true:
            set_position_smoothing_speed(4.0)
        false:
            set_position_smoothing_speed(8.0)
```
I checked other issues related to physics and V-Sync; I tried both suggestions, but neither fixed the jittering.
### Steps to reproduce
Run test_scene, use up, down, left, right to move, then press Space to set the position smoothing speed to 4 again. The jittering begins.
### Minimal reproduction project (MRP)
[smooth_test.zip](https://github.com/user-attachments/files/18477655/smooth_test.zip)
|
bug,topic:2d
|
low
|
Minor
|
2,799,241,509
|
pytorch
|
getting different results when adding `torch.Tensor` or python number to a DTensor - Is that expected?
|
### 🐛 Describe the bug
```python
# torchrun --nproc-per-node 2 scripts/dtensor.py
import os
import torch
from torch.distributed.tensor import init_device_mesh, Shard, distribute_tensor
use_tensor = False
rank = int(os.getenv("RANK"))
world_size = int(os.getenv("WORLD_SIZE"))
torch.manual_seed(0)
tensor1 = torch.rand(1000, 88)
mesh = init_device_mesh("cpu", (world_size,))
norm1 = torch.linalg.vector_norm(tensor1)
norm1 += torch.tensor(2) if use_tensor else 2
print(f"{norm1}\n")
tensor2 = distribute_tensor(tensor1, mesh, [Shard(dim=0)])
norm2 = torch.linalg.vector_norm(tensor2)
norm2 += torch.tensor(2) if use_tensor else 2
print(f"{norm2.full_tensor()}\n")
```
Setting `use_tensor = False` gives different results; is that expected?
`use_tensor = True` works fine and gives the same results.
### Versions
```
Collecting environment information...
PyTorch version: 2.5.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.2 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.6)
CMake version: version 3.30.2
Libc version: N/A
Python version: 3.10.8 (main, Nov 24 2022, 08:08:27) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-15.2-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M2
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.25.2
[pip3] pytorch-lightning==2.0.1.post0
[pip3] torch==2.5.1
[pip3] torchaudio==2.0.0.dev20230302
[pip3] torchdata==0.6.1
[pip3] torchmetrics==0.11.4
[pip3] torchtext==0.15.2
[pip3] torchvision==0.19.0
[conda] numpy 1.25.2 pypi_0 pypi
[conda] pytorch-lightning 2.0.1.post0 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchaudio 2.0.0.dev20230302 pypi_0 pypi
[conda] torchdata 0.6.1 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchtext 0.15.2 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @tianyu-l @XilunWu
|
oncall: distributed,module: dtensor
|
low
|
Critical
|
2,799,259,243
|
PowerToys
|
powerRename Stuck and crashed during apply replacement
|
### Microsoft PowerToys version
0.87.1
### Installation method
GitHub
### Running as admin
No
### Area(s) with issue?
PowerRename
### Steps to reproduce

After clicking Apply, the software freezes and does not respond.
### ✔️ Expected Behavior
work
### ❌ Actual Behavior
the software freezes and does not respond
### Other Software
[PowerToysReport_2025-01-20-21-17-59.zip](https://github.com/user-attachments/files/18478008/PowerToysReport_2025-01-20-21-17-59.zip)
|
Issue-Bug,Needs-Triage
|
low
|
Critical
|
2,799,275,379
|
vscode
|
extension store show the "fail to fetch"
|
<!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
- VS Code Version: 1.96.4 (user setup)
- OS Version: Windows_NT x64 10.0.22631
Steps to Reproduce:
1. When I use my mobile's Wi-Fi, the extension store shows an "XHR error"; after I restart my computer, it shows "fail to fetch". I could use the extension store on my mobile's Wi-Fi a week ago, but since I got home it no longer works. How can I fix this?
2. By the way, the extension store works on my home Wi-Fi; it only fails on my mobile's Wi-Fi.
3. Here is the error output:
-----------------------------------------------------------------------------------------------------------
Failed to load resource: net::ERR_FAILED
workbench.desktop.main.js:sourcemap:35 ERR [network] #2: https://marketplace.visualstudio.com/_apis/public/gallery/extensionquery - error POST Failed to fetch
workbench.html:1 Access to fetch at 'https://marketplace.visualstudio.com/_apis/public/gallery/extensionquery' from origin 'vscode-file://vscode-app' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
marketplace.visualstudio.com/_apis/public/gallery/extensionquery:1
Failed to load resource: net::ERR_FAILED
workbench.desktop.main.js:sourcemap:35 ERR [network] #3: https://marketplace.visualstudio.com/_apis/public/gallery/extensionquery - error POST Failed to fetch
workbench.desktop.main.js:sourcemap:35 INFO [perf] Render performance baseline is 12ms
workbench.html:1 Access to fetch at 'https://marketplace.visualstudio.com/_apis/public/gallery/extensionquery' from origin 'vscode-file://vscode-app' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
marketplace.visualstudio.com/_apis/public/gallery/extensionquery:1
Failed to load resource: net::ERR_FAILED
workbench.desktop.main.js:sourcemap:35 ERR [network] #4: https://marketplace.visualstudio.com/_apis/public/gallery/extensionquery - error POST Failed to fetch
workbench.desktop.main.js:sourcemap:35 WARN Settings pattern "issueReporter.*" doesn't match any settings
workbench.desktop.main.js:sourcemap:35 WARN Settings pattern "application.*" doesn't match any settings
workbench.html:1 Access to fetch at 'https://marketplace.visualstudio.com/_apis/public/gallery/extensionquery' from origin 'vscode-file://vscode-app' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
marketplace.visualstudio.com/_apis/public/gallery/extensionquery:1
Failed to load resource: net::ERR_FAILED
workbench.desktop.main.js:sourcemap:35 ERR [network] #24: https://marketplace.visualstudio.com/_apis/public/gallery/extensionquery - error POST Failed to fetch
workbench.desktop.main.js:sourcemap:35 ERR Failed: Failed to fetch
at x3e.H (vscode-file://vscode-app/d:/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:1334:42653)
at async x3e.F (vscode-file://vscode-app/d:/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:1334:39602)
at async o (vscode-file://vscode-app/d:/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:1334:38567)
at async x3e.query (vscode-file://vscode-app/d:/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:1334:38853)
at async pet.queryGallery (vscode-file://vscode-app/d:/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:2430:14638)
at async Promise.all (index 0)
at async GLs.Mc (vscode-file://vscode-app/d:/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:2222:21996)
at async GLs.wc (vscode-file://vscode-app/d:/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:2222:13932)
at async vscode-file://vscode-app/d:/Microsoft%20VS%20Code/resources/app/out/vs/workbench/workbench.desktop.main.js:2222:12294
workbench.html:1 Access to fetch at 'https://marketplace.visualstudio.com/_apis/public/gallery/extensionquery' from origin 'vscode-file://vscode-app' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
marketplace.visualstudio.com/_apis/public/gallery/extensionquery:1
|
triage-needed,stale
|
low
|
Critical
|
2,799,275,434
|
PowerToys
|
Image resizer I want default to Jpg please
|
### Description of the new feature / enhancement
I have a huge PNG file that I try to resize to a JPG, but since the original is PNG, the resized output is also PNG.
It turns out you cannot choose JPG even if you want to.

### Scenario when this would be used?
I want a smaller file size and quality is not much of an issue
### Supporting information
_No response_
|
Resolution-Helped User
|
low
|
Minor
|
2,799,316,868
|
kubernetes
|
Admission controllers for kube-api response
|
### What would you like to be added?
The existing admission controllers offer the ability to alter or validate the incoming request to the kube-api server before it is written to ETCD. I would like to ask for a similar mechanism for the responses returned from the API server. I am not sure whether both mutate and validate are needed; maybe mutate alone is enough.
### Why is this needed?
Use cases:
* Encrypt data in ETCD with different keys per namespace. The requirement is similar to https://github.com/kubernetes/kubernetes/issues/129708: we want to encrypt the secrets in each namespace with a different key. A mutating webhook (using the admission controller already implemented in Kubernetes) would encrypt the data before they are stored in ETCD. When a secret is read, another mutating webhook (using the proposed admission controller) would decrypt it. The proposed webhook would run after the data are read from ETCD but before they are returned to the consumer.
* Transparent sealed secrets: there are solutions for sealed secrets like https://github.com/bitnami-labs/sealed-secrets?tab=readme-ov-file#sealedsecrets-as-templates-for-secrets which use a custom resource definition to create the sealed secret and a controller that creates a standard Kubernetes secret from the corresponding sealed secret. With this approach we have some duplication. If the proposed admission controller existed for responses, we could create a standard Kubernetes secret (with encrypted data) and a mutating webhook on the response that decrypts the data whenever someone reads them through the k8s API server.
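The proposed response hook can be sketched as a pure function over a Secret-like object: decrypt each data value after it is read from storage, before it is returned to the client. Everything here is hypothetical; `mutate_response` is an assumed shape, and `toy_decrypt` (a self-inverse XOR) merely stands in for real cryptography.

```python
import base64


def toy_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """Placeholder cipher (XOR, self-inverse) standing in for real decryption."""
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(ciphertext))


def mutate_response(secret: dict, key: bytes) -> dict:
    """Sketch of the proposed response hook: decrypt each base64-encoded
    data value before the secret is returned to the consumer."""
    out = dict(secret)
    out["data"] = {
        k: base64.b64encode(toy_decrypt(base64.b64decode(v), key)).decode()
        for k, v in secret.get("data", {}).items()
    }
    return out


key = b"k"
# A "sealed" secret whose data value is the (toy-)encrypted plaintext.
sealed = {"data": {"password": base64.b64encode(toy_decrypt(b"s3cret", key)).decode()}}
plain = mutate_response(sealed, key)
assert base64.b64decode(plain["data"]["password"]) == b"s3cret"
```

The point of the sketch is only the placement: the hook runs on the read path, so the stored object stays encrypted while every API consumer sees plaintext.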
|
sig/api-machinery,kind/feature,needs-triage
|
low
|
Minor
|
2,799,324,585
|
opencv
|
opencv 5.0 build in windows, meet problem with avif
|
### System Information
The file is invalid or corrupt: unable to read opencv_imgcodecs avif.dll at 0x348
I have tried many ways to install libavif; the error is always the same.
### Detailed description
The file is invalid or corrupt: unable to read opencv_imgcodecs avif.dll at 0x348
### Steps to reproduce
cpp
### Issue submission checklist
- [x] I report the issue, it's not a question
- [ ] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [x] I updated to the latest OpenCV version and the issue is still there
- [ ] There is reproducer code and related data files (videos, images, onnx, etc)
|
bug,duplicate,category: imgproc
|
low
|
Critical
|
2,799,346,764
|
neovim
|
completion: plugins can define completion sources
|
Part of https://github.com/neovim/neovim/issues/25670
## Problem
completion plugins like https://github.com/saghen/blink.cmp and https://github.com/hrsh7th/nvim-cmp must be configured manually; there is no way for plugins to passively provide a "completion source".
## Expected behavior
- Plugins can declare a completion source.
- The default/builtin omnicomplete merges all such sources.
- Third-party autocomplete plugins can also choose to discover and merge these sources.
- Nvim provides a stdlib function which does a "default" merge of the sources.
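The "default" merge in the last bullet could be as simple as concatenating candidates by source priority and deduplicating by word. A language-agnostic sketch (Python for brevity; the real API would be Lua, and all names below are hypothetical, since the proposal defines no API yet):

```python
def merge_sources(sources):
    """Merge completion candidates from several sources.

    Each source is (priority, items); items are dicts with at least a
    "word" key, mirroring Vim's complete-item shape. Higher-priority
    sources come first; duplicate words are dropped, keeping the first.
    """
    seen, merged = set(), []
    for _prio, items in sorted(sources, key=lambda s: -s[0]):
        for item in items:
            if item["word"] not in seen:
                seen.add(item["word"])
                merged.append(item)
    return merged


lsp = (10, [{"word": "print"}, {"word": "printf"}])   # hypothetical LSP source
buf = (1, [{"word": "print"}, {"word": "println"}])   # hypothetical buffer source
result = merge_sources([lsp, buf])
assert [i["word"] for i in result] == ["print", "printf", "println"]
```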
|
lua,completion
|
medium
|
Major
|
2,799,348,231
|
godot
|
VideoStreamPlayer rendering frames out of time (Theora video)
|
### Tested versions
Reproducible in: 4.3.stable
### System information
Debian GNU/Linux 12.9
### Issue description
I've encoded a pair of videos to illustrate a problem when rendering pictures with areas of solid colors. It might not be exclusive to this kind of content, but it is probably more noticeable here, and it becomes even more evident when trying to improve compression ratios by increasing the keyframe interval.
Even with default keyframe interval values there are still glitches in the franknstein_short.ogv video. It is fixed with a keyframe interval of 1, but then the file size goes up noticeably.
Like #66331, the issue gets worse with higher keyframe intervals but it's not the same.
### Steps to reproduce
Load the attached videos in Godot and play them. Compare them against the same videos played in another player like VLC.
The videos played in Godot freeze in some frames.
In the case of the test.ogv video, the colors white, cyan and yellow should stay while red and green should pass by quickly. Godot stays at red and green while white, cyan and yellow pass by quickly.
The franknstein_short.ogv video displays a border changing color and showing some bands. It freezes at frames that it shouldn't.
I think the issue must be the same in both videos.
### Minimal reproduction project (MRP)
https://www.mediafire.com/file/7abnxvzz54plvos/test.ogv/file
https://www.mediafire.com/file/i2w0wrt5ahi1p0g/franknstein_short.ogv/file
This is the command line used to generate test.ogv:
`ffmpeg -f lavfi -i color=white:320x240:d=1.5 -f lavfi -i color=red:320x240:d=0.1 -f lavfi -i color=cyan:320x240:d=1.5 -f lavfi -i color=green:320x240:d=0.1 -f lavfi -i color=yellow:320x240:d=3 -filter_complex "[0:v] [1:v] [2:v] [3:v] [4:v] concat=n=5:v=1 [v]" -map "[v]" -framerate 50 -codec:v libtheora -qscale:v 5 -g:v 500 test.ogv`
- *Production edit: Reupload of the above files (in case they become unavailable): [videos.zip](https://github.com/user-attachments/files/18479999/videos.zip)*
|
bug,topic:core
|
low
|
Minor
|
2,799,351,217
|
godot
|
Inconsistent spacing of groups in TileSet inspector
|
### Tested versions
4.4 beta1
### System information
W10
### Issue description
https://github.com/user-attachments/assets/35f080dd-b32d-4cd0-8b6f-08fbd03079da
All layers except Custom Data have some weird gap, which is also visible when they are expanded.
The categories should have no spacing, but the button inside should have it (Custom Data doesn't space its button properly).
### Steps to reproduce
1. Create TileSet
2. Inspect it
### Minimal reproduction project (MRP)
N/A
|
bug,topic:editor
|
low
|
Minor
|
2,799,355,866
|
tensorflow
|
Tutorial "Multi-worker training with Keras" fails to complete
|
### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
v1.12.1-120353-gc5bd67bc56f 2.19.0-dev20250107
### Custom code
No
### OS platform and distribution
Debian 6.1.123-1 (2025-01-02) x86_64 GNU/Linux
### Mobile device
_No response_
### Python version
Python 3.12.8
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Following the tutorial, everything goes well until you start the second worker. Then the failure below occurs.
2025-01-20 07:19:35.283801: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-01-20 07:19:35.290192: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1737379175.297785 4595 cuda_dnn.cc:8501] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1737379175.300054 4595 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-01-20 07:19:35.307721: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-01-20 07:19:36.510476: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2025-01-20 07:19:36.510494: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:167] env: CUDA_VISIBLE_DEVICES="-1"
2025-01-20 07:19:36.510499: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:170] CUDA_VISIBLE_DEVICES is set to -1 - this hides all GPUs from CUDA
2025-01-20 07:19:36.510501: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:178] verbose logging is disabled. Rerun with verbose logging (usually --v=1 or --vmodule=cuda_diagnostics=1) to get more diagnostic output from this module
2025-01-20 07:19:36.510505: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:183] retrieving CUDA diagnostic information for host: michael
2025-01-20 07:19:36.510507: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:190] hostname: michael
2025-01-20 07:19:36.510562: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:197] libcuda reported version is: 565.77.0
2025-01-20 07:19:36.510572: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:201] kernel reported version is: 565.77.0
2025-01-20 07:19:36.510574: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:291] kernel version seems to match DSO: 565.77.0
2025-01-20 07:19:36.519175: I external/local_xla/xla/tsl/distributed_runtime/coordination/coordination_service.cc:637] Initializing CoordinationService
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1737379176.519611 4595 grpc_server_lib.cc:465] Started server with target: grpc://localhost:12345
2025-01-20 07:19:36.524874: I external/local_xla/xla/tsl/distributed_runtime/coordination/coordination_service.cc:378] /job:worker/replica:0/task:0 has connected to coordination service. Incarnation: 4677280066871850635
2025-01-20 07:19:36.524894: I external/local_xla/xla/tsl/distributed_runtime/coordination/coordination_service.cc:816] Waiting for 1/2 tasks to connect.
2025-01-20 07:19:36.524898: I external/local_xla/xla/tsl/distributed_runtime/coordination/coordination_service.cc:819] Example stragglers:
/job:worker/replica:0/task:1
I0000 00:00:1737379176.525022 4595 coordination_service_agent.cc:369] Coordination agent has successfully connected.
2025-01-20 07:22:27.996664: I external/local_xla/xla/tsl/distributed_runtime/coordination/coordination_service.cc:378] /job:worker/replica:0/task:1 has connected to coordination service. Incarnation: 13530699364709055870
2025-01-20 07:22:27.996686: I external/local_xla/xla/tsl/distributed_runtime/coordination/coordination_service.cc:816] Waiting for 0/2 tasks to connect.
/home/chad/anaconda3/lib/python3.12/site-packages/keras/src/layers/core/input_layer.py:27: UserWarning: Argument `input_shape` is deprecated. Use `shape` instead.
warnings.warn(
2025-01-20 07:22:28.461733: W tensorflow/core/framework/dataset.cc:993] Input of GeneratorDatasetOp::Dataset will not be optimized because the dataset does not implement the AsGraphDefInternal() method needed to apply optimizations.
Traceback (most recent call last):
File "/home/chad/Documents/McCueFiles/NeuralNetworks/TensorFlowProject/TensorFlowDocExample/main.py", line 21, in <module>
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
File "/home/chad/anaconda3/lib/python3.12/site-packages/keras/src/utils/traceback_utils.py", line 122, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/chad/anaconda3/lib/python3.12/site-packages/tensorflow/python/framework/constant_op.py", line 108, in convert_to_eager_tensor
return ops.EagerTensor(value, ctx.device_name, dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: Attempt to convert a value (PerReplica:{
0: <tf.Tensor: shape=(64, 28, 28), dtype=float32, numpy=
array([[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]],
[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]],
[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]],
...,
[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]],
[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]],
[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]]], dtype=float32)>
}) with an unsupported type (<class 'tensorflow.python.distribute.values.PerReplica'>) to a Tensor.
### Standalone code to reproduce the issue
```shell
python main.py &> job_1.log
```
### Relevant log output
```shell
2025-01-20 07:19:35.283801: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-01-20 07:19:35.290192: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1737379175.297785 4595 cuda_dnn.cc:8501] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1737379175.300054 4595 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-01-20 07:19:35.307721: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-01-20 07:19:36.510476: E external/local_xla/xla/stream_executor/cuda/cuda_platform.cc:51] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2025-01-20 07:19:36.510494: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:167] env: CUDA_VISIBLE_DEVICES="-1"
2025-01-20 07:19:36.510499: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:170] CUDA_VISIBLE_DEVICES is set to -1 - this hides all GPUs from CUDA
2025-01-20 07:19:36.510501: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:178] verbose logging is disabled. Rerun with verbose logging (usually --v=1 or --vmodule=cuda_diagnostics=1) to get more diagnostic output from this module
2025-01-20 07:19:36.510505: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:183] retrieving CUDA diagnostic information for host: michael
2025-01-20 07:19:36.510507: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:190] hostname: michael
2025-01-20 07:19:36.510562: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:197] libcuda reported version is: 565.77.0
2025-01-20 07:19:36.510572: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:201] kernel reported version is: 565.77.0
2025-01-20 07:19:36.510574: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:291] kernel version seems to match DSO: 565.77.0
2025-01-20 07:19:36.519175: I external/local_xla/xla/tsl/distributed_runtime/coordination/coordination_service.cc:637] Initializing CoordinationService
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
I0000 00:00:1737379176.519611 4595 grpc_server_lib.cc:465] Started server with target: grpc://localhost:12345
2025-01-20 07:19:36.524874: I external/local_xla/xla/tsl/distributed_runtime/coordination/coordination_service.cc:378] /job:worker/replica:0/task:0 has connected to coordination service. Incarnation: 4677280066871850635
2025-01-20 07:19:36.524894: I external/local_xla/xla/tsl/distributed_runtime/coordination/coordination_service.cc:816] Waiting for 1/2 tasks to connect.
2025-01-20 07:19:36.524898: I external/local_xla/xla/tsl/distributed_runtime/coordination/coordination_service.cc:819] Example stragglers:
/job:worker/replica:0/task:1
I0000 00:00:1737379176.525022 4595 coordination_service_agent.cc:369] Coordination agent has successfully connected.
2025-01-20 07:22:27.996664: I external/local_xla/xla/tsl/distributed_runtime/coordination/coordination_service.cc:378] /job:worker/replica:0/task:1 has connected to coordination service. Incarnation: 13530699364709055870
2025-01-20 07:22:27.996686: I external/local_xla/xla/tsl/distributed_runtime/coordination/coordination_service.cc:816] Waiting for 0/2 tasks to connect.
/home/chad/anaconda3/lib/python3.12/site-packages/keras/src/layers/core/input_layer.py:27: UserWarning: Argument `input_shape` is deprecated. Use `shape` instead.
warnings.warn(
2025-01-20 07:22:28.461733: W tensorflow/core/framework/dataset.cc:993] Input of GeneratorDatasetOp::Dataset will not be optimized because the dataset does not implement the AsGraphDefInternal() method needed to apply optimizations.
Traceback (most recent call last):
File "/home/chad/Documents/McCueFiles/NeuralNetworks/TensorFlowProject/TensorFlowDocExample/main.py", line 21, in <module>
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
File "/home/chad/anaconda3/lib/python3.12/site-packages/keras/src/utils/traceback_utils.py", line 122, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/chad/anaconda3/lib/python3.12/site-packages/tensorflow/python/framework/constant_op.py", line 108, in convert_to_eager_tensor
return ops.EagerTensor(value, ctx.device_name, dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: Attempt to convert a value (PerReplica:{
0: <tf.Tensor: shape=(64, 28, 28), dtype=float32, numpy=
array([[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]],
[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]],
[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]],
...,
[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]],
[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]],
[[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
...,
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.],
[0., 0., 0., ..., 0., 0., 0.]]], dtype=float32)>
}) with an unsupported type (<class 'tensorflow.python.distribute.values.PerReplica'>) to a Tensor.
```
|
type:bug,TF 2.18
|
medium
|
Critical
|
2,799,382,763
|
pytorch
|
Regression in the compilation of the torch.all operation in PyTorch version 2.6.0 compared to 2.5.1
|
### 🐛 Describe the bug
There is an issue with tracing after upgrading from PyTorch 2.5.1 to 2.6.0. It appears to be a regression in compiling the torch.all operation when an `out=` tensor is supplied.
In PyTorch 2.5.1, the code below compiles without any graph breaks:
```python
import torch
@torch.compile(backend="inductor")
def compiled_fn(input_tensor: torch.Tensor):
output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device)
result = torch.all(input_tensor, dim=2, out=output_tensor)
return result
if __name__ == "__main__":
input_tensor = torch.randint(0, 2, (2, 3, 4), dtype=torch.bool, device="cpu")
output = compiled_fn(input_tensor)
```
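In eager mode, the `out=` call above silently resizes the zero-sized tensor to the reduction's output shape; this resizing of a graph input is what the 2.6.0 break reason refers to. A small eager-only sketch (no `torch.compile` involved) illustrating the behavior:

```python
import torch

inp = torch.randint(0, 2, (2, 3, 4), dtype=torch.bool)
out = torch.empty((0,), dtype=torch.bool)

# torch.all resizes the zero-sized `out` in place to the reduced shape (2, 3).
result = torch.all(inp, dim=2, out=out)

assert out.shape == torch.Size([2, 3])
assert result.data_ptr() == out.data_ptr()  # the out variant returns `out` itself
assert torch.equal(result, torch.all(inp, dim=2))
```

If the resizing is the blocker, a possible workaround (hedged, since whether 2.6.0 should break here is exactly what this report questions) is to drop the `out=` argument, or pre-allocate `out` with the final `(2, 3)` shape so no resize is needed during tracing.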
The code compiles to the following FX graph in PyTorch 2.5.1:
```
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] TRACED GRAPH
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] ===== __compiled_fn_1 =====
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] /home/user1/venv1/lib/python3.10/site-packages/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] def forward(self, L_input_tensor_: "b8[2, 3, 4][12, 4, 1]cpu"):
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] l_input_tensor_ = L_input_tensor_
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code]
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] # File: tests/compile/test_all.py:5 in compiled_fn, code: output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device)
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] empty: "b8[2, 3][3, 1]cpu" = torch.empty((0,), dtype = torch.bool)
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] output_tensor: "b8[2, 3][3, 1]cpu" = empty.to(device(type='cpu')); empty = None
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code]
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] # File: tests/compile/test_all.py:6 in compiled_fn, code: result = torch.all(input_tensor, dim=2, out=output_tensor)
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] result: "b8[2, 3][3, 1]cpu" = torch.all(l_input_tensor_, dim = 2, out = output_tensor); l_input_tensor_ = output_tensor = None
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] return (result,)
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code]
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code]
```
However, after upgrading to PyTorch 2.6.0, the same code hits a graph break (`out variants with resizing on graph inputs`) and only the `torch.empty` prologue is compiled:
```
V0120 14:57:46.684000 74548 torch/_dynamo/output_graph.py:972] [0/0_1] COMPILING GRAPH due to GraphCompileReason(reason='out variants with resizing on graph inputs', user_stack=[<FrameSummary file tests/compile/test_all.py, line 6 in compiled_fn>], graph_break=True)
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1615] [0/0_1] REMOVE UNUSED GRAPHARG L['input_tensor']
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] TRACED GRAPH
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] ===== __compiled_fn_2 =====
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] /home/user1/venv1/lib/python3.10/site-packages/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] def forward(self):
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] # File: tests/compile/test_all.py:5 in compiled_fn, code: output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device)
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] empty: "b8[0][1]cpu" = torch.empty((0,), dtype = torch.bool)
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] output_tensor: "b8[0][1]cpu" = empty.to(device(type='cpu')); empty = None
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] return (output_tensor,)
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code]
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code]
```
Please investigate this regression.
Full logs 2.5.1:
```
V0120 14:51:10.919000 72022 torch/_dynamo/convert_frame.py:864] [0/0] torchdynamo start compiling compiled_fn tests/compile/test_all.py:3, stack (elided 5 frames):
V0120 14:51:10.919000 72022 torch/_dynamo/convert_frame.py:864] [0/0] File "tests/compile/test_all.py", line 14, in <module>
V0120 14:51:10.919000 72022 torch/_dynamo/convert_frame.py:864] [0/0] output = compiled_fn(input_tensor)
V0120 14:51:10.919000 72022 torch/_dynamo/convert_frame.py:864] [0/0]
I0120 14:51:10.920000 72022 torch/_dynamo/utils.py:859] [0/0] ChromiumEventLogger initialized with id 11952b32-9bff-4a1f-ae82-08757a4285ab
I0120 14:51:10.921000 72022 torch/_dynamo/logging.py:57] [0/0] Step 1: torchdynamo start tracing compiled_fn tests/compile/test_all.py:3
V0120 14:51:10.922000 72022 torch/fx/experimental/symbolic_shapes.py:2498] [0/0] create_env
V0120 14:51:10.939000 72022 torch/_dynamo/symbolic_convert.py:865] [0/0] [__trace_source] TRACE starts_line tests/compile/test_all.py:5 in compiled_fn (compiled_fn)
V0120 14:51:10.939000 72022 torch/_dynamo/symbolic_convert.py:865] [0/0] [__trace_source] output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device)
V0120 14:51:10.940000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_GLOBAL torch []
V0120 14:51:10.941000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_ATTR empty [PythonModuleVariable(<module 'torch' from '/home/user1/venv1/lib/python3.10/site-packages/torch/__init__.py'>)]
V0120 14:51:10.942000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_CONST (0,) [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f188888aa20>)]
V0120 14:51:10.942000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_GLOBAL torch [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f188888aa20>), TupleVariable(length=1)]
V0120 14:51:10.943000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_ATTR bool [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f188888aa20>), TupleVariable(length=1), PythonModuleVariable(<module 'torch' from '/home/user1/venv1/lib/python3.10/site-packages/torch/__init__.py'>)]
V0120 14:51:10.944000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_CONST ('dtype',) [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f188888aa20>), TupleVariable(length=1), ConstantVariable()]
V0120 14:51:10.944000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION_KW 2 [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f188888aa20>), TupleVariable(length=1), ConstantVariable(), TupleVariable(length=1)]
V0120 14:51:10.947000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_ATTR to [TensorVariable()]
V0120 14:51:10.947000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_FAST input_tensor [GetAttrVariable()]
V0120 14:51:10.948000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_ATTR device [GetAttrVariable(), LazyVariableTracker()]
V0120 14:51:10.948000 72022 torch/_dynamo/output_graph.py:2107] [0/0] create_graph_input L_input_tensor_ L['input_tensor']
V0120 14:51:10.949000 72022 torch/_dynamo/variables/builder.py:2702] [0/0] wrap_to_fake L['input_tensor'] (2, 3, 4) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], dynamic_strides=[<DimDynamic.INFER_STRIDE: 4>, <DimDynamic.INFER_STRIDE: 4>, <DimDynamic.INFER_STRIDE: 4>], constraint_sizes=[None, None, None], constraint_strides=[None, None, None], view_base_context=None, tensor_source=LocalSource(local_name='input_tensor', cell_or_freevar=False), shape_env_to_source_to_symbol_cache={}) <class 'torch.Tensor'>
V0120 14:51:10.951000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION 1 [GetAttrVariable(), ConstantVariable()]
V0120 14:51:10.952000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE STORE_FAST output_tensor [TensorVariable()]
V0120 14:51:10.953000 72022 torch/_dynamo/symbolic_convert.py:865] [0/0] [__trace_source] TRACE starts_line tests/compile/test_all.py:6 in compiled_fn (compiled_fn)
V0120 14:51:10.953000 72022 torch/_dynamo/symbolic_convert.py:865] [0/0] [__trace_source] result = torch.all(input_tensor, dim=2, out=output_tensor)
V0120 14:51:10.953000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_GLOBAL torch []
V0120 14:51:10.953000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_ATTR all [PythonModuleVariable(<module 'torch' from '/home/user1/venv1/lib/python3.10/site-packages/torch/__init__.py'>)]
V0120 14:51:10.954000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_FAST input_tensor [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f188888aa20>)]
V0120 14:51:10.954000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_CONST 2 [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f188888aa20>), TensorVariable()]
V0120 14:51:10.955000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_FAST output_tensor [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f188888aa20>), TensorVariable(), ConstantVariable()]
V0120 14:51:10.955000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_CONST ('dim', 'out') [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f188888aa20>), TensorVariable(), ConstantVariable(), TensorVariable()]
V0120 14:51:10.956000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION_KW 3 [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f188888aa20>), TensorVariable(), ConstantVariable(), TensorVariable(), TupleVariable(length=2)]
V0120 14:51:10.959000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE STORE_FAST result [TensorVariable()]
V0120 14:51:10.960000 72022 torch/_dynamo/symbolic_convert.py:865] [0/0] [__trace_source] TRACE starts_line tests/compile/test_all.py:7 in compiled_fn (compiled_fn)
V0120 14:51:10.960000 72022 torch/_dynamo/symbolic_convert.py:865] [0/0] [__trace_source] return result
V0120 14:51:10.960000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE LOAD_FAST result []
V0120 14:51:10.960000 72022 torch/_dynamo/symbolic_convert.py:888] [0/0] [__trace_bytecode] TRACE RETURN_VALUE None [TensorVariable()]
I0120 14:51:10.961000 72022 torch/_dynamo/logging.py:57] [0/0] Step 1: torchdynamo done tracing compiled_fn (RETURN_VALUE)
V0120 14:51:10.961000 72022 torch/_dynamo/symbolic_convert.py:2971] [0/0] RETURN_VALUE triggered compile
V0120 14:51:10.961000 72022 torch/_dynamo/output_graph.py:1004] [0/0] COMPILING GRAPH due to GraphCompileReason(reason='return_value', user_stack=[<FrameSummary file tests/compile/test_all.py, line 7 in compiled_fn>], graph_break=False)
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] TRACED GRAPH
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] ===== __compiled_fn_1 =====
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] /home/user1/venv1/lib/python3.10/site-packages/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] def forward(self, L_input_tensor_: "b8[2, 3, 4][12, 4, 1]cpu"):
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] l_input_tensor_ = L_input_tensor_
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code]
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] # File: tests/compile/test_all.py:5 in compiled_fn, code: output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device)
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] empty: "b8[2, 3][3, 1]cpu" = torch.empty((0,), dtype = torch.bool)
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] output_tensor: "b8[2, 3][3, 1]cpu" = empty.to(device(type='cpu')); empty = None
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code]
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] # File: tests/compile/test_all.py:6 in compiled_fn, code: result = torch.all(input_tensor, dim=2, out=output_tensor)
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] result: "b8[2, 3][3, 1]cpu" = torch.all(l_input_tensor_, dim = 2, out = output_tensor); l_input_tensor_ = output_tensor = None
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code] return (result,)
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code]
V0120 14:51:10.966000 72022 torch/_dynamo/output_graph.py:1371] [0/0] [__graph_code]
I0120 14:51:10.968000 72022 torch/_dynamo/logging.py:57] [0/0] Step 2: calling compiler function inductor
V0120 14:51:12.792000 72022 torch/fx/experimental/symbolic_shapes.py:5201] [0/0] eval True == True [statically known]
I0120 14:51:22.070000 72022 torch/fx/experimental/symbolic_shapes.py:3646] [0/0] produce_guards
W0120 14:51:22.072000 72022 torch/_inductor/debug.py:434] [0/0] model__0_inference_0 debug trace: /home/user1/qnpu/env_name/src/torch_compile_debug/run_2025_01_20_14_51_10_921557-pid_72022/torchinductor/model__0_inference_0.0
I0120 14:51:22.076000 72022 torch/_dynamo/logging.py:57] [0/0] Step 2: done compiler function inductor
I0120 14:51:22.080000 72022 torch/fx/experimental/symbolic_shapes.py:3646] [0/0] produce_guards
V0120 14:51:22.080000 72022 torch/fx/experimental/symbolic_shapes.py:3830] [0/0] track_symint L['input_tensor'].size()[0] 2 None
V0120 14:51:22.081000 72022 torch/fx/experimental/symbolic_shapes.py:3830] [0/0] track_symint L['input_tensor'].size()[1] 3 None
V0120 14:51:22.081000 72022 torch/fx/experimental/symbolic_shapes.py:3830] [0/0] track_symint L['input_tensor'].size()[2] 4 None
V0120 14:51:22.081000 72022 torch/fx/experimental/symbolic_shapes.py:3830] [0/0] track_symint L['input_tensor'].stride()[0] 12 None
V0120 14:51:22.082000 72022 torch/fx/experimental/symbolic_shapes.py:3830] [0/0] track_symint L['input_tensor'].stride()[1] 4 None
V0120 14:51:22.082000 72022 torch/fx/experimental/symbolic_shapes.py:3830] [0/0] track_symint L['input_tensor'].stride()[2] 1 None
V0120 14:51:22.082000 72022 torch/fx/experimental/symbolic_shapes.py:3830] [0/0] track_symint L['input_tensor'].storage_offset() 0 None
V0120 14:51:22.083000 72022 torch/fx/experimental/symbolic_shapes.py:3998] [0/0] Skipping guard L['input_tensor'].size()[0] == 2
V0120 14:51:22.083000 72022 torch/fx/experimental/symbolic_shapes.py:3998] [0/0] Skipping guard L['input_tensor'].size()[1] == 3
V0120 14:51:22.084000 72022 torch/fx/experimental/symbolic_shapes.py:3998] [0/0] Skipping guard L['input_tensor'].size()[2] == 4
V0120 14:51:22.084000 72022 torch/fx/experimental/symbolic_shapes.py:3998] [0/0] Skipping guard L['input_tensor'].stride()[0] == 12
V0120 14:51:22.085000 72022 torch/fx/experimental/symbolic_shapes.py:3998] [0/0] Skipping guard L['input_tensor'].stride()[1] == 4
V0120 14:51:22.085000 72022 torch/fx/experimental/symbolic_shapes.py:3998] [0/0] Skipping guard L['input_tensor'].stride()[2] == 1
V0120 14:51:22.085000 72022 torch/fx/experimental/symbolic_shapes.py:3998] [0/0] Skipping guard L['input_tensor'].storage_offset() == 0
V0120 14:51:22.086000 72022 torch/_dynamo/guards.py:2314] [0/0] [__guards] GUARDS:
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards]
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] TREE_GUARD_MANAGER:
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] +- RootGuardManager
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | +- DEFAULT_DEVICE: utils_device.CURRENT_DEVICE == None # _dynamo/output_graph.py:471 in init_ambient_guards
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | +- GLOBAL_STATE: ___check_global_state()
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | +- TORCH_FUNCTION_MODE_STACK: ___check_torch_function_mode_stack()
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | +- GuardManager: source=L['input_tensor'], accessed_by=DictGetItemGuardAccessor(input_tensor)
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | | +- TENSOR_MATCH: check_tensor(L['input_tensor'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.bool, device=None, requires_grad=False, size=[2, 3, 4], stride=[12, 4, 1]) # output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:5 in compiled_fn
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | | +- NO_HASATTR: hasattr(L['input_tensor'], '_dynamo_dynamic_indices') == False # output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:5 in compiled_fn
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | +- GuardManager: source=G, accessed_by=GlobalsGuardAccessor
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | | +- GuardManager: source=G['torch'], accessed_by=DictGetItemGuardAccessor(torch)
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | | | +- ID_MATCH: ___check_obj_id(G['torch'], 139743351173376) # output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:5 in compiled_fn
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | | | +- GuardManager: source=G['torch'].all, accessed_by=GetAttrGuardAccessor(all)
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | | | | +- ID_MATCH: ___check_obj_id(G['torch'].all, 139743348124352) # result = torch.all(input_tensor, dim=2, out=output_tensor) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:6 in compiled_fn
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | | | +- GuardManager: source=G['torch'].bool, accessed_by=GetAttrGuardAccessor(bool)
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | | | | +- EQUALS_MATCH: G['torch'].bool == torch.bool # output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:5 in compiled_fn
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | | | +- GuardManager: source=G['torch'].empty, accessed_by=GetAttrGuardAccessor(empty)
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards] | | | | +- ID_MATCH: ___check_obj_id(G['torch'].empty, 139743348128512) # output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:5 in compiled_fn
V0120 14:51:22.087000 72022 torch/_dynamo/guards.py:2280] [0/0] [__guards]
V0120 14:51:22.088000 72022 torch/_dynamo/convert_frame.py:1234] skipping: _fn (reason: in skipfiles, file: /home/user1/venv1/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py)
V0120 14:51:22.089000 72022 torch/_dynamo/convert_frame.py:1234] skipping: _maybe_set_eval_frame (reason: in skipfiles, file: /home/user1/venv1/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py)
V0120 14:51:22.089000 72022 torch/_dynamo/convert_frame.py:1234] skipping: justknobs_check (reason: in skipfiles, file: /home/user1/venv1/lib/python3.10/site-packages/torch/_utils_internal.py)
```
Full logs 2.6.0:
```
V0120 14:57:46.629000 74548 torch/_dynamo/convert_frame.py:1345] skipping: _is_skip_guard_eval_unsafe_stance (reason: in skipfiles, file: /home/user1/venv1/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py)
I0120 14:57:46.631000 74548 torch/_dynamo/utils.py:1162] [0/0] ChromiumEventLogger initialized with id 9bec8ac0-9067-4f58-ba32-04edd2949f59
V0120 14:57:46.632000 74548 torch/_dynamo/convert_frame.py:930] [0/0] torchdynamo start compiling compiled_fn tests/compile/test_all.py:3, stack (elided 5 frames):
V0120 14:57:46.632000 74548 torch/_dynamo/convert_frame.py:930] [0/0] File "tests/compile/test_all.py", line 14, in <module>
V0120 14:57:46.632000 74548 torch/_dynamo/convert_frame.py:930] [0/0] output = compiled_fn(input_tensor)
V0120 14:57:46.632000 74548 torch/_dynamo/convert_frame.py:930] [0/0]
I0120 14:57:46.633000 74548 torch/_dynamo/symbolic_convert.py:2706] [0/0] Step 1: torchdynamo start tracing compiled_fn tests/compile/test_all.py:3
I0120 14:57:46.634000 74548 torch/fx/experimental/symbolic_shapes.py:3192] [0/0] create_env
V0120 14:57:46.637000 74548 torch/_dynamo/symbolic_convert.py:932] [0/0] [__trace_source] TRACE starts_line tests/compile/test_all.py:5 in compiled_fn (compiled_fn)
V0120 14:57:46.637000 74548 torch/_dynamo/symbolic_convert.py:932] [0/0] [__trace_source] output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device)
V0120 14:57:46.638000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_GLOBAL torch []
V0120 14:57:46.640000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_ATTR empty [PythonModuleVariable(<module 'torch' from '/home/user1/venv1/lib/python3.10/site-packages/torch/__init__.py'>)]
V0120 14:57:46.641000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_CONST (0,) [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f144a228020>)]
V0120 14:57:46.642000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_GLOBAL torch [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f144a228020>), TupleVariable(length=1)]
V0120 14:57:46.642000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_ATTR bool [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f144a228020>), TupleVariable(length=1), PythonModuleVariable(<module 'torch' from '/home/user1/venv1/lib/python3.10/site-packages/torch/__init__.py'>)]
V0120 14:57:46.643000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_CONST ('dtype',) [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f144a228020>), TupleVariable(length=1), ConstantVariable(dtype: torch.bool)]
V0120 14:57:46.643000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION_KW 2 [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f144a228020>), TupleVariable(length=1), ConstantVariable(dtype: torch.bool), TupleVariable(length=1)]
V0120 14:57:46.655000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_ATTR to [TensorVariable()]
V0120 14:57:46.655000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_FAST input_tensor [GetAttrVariable(TensorVariable(), to)]
V0120 14:57:46.656000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_ATTR device [GetAttrVariable(TensorVariable(), to), LazyVariableTracker()]
V0120 14:57:46.656000 74548 torch/_dynamo/variables/builder.py:2853] [0/0] wrap_to_fake L['input_tensor'] (2, 3, 4) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], dynamic_strides=[<DimDynamic.INFER_STRIDE: 4>, <DimDynamic.INFER_STRIDE: 4>, <DimDynamic.INFER_STRIDE: 4>], constraint_sizes=[None, None, None], constraint_strides=[None, None, None], view_base_context=None, tensor_source=LocalSource(local_name='input_tensor', is_input=True, is_derefed_cell_contents=False), shape_env_to_source_to_symbol_cache={}) <class 'torch.Tensor'>
V0120 14:57:46.658000 74548 torch/_dynamo/output_graph.py:2156] [0/0] create_graph_input L_input_tensor_ L['input_tensor'] FakeTensor(..., size=(2, 3, 4), dtype=torch.bool) at debug_level 0 before=False
V0120 14:57:46.659000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION 1 [GetAttrVariable(TensorVariable(), to), ConstantVariable(device: device(type='cpu'))]
V0120 14:57:46.660000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE STORE_FAST output_tensor [TensorVariable()]
V0120 14:57:46.661000 74548 torch/_dynamo/symbolic_convert.py:932] [0/0] [__trace_source] TRACE starts_line tests/compile/test_all.py:6 in compiled_fn (compiled_fn)
V0120 14:57:46.661000 74548 torch/_dynamo/symbolic_convert.py:932] [0/0] [__trace_source] result = torch.all(input_tensor, dim=2, out=output_tensor)
V0120 14:57:46.661000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_GLOBAL torch []
V0120 14:57:46.662000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_ATTR all [PythonModuleVariable(<module 'torch' from '/home/user1/venv1/lib/python3.10/site-packages/torch/__init__.py'>)]
V0120 14:57:46.662000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_FAST input_tensor [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f144a228020>)]
V0120 14:57:46.663000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_CONST 2 [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f144a228020>), TensorVariable()]
V0120 14:57:46.663000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_FAST output_tensor [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f144a228020>), TensorVariable(), ConstantVariable(int: 2)]
V0120 14:57:46.664000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE LOAD_CONST ('dim', 'out') [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f144a228020>), TensorVariable(), ConstantVariable(int: 2), TensorVariable()]
V0120 14:57:46.664000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION_KW 3 [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f144a228020>), TensorVariable(), ConstantVariable(int: 2), TensorVariable(), TupleVariable(length=2)]
V0120 14:57:46.668000 74548 torch/_dynamo/symbolic_convert.py:435] [0/0] [__graph_breaks] Graph break in user code at tests/compile/test_all.py:6
V0120 14:57:46.668000 74548 torch/_dynamo/symbolic_convert.py:435] [0/0] [__graph_breaks] Reason: Unsupported: out variants with resizing on graph inputs
V0120 14:57:46.668000 74548 torch/_dynamo/symbolic_convert.py:435] [0/0] [__graph_breaks] User code traceback:
V0120 14:57:46.668000 74548 torch/_dynamo/symbolic_convert.py:435] [0/0] [__graph_breaks] File "tests/compile/test_all.py", line 6, in compiled_fn
V0120 14:57:46.668000 74548 torch/_dynamo/symbolic_convert.py:435] [0/0] [__graph_breaks] result = torch.all(input_tensor, dim=2, out=output_tensor)
V0120 14:57:46.668000 74548 torch/_dynamo/symbolic_convert.py:435] [0/0] [__graph_breaks]
I0120 14:57:46.668000 74548 torch/_dynamo/convert_frame.py:755] [0/0] Restarting analysis due to _dynamo/symbolic_convert.py:161 in fail_and_restart_analysis
I0120 14:57:46.669000 74548 torch/_dynamo/symbolic_convert.py:2706] [0/0_1] Step 1: torchdynamo start tracing compiled_fn tests/compile/test_all.py:3
I0120 14:57:46.670000 74548 torch/fx/experimental/symbolic_shapes.py:3192] [0/0_1] create_env
V0120 14:57:46.671000 74548 torch/_dynamo/symbolic_convert.py:932] [0/0_1] [__trace_source] TRACE starts_line tests/compile/test_all.py:5 in compiled_fn (compiled_fn)
V0120 14:57:46.671000 74548 torch/_dynamo/symbolic_convert.py:932] [0/0_1] [__trace_source] output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device)
V0120 14:57:46.671000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_GLOBAL torch []
V0120 14:57:46.672000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_ATTR empty [PythonModuleVariable(<module 'torch' from '/home/user1/venv1/lib/python3.10/site-packages/torch/__init__.py'>)]
V0120 14:57:46.672000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_CONST (0,) [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f144a228020>)]
V0120 14:57:46.673000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_GLOBAL torch [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f144a228020>), TupleVariable(length=1)]
V0120 14:57:46.673000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_ATTR bool [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f144a228020>), TupleVariable(length=1), PythonModuleVariable(<module 'torch' from '/home/user1/venv1/lib/python3.10/site-packages/torch/__init__.py'>)]
V0120 14:57:46.674000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_CONST ('dtype',) [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f144a228020>), TupleVariable(length=1), ConstantVariable(dtype: torch.bool)]
V0120 14:57:46.674000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE CALL_FUNCTION_KW 2 [TorchInGraphFunctionVariable(<built-in method empty of type object at 0x7f144a228020>), TupleVariable(length=1), ConstantVariable(dtype: torch.bool), TupleVariable(length=1)]
V0120 14:57:46.675000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_ATTR to [TensorVariable()]
V0120 14:57:46.676000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_FAST input_tensor [GetAttrVariable(TensorVariable(), to)]
V0120 14:57:46.676000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_ATTR device [GetAttrVariable(TensorVariable(), to), LazyVariableTracker()]
V0120 14:57:46.677000 74548 torch/_dynamo/variables/builder.py:2853] [0/0_1] wrap_to_fake L['input_tensor'] (2, 3, 4) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], dynamic_strides=[<DimDynamic.INFER_STRIDE: 4>, <DimDynamic.INFER_STRIDE: 4>, <DimDynamic.INFER_STRIDE: 4>], constraint_sizes=[None, None, None], constraint_strides=[None, None, None], view_base_context=None, tensor_source=LocalSource(local_name='input_tensor', is_input=True, is_derefed_cell_contents=False), shape_env_to_source_to_symbol_cache={}) <class 'torch.Tensor'>
V0120 14:57:46.678000 74548 torch/_dynamo/output_graph.py:2156] [0/0_1] create_graph_input L_input_tensor_ L['input_tensor'] FakeTensor(..., size=(2, 3, 4), dtype=torch.bool) at debug_level 0 before=False
V0120 14:57:46.679000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE CALL_FUNCTION 1 [GetAttrVariable(TensorVariable(), to), ConstantVariable(device: device(type='cpu'))]
V0120 14:57:46.680000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE STORE_FAST output_tensor [TensorVariable()]
V0120 14:57:46.681000 74548 torch/_dynamo/symbolic_convert.py:932] [0/0_1] [__trace_source] TRACE starts_line tests/compile/test_all.py:6 in compiled_fn (compiled_fn)
V0120 14:57:46.681000 74548 torch/_dynamo/symbolic_convert.py:932] [0/0_1] [__trace_source] result = torch.all(input_tensor, dim=2, out=output_tensor)
V0120 14:57:46.681000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_GLOBAL torch []
V0120 14:57:46.681000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_ATTR all [PythonModuleVariable(<module 'torch' from '/home/user1/venv1/lib/python3.10/site-packages/torch/__init__.py'>)]
V0120 14:57:46.682000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_FAST input_tensor [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f144a228020>)]
V0120 14:57:46.682000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_CONST 2 [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f144a228020>), TensorVariable()]
V0120 14:57:46.683000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_FAST output_tensor [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f144a228020>), TensorVariable(), ConstantVariable(int: 2)]
V0120 14:57:46.683000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE LOAD_CONST ('dim', 'out') [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f144a228020>), TensorVariable(), ConstantVariable(int: 2), TensorVariable()]
V0120 14:57:46.684000 74548 torch/_dynamo/symbolic_convert.py:955] [0/0_1] [__trace_bytecode] TRACE CALL_FUNCTION_KW 3 [TorchInGraphFunctionVariable(<built-in method all of type object at 0x7f144a228020>), TensorVariable(), ConstantVariable(int: 2), TensorVariable(), TupleVariable(length=2)]
V0120 14:57:46.684000 74548 torch/_dynamo/output_graph.py:972] [0/0_1] COMPILING GRAPH due to GraphCompileReason(reason='out variants with resizing on graph inputs', user_stack=[<FrameSummary file tests/compile/test_all.py, line 6 in compiled_fn>], graph_break=True)
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1615] [0/0_1] REMOVE UNUSED GRAPHARG L['input_tensor']
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] TRACED GRAPH
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] ===== __compiled_fn_2 =====
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] /home/user1/venv1/lib/python3.10/site-packages/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] def forward(self):
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] # File: tests/compile/test_all.py:5 in compiled_fn, code: output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device)
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] empty: "b8[0][1]cpu" = torch.empty((0,), dtype = torch.bool)
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] output_tensor: "b8[0][1]cpu" = empty.to(device(type='cpu')); empty = None
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code] return (output_tensor,)
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code]
V0120 14:57:46.689000 74548 torch/_dynamo/output_graph.py:1353] [0/0_1] [__graph_code]
I0120 14:57:46.691000 74548 torch/_dynamo/output_graph.py:1458] [0/0_1] Step 2: calling compiler function inductor
W0120 14:57:48.602000 74548 torch/_inductor/debug.py:435] [0/0_1] model__0_inference_0 debug trace: /home/user1/qnpu/env_name/src/torch_compile_debug/run_2025_01_20_14_57_46_633319-pid_74548/torchinductor/model__0_inference_0.0
I0120 14:57:48.606000 74548 torch/_dynamo/output_graph.py:1463] [0/0_1] Step 2: done compiler function inductor
I0120 14:57:48.611000 74548 torch/fx/experimental/symbolic_shapes.py:4547] [0/0_1] produce_guards
V0120 14:57:48.612000 74548 torch/fx/experimental/symbolic_shapes.py:4755] [0/0_1] track_symint L['input_tensor'].size()[0] 2 None
V0120 14:57:48.612000 74548 torch/fx/experimental/symbolic_shapes.py:4755] [0/0_1] track_symint L['input_tensor'].size()[1] 3 None
V0120 14:57:48.612000 74548 torch/fx/experimental/symbolic_shapes.py:4755] [0/0_1] track_symint L['input_tensor'].size()[2] 4 None
V0120 14:57:48.613000 74548 torch/fx/experimental/symbolic_shapes.py:4755] [0/0_1] track_symint L['input_tensor'].stride()[0] 12 None
V0120 14:57:48.613000 74548 torch/fx/experimental/symbolic_shapes.py:4755] [0/0_1] track_symint L['input_tensor'].stride()[1] 4 None
V0120 14:57:48.613000 74548 torch/fx/experimental/symbolic_shapes.py:4755] [0/0_1] track_symint L['input_tensor'].stride()[2] 1 None
V0120 14:57:48.614000 74548 torch/fx/experimental/symbolic_shapes.py:4755] [0/0_1] track_symint L['input_tensor'].storage_offset() 0 None
V0120 14:57:48.614000 74548 torch/fx/experimental/symbolic_shapes.py:4958] [0/0_1] Skipping guard L['input_tensor'].size()[0] == 2
V0120 14:57:48.615000 74548 torch/fx/experimental/symbolic_shapes.py:4958] [0/0_1] Skipping guard L['input_tensor'].size()[1] == 3
V0120 14:57:48.615000 74548 torch/fx/experimental/symbolic_shapes.py:4958] [0/0_1] Skipping guard L['input_tensor'].size()[2] == 4
V0120 14:57:48.616000 74548 torch/fx/experimental/symbolic_shapes.py:4958] [0/0_1] Skipping guard L['input_tensor'].stride()[0] == 12
V0120 14:57:48.616000 74548 torch/fx/experimental/symbolic_shapes.py:4958] [0/0_1] Skipping guard L['input_tensor'].stride()[1] == 4
V0120 14:57:48.616000 74548 torch/fx/experimental/symbolic_shapes.py:4958] [0/0_1] Skipping guard L['input_tensor'].stride()[2] == 1
V0120 14:57:48.617000 74548 torch/fx/experimental/symbolic_shapes.py:4958] [0/0_1] Skipping guard L['input_tensor'].storage_offset() == 0
V0120 14:57:48.617000 74548 torch/_dynamo/guards.py:2364] [0/0_1] [__guards] GUARDS:
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards]
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] TREE_GUARD_MANAGER:
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] +- RootGuardManager
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | +- DEFAULT_DEVICE: utils_device.CURRENT_DEVICE == None # _dynamo/output_graph.py:493 in init_ambient_guards
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | +- GLOBAL_STATE: ___check_global_state()
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | +- TORCH_FUNCTION_MODE_STACK: ___check_torch_function_mode_stack()
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | +- GuardManager: source=L['input_tensor'], accessed_by=DictGetItemGuardAccessor('input_tensor')
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | | +- TENSOR_MATCH: check_tensor(L['input_tensor'], Tensor, DispatchKeySet(CPU, BackendSelect, ADInplaceOrView, AutogradCPU), torch.bool, device=None, requires_grad=False, size=[2, 3, 4], stride=[12, 4, 1]) # output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:5 in compiled_fn
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | | +- NO_HASATTR: hasattr(L['input_tensor'], '_dynamo_dynamic_indices') == False # output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:5 in compiled_fn
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | +- GuardManager: source=G, accessed_by=GlobalsGuardAccessor
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | | +- GuardManager: source=G['torch'], accessed_by=DictGetItemGuardAccessor('torch')
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | | | +- ID_MATCH: ___check_obj_id(G['torch'], 139725124415584) # output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:5 in compiled_fn
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | | | +- GuardManager: source=G['torch'].all, accessed_by=GetAttrGuardAccessor(all)
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | | | | +- ID_MATCH: ___check_obj_id(G['torch'].all, 139725121374464) # result = torch.all(input_tensor, dim=2, out=output_tensor) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:6 in compiled_fn
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | | | +- GuardManager: source=G['torch'].bool, accessed_by=GetAttrGuardAccessor(bool)
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | | | | +- EQUALS_MATCH: G['torch'].bool == torch.bool # output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:5 in compiled_fn
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | | | +- GuardManager: source=G['torch'].empty, accessed_by=GetAttrGuardAccessor(empty)
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards] | | | | +- ID_MATCH: ___check_obj_id(G['torch'].empty, 139725121378624) # output_tensor = torch.empty((0,), dtype=torch.bool).to(input_tensor.device) # qnpu/env_name/src/pytorch-integration/tests/pytest_working/any_mode/test_hpu_all_any.py:5 in compiled_fn
V0120 14:57:48.618000 74548 torch/_dynamo/guards.py:2321] [0/0_1] [__guards]
V0120 14:57:49.619000 74548 torch/_dynamo/guards.py:2346] [0/0_1] [__guards] Guard eval latency = 0.76 us
I0120 14:57:49.620000 74548 torch/_dynamo/pgo.py:636] [0/0_1] put_code_state: no cache key, skipping
V0120 14:57:49.626000 74548 torch/_dynamo/convert_frame.py:1345] skipping: _fn (reason: in skipfiles, file: /home/user1/venv1/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py)
V0120 14:57:49.627000 74548 torch/_dynamo/convert_frame.py:1345] skipping: _callback_from_stance (reason: in skipfiles, file: /home/user1/venv1/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py)
V0120 14:57:49.627000 74548 torch/_dynamo/convert_frame.py:1345] skipping: _maybe_set_eval_frame (reason: in skipfiles, file: /home/user1/venv1/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py)
V0120 14:57:49.628000 74548 torch/_dynamo/convert_frame.py:1345] skipping: justknobs_check (reason: in skipfiles, file: /home/user1/venv1/lib/python3.10/site-packages/torch/_utils_internal.py)
V0120 14:57:49.629000 74548 torch/_dynamo/convert_frame.py:930] [1/0] torchdynamo start compiling torch_dynamo_resume_in_compiled_fn_at_6 tests/compile/test_all.py:6, stack (elided 5 frames):
V0120 14:57:49.629000 74548 torch/_dynamo/convert_frame.py:930] [1/0] File "tests/compile/test_all.py", line 14, in <module>
V0120 14:57:49.629000 74548 torch/_dynamo/convert_frame.py:930] [1/0] output = compiled_fn(input_tensor)
V0120 14:57:49.629000 74548 torch/_dynamo/convert_frame.py:930] [1/0] File "/home/user1/venv1/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 574, in _fn
V0120 14:57:49.629000 74548 torch/_dynamo/convert_frame.py:930] [1/0] return fn(*args, **kwargs)
V0120 14:57:49.629000 74548 torch/_dynamo/convert_frame.py:930] [1/0]
I0120 14:57:49.630000 74548 torch/_dynamo/symbolic_convert.py:2706] [1/0] Step 1: torchdynamo start tracing torch_dynamo_resume_in_compiled_fn_at_6 tests/compile/test_all.py:6
I0120 14:57:49.631000 74548 torch/fx/experimental/symbolic_shapes.py:3192] [1/0] create_env
V0120 14:57:49.632000 74548 torch/_dynamo/symbolic_convert.py:932] [1/0] [__trace_source] TRACE starts_line tests/compile/test_all.py:6 in torch_dynamo_resume_in_compiled_fn_at_6 (compiled_fn)
V0120 14:57:49.632000 74548 torch/_dynamo/symbolic_convert.py:932] [1/0] [__trace_source] result = torch.all(input_tensor, dim=2, out=output_tensor)
V0120 14:57:49.632000 74548 torch/_dynamo/symbolic_convert.py:955] [1/0] [__trace_bytecode] TRACE LOAD_FAST ___stack0 []
V0120 14:57:49.633000 74548 torch/_dynamo/symbolic_convert.py:955] [1/0] [__trace_bytecode] TRACE JUMP_ABSOLUTE 42 [LazyVariableTracker()]
V0120 14:57:49.633000 74548 torch/_dynamo/symbolic_convert.py:955] [1/0] [__trace_bytecode] TRACE STORE_FAST result [LazyVariableTracker()]
V0120 14:57:49.634000 74548 torch/_dynamo/variables/builder.py:2853] [1/0] wrap_to_fake L['___stack0'] (2, 3) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], dynamic_strides=[<DimDynamic.INFER_STRIDE: 4>, <DimDynamic.INFER_STRIDE: 4>], constraint_sizes=[None, None], constraint_strides=[None, None], view_base_context=None, tensor_source=LocalSource(local_name='___stack0', is_input=True, is_derefed_cell_contents=False), shape_env_to_source_to_symbol_cache={}) <class 'torch.Tensor'>
V0120 14:57:49.635000 74548 torch/_dynamo/output_graph.py:2156] [1/0] create_graph_input L_stack0_ L['___stack0'] FakeTensor(..., size=(2, 3), dtype=torch.bool) at debug_level 0 before=False
V0120 14:57:49.637000 74548 torch/_dynamo/symbolic_convert.py:932] [1/0] [__trace_source] TRACE starts_line tests/compile/test_all.py:7 in torch_dynamo_resume_in_compiled_fn_at_6 (compiled_fn)
V0120 14:57:49.637000 74548 torch/_dynamo/symbolic_convert.py:932] [1/0] [__trace_source] return result
V0120 14:57:49.637000 74548 torch/_dynamo/symbolic_convert.py:955] [1/0] [__trace_bytecode] TRACE LOAD_FAST result []
V0120 14:57:49.637000 74548 torch/_dynamo/symbolic_convert.py:955] [1/0] [__trace_bytecode] TRACE RETURN_VALUE None [TensorVariable()]
V0120 14:57:49.638000 74548 torch/_dynamo/convert_frame.py:768] [1/0] Skipping frame because no content in function call torch_dynamo_resume_in_compiled_fn_at_6 tests/compile/test_all.py 6
I0120 14:57:49.638000 74548 torch/_dynamo/pgo.py:636] [1/0] put_code_state: no cache key, skipping
I0120 14:57:49.644000 74548 torch/_dynamo/eval_frame.py:398] TorchDynamo attempted to trace the following frames: [
I0120 14:57:49.644000 74548 torch/_dynamo/eval_frame.py:398] * compiled_fn tests/compile/test_all.py:3
I0120 14:57:49.644000 74548 torch/_dynamo/eval_frame.py:398] ]
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0a0+gitc15b011
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 14.0.5
CMake version: version 3.31.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-127-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 43 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 6
Socket(s): 2
Stepping: 0
BogoMIPS: 5187.81
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 invpcid avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xsaves arat pku ospke md_clear flush_l1d arch_capabilities
Virtualization: VT-x
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 38.5 MiB (2 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX flush not necessary, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0a0+gitc15b011
[pip3] torch_tb_profiler==0.4.0
[pip3] triton==3.1.0
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @amjames
|
triaged,module: regression,oncall: pt2,module: dynamo,module: empty tensor
|
low
|
Critical
|
2,799,392,560
|
go
|
net/http/fcgi: request context not canceled on aborted connection
|
### Go version
go version go1.23.4 darwin/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/Users/user/Library/Caches/go-build'
GOENV='/Users/user/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/user/go/pkg/mod'
GOOS='darwin'
GOPATH='/Users/user/go'
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/local/Cellar/go/1.23.4/libexec'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='local'
GOTOOLDIR='/usr/local/Cellar/go/1.23.4/libexec/pkg/tool/darwin_amd64'
GOVCS=''
GOVERSION='go1.23.4'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/Users/user/Library/Application Support/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='cc'
CXX='c++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch x86_64 -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/Users/user/Development/temp/go-build2053259560=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
Start an FCGI server.
```go
listener, err := net.Listen("tcp", ":0")
if err != nil {
	panic(err)
}
defer listener.Close()
fcgi.Serve(listener, http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
	// Emulate some slow operation, waiting for the context to cancel
	<-r.Context().Done()
	fmt.Println("Done with slow operation")
}))
```
Using an FCGI client, or a reverse proxy like Apache with FCGI support, issue a request to the server and then abort it before the handler finishes. Unfortunately, the client code for this can get quite verbose, since the package only implements the server side, and I'm not sure it would provide much context in this case.
### What did you see happen?
The string `Done with slow operation` is never printed as the context is never canceled.
### What did you expect to see?
According to the documentation on the http request's context field, the context should be set and canceled when the connection closes or the server closes. As I haven't found any documentation in the fcgi package stating otherwise, I expect the same to be true.
From what I can tell, the issue comes from the fcgi package never setting a context that corresponds to the incoming connection. It relies on the default `context.Background()` returned by `http.Request.Context` if it's `nil`.
https://github.com/golang/go/blob/40b3c0e58a0ae8dec4684a009bf3806769e0fc41/src/net/http/fcgi/child.go#L292-L302
This makes the fcgi package difficult to use when web clients are involved, as there is seemingly no way to react to aborted / closed requests, and therefore no way to stop ongoing work.
|
NeedsInvestigation,FeatureRequest,BugReport
|
low
|
Critical
|
2,799,393,212
|
deno
|
BUG - watcher from Deno.watchFs creating duplicate events if closed and created again on windows
|
On Windows, closing a `Deno.FsWatcher` doesn't correctly close it: the underlying watcher keeps watching the path separately, which then causes duplicate events on other `Deno.FsWatcher`s created afterwards.
Script 1:
```ts
let watcher = Deno.watchFs(".", { recursive: false });
setTimeout(() => {
  watcher.close();
}, 300);
for await (const event of watcher) {
  console.log("WATCHER 1 >>>> event", event);
}
console.log("1");

watcher = Deno.watchFs(".", { recursive: false });
setTimeout(() => {
  watcher.close();
}, 300);
for await (const event of watcher) {
  console.log("WATCHER 2 >>>> event", event);
}
console.log("2");

watcher = Deno.watchFs(".", { recursive: false });
setTimeout(() => {
  watcher.close();
}, 300);
for await (const event of watcher) {
  console.log("WATCHER 3 >>>> event", event);
}
console.log("3");

watcher = Deno.watchFs(".", { recursive: false });
for await (const event of watcher) {
  console.log("WATCHER 4 >>>> event", event);
}
console.log("4");
```
Output 1 (paths `REDACTED`; triggered via `touch test.txt`):
```
1
2
3
WATCHER 4 >>>> event [Object: null prototype] {
kind: "modify",
paths: [ "REDACTED\\test.txt" ],
flag: null
}
WATCHER 4 >>>> event [Object: null prototype] {
kind: "modify",
paths: [ "REDACTED\\test.txt" ],
flag: null
}
WATCHER 4 >>>> event [Object: null prototype] {
kind: "modify",
paths: [ "REDACTED\\test.txt" ],
flag: null
}
WATCHER 4 >>>> event [Object: null prototype] {
kind: "modify",
paths: [ "REDACTED\\test.txt" ],
flag: null
}
```
Script 2:
```ts
let watcher = Deno.watchFs(".", { recursive: false });
setTimeout(() => {
  watcher.close();
}, 300);
for await (const event of watcher) {
  console.log("WATCHER 1 >>>> event", event);
}
console.log("1");

watcher = Deno.watchFs(".", { recursive: false });
setTimeout(() => {
  watcher.close();
}, 300);
for await (const event of watcher) {
  console.log("WATCHER 2 >>>> event", event);
}
console.log("2");

watcher = Deno.watchFs(".", { recursive: false });
for await (const event of watcher) {
  console.log("WATCHER 3 >>>> event", event);
}
console.log("3");
```
Output 2 (paths `REDACTED`; triggered via `touch test.txt`):
```
1
2
WATCHER 3 >>>> event [Object: null prototype] {
kind: "modify",
paths: [ "REDACTED\\test.txt" ],
flag: null
}
WATCHER 3 >>>> event [Object: null prototype] {
kind: "modify",
paths: [ "REDACTED\\test.txt" ],
flag: null
}
WATCHER 3 >>>> event [Object: null prototype] {
kind: "modify",
paths: [ "REDACTED\\test.txt" ],
flag: null
}
```
>#### Version:
> deno 2.1.6 (stable, release, x86_64-pc-windows-msvc)
> v8 13.0.245.12-rusty
> typescript 5.6.2
|
bug,ext/fs
|
low
|
Critical
|