| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,492,059,840 | flutter | ElevatedButton's background color isn't being animated between enabled/disabled states | ### Steps to reproduce
1. Run sample code
2. Tap on the button "Press here"
### Expected results
The background color of the button should animate smoothly between the enabled and disabled colors.
### Actual results
The background color swaps instantly, with no transition.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/cupertino.dart';
import 'package:flutter/material.dart';

void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  runApp(const MyApp());
}

class MyApp extends StatefulWidget {
  const MyApp({super.key});

  @override
  State<MyApp> createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  bool enabled = false;

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      theme: ThemeData.light().copyWith(
        elevatedButtonTheme: ElevatedButtonThemeData(
          style: ElevatedButton.styleFrom(
            backgroundColor: Colors.black,
            foregroundColor: Colors.white,
            disabledBackgroundColor: Colors.grey[300],
            disabledForegroundColor: Colors.grey[500],
          ),
        ),
      ),
      home: Scaffold(
        body: Center(
          child: Column(
            mainAxisAlignment: MainAxisAlignment.center,
            children: [
              ElevatedButton(
                onPressed: enabled ? () {} : null,
                child: Text(enabled ? "Enabled" : "Disabled"),
              ),
              CupertinoButton(
                onPressed: () {
                  setState(() {
                    enabled = !enabled;
                  });
                },
                child: const Text("Press here"),
              ),
            ],
          ),
        ),
      ),
    );
  }
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/02679c52-1e07-47fc-9334-4cea79e0bc6c
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel master, 3.25.0-1.0.pre.160, on macOS 14.6.1 23G93 darwin-arm64, locale en-SI)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2023.2)
[✓] VS Code (version 1.92.2)
[✓] Connected device (4 available)
[✓] Network resources
• No issues found!
```
</details>
| framework,a: animation,f: material design,has reproducible steps,P2,team-design,triaged-design,found in release: 3.24,found in release: 3.25 | low | Minor |
2,492,062,772 | kubernetes | Watches are not drained during graceful termination when feature gate APIServingWithRoutine is on | ### What happened?
I tried to observe the graceful termination of kube-apiserver, in particular, how watches are drained.
1) I created a cluster in 1.30 with kube-apiserver flags
```
--shutdown-delay-duration=10s --shutdown-send-retry-after=true --shutdown-watch-termination-grace-period=60s
```
and with feature gate `APIServingWithRoutine` on.
Also, I added to kube-apiserver's manifest the following line
```
"terminationGracePeriodSeconds"=60,
```
2) I killed the kube-apiserver, and found the following log during graceful termination:
```
"[graceful-termination] active watch request(s) have drained" duration="1m0s" activeWatchesBefore=0 activeWatchesAfter=0 error=null
```
`activeWatchesBefore=0` is not the expected behavior.
Also, I do not observe any logs of watches being closed.
### What did you expect to happen?
I expected a non-zero number of watches to be drained during graceful termination.
When I do the same procedure as above but for a 1.29.6 cluster (or a cluster with the feature gate `APIServingWithRoutine` off), I see a log similar to this one:
```
"[graceful-termination] active watch request(s) have drained" duration="1m0s" activeWatchesBefore=623 activeWatchesAfter=0 error=null"
```
and, preceding it, there are logs of watches being closed (with latencies on the order of minutes).
### How can we reproduce it (as minimally and precisely as possible)?
Create clusters in 1.30 with provided kube-apiserver flags and with feature gate `APIServingWithRoutine` on and off, respectively.
### Anything else we need to know?
This issue seems to be related to:
- https://github.com/kubernetes/kubernetes/issues/125614
In particular, disabling the feature gate `APIServingWithRoutine` on the kube-apiserver leads to the correct behavior (watches are drained as in version 1.29).
### Kubernetes version
Observed in 1.30.2+
### Cloud provider
N/A
### OS version
N/A
### Install tools
N/A
### Container runtime (CRI) and version (if applicable)
N/A
### Related plugins (CNI, CSI, ...) and versions (if applicable)
N/A | kind/bug,sig/api-machinery,triage/accepted | low | Critical |
2,492,072,498 | vscode | The "add dev container files" quick pick can remove the item you want | Testing #226686
If your timing is unlucky, it's very easy to lose the custom template you pasted in:
1. Open the "add dev container files" quick pick
2. Choose add to workspace
3. Paste in your custom template path
4. See that the quick pick item offers to use your custom template ✅
5. Pause, and see that the quick pick item that offers to use your custom template is removed 🐛

| bug,containers | low | Minor |
2,492,120,199 | godot | Viewport camera won't rotate when pressing right mouse click while using a Wacom Tablet and Pen. | ### Tested versions
This is reproducible in any project in any version of Godot.
### System information
Windows 11 - Godot Engine v4.3.stable.official
### Issue description
The viewport camera won't rotate when the right mouse button is pressed while using a Wacom tablet and pen. To test this, the right mouse button has to be mapped to one of the pen buttons.
When using a mouse, the viewport camera rotates when you press the button and move, so it should behave the same way when navigating with a tablet.
This is quite a bummer for artists who use devices like a Wacom tablet for pretty much any kind of work or navigation on a computer.
### Steps to reproduce
The right mouse button has to be mapped to one of the pen buttons to test this.
When using a mouse, the viewport camera rotates when pressing and moving, but it won't work the same way with a pen and tablet.
### Minimal reproduction project (MRP)
This is reproducible in any project in any version of Godot. | topic:editor,usability,needs testing,topic:3d | low | Minor |
2,492,217,488 | godot | Lambda requires extra line before End Of File | ### Tested versions
- Reproducible in Godot 4.3
### System information
Windows 11, Godot 4.3-stable
### Issue description
If I leave a lambda at the end of the file like this (line 12):

The parser throws an error:

However, this is fine (with extra line 13):

Ignore the fact that I'm doing stupid signal stuff above and causing infinite recursion, that's not a bug that's lack of sleep 💩
Pretty annoying as I frequently write lambdas like this and they happen to be at the EOF.
### Steps to reproduce
Open the MRP, issue.gd will not be parsed correctly.
### Minimal reproduction project (MRP)
[lambda-eof-mrp.zip](https://github.com/user-attachments/files/16784606/lambda-eof-mrp.zip)
| bug,discussion,topic:gdscript,confirmed | low | Critical |
2,492,279,208 | pytorch | [CPU] jx_nest_base float32 both inductor and eager performance regression in 2024-08-26 nightly release | ### 🐛 Describe the bug
<p>fp32 static shape default wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>timm_models</td>
<td>jx_nest_base</td>
<td>single</td>
<td>1</td>
<td>1.235563</td>
<td>0.27360534200000003</td>
<td>0.338056637177546</td>
<td>62.567019</td>
<td>1</td>
<td>1.201868</td>
<td>0.234643102</td>
<td>0.282010035714536</td>
<td>61.797824</td>
<td>1.03</td>
<td>0.83</td>
<td>0.86</td>
<td>0.99</td>
</tr>
</tbody>
</table>
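As a sanity check on the derived columns, the ratios in the row above can be recomputed from the raw measurements (assumption: the report rounds to two decimals):

```python
# Recompute the derived ratio columns of the fp32 static-shape row
# above from its raw measurements (assumed two-decimal rounding).
speed_up_new, speed_up_old = 1.235563, 1.201868
inductor_new, inductor_old = 0.27360534200000003, 0.234643102
eager_new, eager_old = 0.338056637177546, 0.282010035714536
latency_new, latency_old = 62.567019, 61.797824

ratio_speedup = round(speed_up_new / speed_up_old, 2)   # Ratio Speedup(New/old)
eager_ratio = round(eager_old / eager_new, 2)           # Eager Ratio(old/new)
inductor_ratio = round(inductor_old / inductor_new, 2)  # Inductor Ratio(old/new)
latency_ratio = round(latency_old / latency_new, 2)     # Compilation_latency_Ratio(old/new)
print(ratio_speedup, eager_ratio, inductor_ratio, latency_ratio)
```

Since the raw `inductor`/`eager` numbers are times, an old/new ratio below 1 means the new run is slower; the 0.82–0.86 values match the reported regression.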
<p>fp32 dynamic shape default wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>timm_models</td>
<td>jx_nest_base</td>
<td>single</td>
<td>1</td>
<td>1.240841</td>
<td>0.276368383</td>
<td>0.342929220730103</td>
<td>62.597383</td>
<td>1</td>
<td>1.192174</td>
<td>0.23566079199999998</td>
<td>0.280948669041808</td>
<td>61.646299</td>
<td>1.04</td>
<td>0.82</td>
<td>0.85</td>
<td>0.98</td>
</tr>
</tbody>
</table>
<p>fp32 static shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>timm_models</td>
<td>jx_nest_base</td>
<td>single</td>
<td>1</td>
<td>1.278338</td>
<td>0.26669467300000005</td>
<td>0.34092593489347406</td>
<td>46.669321</td>
<td>1</td>
<td>1.202949</td>
<td>0.23166499200000001</td>
<td>0.278681170461408</td>
<td>45.100968</td>
<td>1.06</td>
<td>0.82</td>
<td>0.87</td>
<td>0.97</td>
</tr>
</tbody>
</table>
<p>fp32 dynamic shape cpp wrapper</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>suite</th>
<th>name</th>
<th>thread</th>
<th>batch_size_new</th>
<th>speed_up_new</th>
<th>inductor_new</th>
<th>eager_new</th>
<th>compilation_latency_new</th>
<th>batch_size_old</th>
<th>speed_up_old</th>
<th>inductor_old</th>
<th>eager_old</th>
<th>compilation_latency_old</th>
<th>Ratio Speedup(New/old)</th>
<th>Eager Ratio(old/new)</th>
<th>Inductor Ratio(old/new)</th>
<th>Compilation_latency_Ratio(old/new)</th>
</tr>
</thead>
<tbody>
<tr>
<td>timm_models</td>
<td>jx_nest_base</td>
<td>single</td>
<td>1</td>
<td>1.277099</td>
<td>0.27013244200000003</td>
<td>0.34498587154575805</td>
<td>47.364179</td>
<td>1</td>
<td>1.225369</td>
<td>0.231195754</td>
<td>0.283300109883226</td>
<td>45.65043</td>
<td>1.04</td>
<td>0.82</td>
<td>0.86</td>
<td>0.96</td>
</tr>
</tbody>
</table>
### Versions
<p>SW info</p><table border="1" class="dataframe table">
<thead>
<tr style="text-align: right;">
<th>name</th>
<th>target_branch</th>
<th>target_commit</th>
<th>refer_branch</th>
<th>refer_commit</th>
</tr>
</thead>
<tbody>
<tr>
<td>torchbench</td>
<td>main</td>
<td>23512dbe</td>
<td>main</td>
<td>23512dbe</td>
</tr>
<tr>
<td>torch</td>
<td>main</td>
<td>2553278bae5993bd94bae4f04bf4586fb3f30d57</td>
<td>main</td>
<td>b4a1673a6741e183856cf3503f0574d3ac881ce0</td>
</tr>
<tr>
<td>torchvision</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
<td>main</td>
<td>0.19.0a0+d23a6e1</td>
</tr>
<tr>
<td>torchtext</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
<td>main</td>
<td>0.16.0a0+b0ebddc</td>
</tr>
<tr>
<td>torchaudio</td>
<td>main</td>
<td>2.4.0a0+b3f6f51</td>
<td>main</td>
<td>2.4.0a0+b3f6f51</td>
</tr>
<tr>
<td>torchdata</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
<td>main</td>
<td>0.7.1a0+0790338</td>
</tr>
<tr>
<td>dynamo_benchmarks</td>
<td>main</td>
<td>nightly</td>
<td>main</td>
<td>nightly</td>
</tr>
</tbody>
</table>
Repro:
[inductor_single_run.sh](https://github.com/chuanqi129/inductor-tools/blob/main/scripts/modelbench/inductor_single_run.sh)
`bash inductor_single_run.sh single inference performance timm_models jx_nest_base float32 first static cpp`
Suspected guilty commit: fb26b843906bbad5e28d1edccf298c74b8e00492
[timm_models-jx_nest_base-inference-float32-static-cpp-single-performance-drop_guilty_commit.log](https://github.com/user-attachments/files/16784943/timm_models-jx_nest_base-inference-float32-static-cpp-single-performance-drop_guilty_commit.log)
cc @ezyang @chauhang @penguinwu @WeizhuoZhang-intel @chuanqi129 | oncall: pt2,oncall: cpu inductor | low | Critical |
2,492,333,192 | pytorch | Use `torch._C._stash_obj_in_tls` for global state in serialization | ### 🐛 Describe the bug
Specifically, this issue is to track https://github.com/pytorch/pytorch/pull/134504#discussion_r1733494807 in #134504
Fixing this will properly propagate thread-local state to our cpp threadpool (used for DDP and such).
However, we should also do this properly for all global state in serialization (e.g. `_safe_globals`, `_default_mmap_options`, and everything in `_serialization_tls`).
### Versions
main
cc @mruberry | module: serialization,triaged | low | Critical |
2,492,340,918 | flutter | Preserve functionality of `ModalRoute.onPopInvoked` to maintain backward compatibility | Hi, I am writing to propose a modification to the ModalRoute class that would help maintain backward compatibility in light of a breaking change introduced in Flutter 3.24 regarding the PopScope widget and related methods.
## Context
I have developed a bottom sheet package called [smooth_sheets](https://github.com/fujidaiti/smooth_sheets) that provides custom modal routes to show modal bottom sheets. This package, like many others, relies on certain behaviors of the Flutter framework that have been affected by recent changes.
## Current Issue
1. The modal route in my package uses the `ModalRoute.onPopInvoked` method to invoke the `PopScope.onPopInvoked` callback when the user swipes the sheet down to dismiss the modal.
2. Flutter 3.24 has introduced [a breaking change](https://docs.flutter.dev/release/breaking-changes/popscope-with-result) related to the PopScope widget, and `PopScope.onPopInvoked` is now deprecated.
3. Consequently, `ModalRoute.onPopInvoked` is also deprecated and no longer calls the `PopScope.onPopInvoked` callback. This is why PopScope doesn't work with my modals in the latest SDK.
The only apparent solution is to replace `Route.onPopInvoked` with `Route.onPopInvokedWithResult`, which was newly added in Flutter 3.24. However, this forces package users to use the latest Flutter version, which is not ideal for maintaining broad compatibility and ease of use.
## Proposed Solution
To address these issues and maintain backward compatibility, I would greatly appreciate it if the SDK could revert the following deletion made in [this commit](https://github.com/flutter/flutter/commit/007faa980d10f2412252b0237c0c2daefcce5f5a#diff-748ff41c8eac8d1b2daddbdc16ae69880172e1790662d957824518e1c1c4e47aL1737-L1739), until the `Route.onPopInvoked` method is completely removed:
```diff
// In ModalRoute class
- @override
- void onPopInvoked(bool didPop) {
- for (final PopEntry popEntry in _popEntries) {
- popEntry.onPopInvoked?.call(didPop);
- }
- }
```
Thanks. | framework,f: routes,P2,team-framework,triaged-framework | low | Major |
2,492,377,741 | pytorch | RuntimeError: cuDNN version incompatibility: happened when I have lstm layer | ### 🐛 Describe the bug
I have a simple deep learning model like this:
```
import torch
import torch.nn as nn

class model(nn.Module):
    def __init__(self, insize=32, hiddensize=16, level=10):
        super(model, self).__init__()
        self.conv = nn.Conv2d(1, 32, kernel_size=(1, 20))
        self.lstm = nn.LSTM(input_size=insize, hidden_size=hiddensize, batch_first=True)
        self.fc1 = nn.Linear(in_features=hiddensize, out_features=3)

    def forward(self, x):
        out = self.conv(x)
        out = out.permute(0, 2, 1, 3)
        out = out.squeeze(3)
        out, _ = self.lstm(out)
        out = torch.narrow(out, 1, out.shape[1] - 1, 1).squeeze(1)
        out = self.fc1(out)
        return out
```
The input I used is a 100*20 image with 1 channel.
When I run the model, the error happened
```
RuntimeError: cuDNN version incompatibility: PyTorch was compiled against (8, 3, 2) but found runtime version (8, 2, 4).
PyTorch already comes bundled with cuDNN. One option to resolving this error is to ensure PyTorch can find the bundled cuDNN.
Looks like your LD_LIBRARY_PATH contains incompatible version of cudnnPlease either remove it from the path or install cudnn (8, 3, 2)
```
But when I disable my lstm layer like this
```
import torch
import torch.nn as nn

class model(nn.Module):
    def __init__(self, insize=32, hiddensize=16, level=10):
        super(model, self).__init__()
        self.conv = nn.Conv2d(1, 32, kernel_size=(1, 20))
        # self.lstm = nn.LSTM(input_size=insize, hidden_size=hiddensize, batch_first=True)
        self.fc1 = nn.Linear(in_features=hiddensize, out_features=3)

    def forward(self, x):
        out = self.conv(x)
        out = out.permute(0, 2, 1, 3)
        out = out.squeeze(3)
        # out, _ = self.lstm(out)
        out = torch.narrow(out, 1, out.shape[1] - 1, 1).squeeze(1)
        out = self.fc1(out)
        return out
```
Everything worked fine, no more error happened.
I wonder why the error only happens when I have the LSTM layer.
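For what it's worth, the failing check is essentially a comparison of version tuples. Below is a minimal stand-alone sketch with illustrative names (not PyTorch internals); the real check lives in PyTorch's cuDNN bindings and is presumably only triggered once a cuDNN-backed op such as the LSTM is dispatched, which would explain why removing the layer hides the error:

```python
# Illustrative sketch of the version check behind this error
# (hypothetical helper, not the actual PyTorch implementation).
def check_cudnn(compiled, runtime):
    # Python compares (major, minor, patch) tuples lexicographically.
    if runtime < compiled:
        raise RuntimeError(
            f"cuDNN version incompatibility: PyTorch was compiled "
            f"against {compiled} but found runtime version {runtime}."
        )

check_cudnn((8, 3, 2), (8, 3, 2))      # matching versions: no error
try:
    check_cudnn((8, 3, 2), (8, 2, 4))  # older runtime: the reported failure
except RuntimeError as err:
    print(err)
```

On a real setup, `torch.backends.cudnn.version()` shows which cuDNN runtime PyTorch actually picked up; cleaning the stale cuDNN path out of `LD_LIBRARY_PATH`, as the error message suggests, lets the bundled version load instead.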
### Versions
```
Versions of relevant libraries:
[pip3] efficientnet-pytorch==0.7.1
[pip3] flake8==3.9.2
[pip3] focal-loss-torch==0.1.2
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.20.3
[pip3] numpydoc==1.1.0
[pip3] onnx==1.11.0
[pip3] onnx-simplifier==0.3.10
[pip3] onnxoptimizer==0.2.7
[pip3] onnxruntime==1.11.1
[pip3] pytorchcv==0.0.67
[pip3] torch==1.13.0+cu116
[pip3] torch-tb-profiler==0.4.1
[pip3] torch2trt==0.3.0
[pip3] torchaudio==0.13.0+cu116
[pip3] torchcv==0.0.2
[pip3] torchinfo==1.6.3
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.14.0+cu116
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h2bc3f7f_2
[conda] efficientnet-pytorch 0.7.1 pypi_0 pypi
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] focal-loss-torch 0.1.2 pypi_0 pypi
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py39h7f8727e_0
[conda] mkl_fft 1.3.1 py39hd3c417c_0
[conda] mkl_random 1.2.2 py39h51133e4_0
[conda] numpy 1.20.3 py39hf144106_0
[conda] numpy-base 1.20.3 py39h74d4b33_0
[conda] numpydoc 1.1.0 pyhd3eb1b0_1
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorchcv 0.0.67 pypi_0 pypi
[conda] torch 1.13.0+cu116 pypi_0 pypi
[conda] torch-tb-profiler 0.4.1 pypi_0 pypi
[conda] torch2trt 0.3.0 pypi_0 pypi
[conda] torchaudio 0.13.0+cu116 pypi_0 pypi
[conda] torchcv 0.0.2 pypi_0 pypi
[conda] torchinfo 1.6.3 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.14.0+cu116 pypi_0 pypi
```
cc @malfet @seemethere @csarofeen @ptrblck @xwang233 | module: build,module: cudnn,triaged | low | Critical |
2,492,382,329 | ui | [feat]: Provide prop to make popover component width dynamic or fit trigger size | ### Feature description
According to @a1danw's report in issue #3045, components like `Dropdown`, `Popover`, and `Combobox` with fixed widths should be able to expand to match the width of their trigger button.
As suggested in this [comment](https://github.com/shadcn-ui/ui/issues/3045#issuecomment-2005644793), adding the `w-[--radix-popover-trigger-width]` class to the content element of the component resolves this issue.
For example:
```tsx
<Popover>
  <PopoverTrigger>Trigger</PopoverTrigger>
  <PopoverContent className="w-[--radix-popover-trigger-width]">
    Place content for the popover here.
  </PopoverContent>
</Popover>
```
### Output

I suggest exposing a prop that provides this functionality, or at least adding a simple note to the docs of the individual components for future users.
### Affected component/components
Dropdown, Popover, Combobox
### Additional Context
None
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,492,413,793 | rust | Moving mutable borrows in/out of inferred types results in the compiler thinking they are moved as if they were owned values | <!--
Thank you for filing a bug report! 🐛 Please provide a short summary of the bug,
along with any information you feel relevant to replicating the bug.
-->
Apologies for the slightly long title, but I'm not sure how to best summarize this issue. The problem is as follows: I have some code that uses a stack of sorts (`VecDeque` in my case, but the exact type isn't relevant) and pops data from it. It then does something with the data, and optionally pushes it back onto the stack. Crucially, the data is a _mutable borrow_ of something, i.e. `&mut Whatever`. The code roughly looks like this:
```rust
let mut work = VecDeque::new();
...
while let Some((module, node)) = work.pop_front() {
    if pass.run(module, node) {
        resolved = true;
    } else {
        work.push_back((module, node));
    }
}
...
```
Here `node` is a `&mut Something`. The problem I'm running into is that type-checking this code results in the following error:
```
--> compiler/src/type_check/expressions.rs:632:45
|
628 | while let Some((module, node)) = work.pop_front() {
| ---- move occurs because `node` has type `&mut hir::DefineConstant`, which does not implement the `Copy` trait
629 | if pass.run(module, node) {
| ---- value moved here
...
632 | work.push_back((module, node));
| ^^^^ value borrowed here after move
|
note: consider changing this parameter type in method `run` to borrow instead if owning the value isn't necessary
--> compiler/src/type_check/expressions.rs:659:15
|
656 | fn run(
| --- in this method
...
659 | node: &mut hir::DefineConstant,
| ^^^^^^^^^^^^^^^^^^^^^^^^ this parameter takes ownership of the value
For more information about this error, try `rustc --explain E0382`.
error: could not compile `compiler` (lib) due to 1 previous error
```
In other words, it seems that the compiler treats `node` as if it were an owned value and thus applies move semantics to it, but it's a borrow instead.
This issue _can_ in fact be resolved by giving `work` an explicit type annotation like so:
```rust
let mut work: VecDeque<(ModuleId, &mut hir::DefineConstant)> = VecDeque::new();
```
This suggests the issue is potentially due to type inference not inferring the correct type or ownership.
A very easy way to reproduce this is the following snippet:
```rust
struct Person {}

fn main() {
    let mut work = Vec::new();
    while let Some(person) = work.pop() {
        run(person);
        work.push(person);
    }
}

fn run(_person: &mut Person) {}
```
I expected to see this happen: it should just work, because `person` is a borrow
Instead, this happened: the compiler produces the following error:
```
Checking playground v0.1.0 (/var/home/yorickpeterse/Projects/rust/playground)
error[E0382]: borrow of moved value: `person`
--> src/main.rs:8:19
|
6 | while let Some(person) = work.pop() {
| ------ move occurs because `person` has type `&mut Person`, which does not implement the `Copy` trait
7 | run(person);
| ------ value moved here
8 | work.push(person);
| ^^^^^^ value borrowed here after move
|
note: consider changing this parameter type in function `run` to borrow instead if owning the value isn't necessary
--> src/main.rs:12:17
|
12 | fn run(_person: &mut Person) {}
| --- ^^^^^^^^^^^ this parameter takes ownership of the value
| |
| in this function
help: consider cloning the value if the performance cost is acceptable
|
7 | run(person).clone();
| ++++++++
For more information about this error, try `rustc --explain E0382`.
error: could not compile `playground` (bin "playground") due to 1 previous error
```
Like with my actual code, one can prevent this error by defining `work` as `let mut work: Vec<&mut Person> = Vec::new();` instead of relying on type inference.
### Meta
<!--
If you're using the stable version of the compiler, you should also check if the
bug also exists in the beta or nightly versions.
-->
`rustc --version --verbose`:
```
rustc 1.80.1 (3f5fd8dd4 2024-08-06)
binary: rustc
commit-hash: 3f5fd8dd41153bc5fdca9427e9e05be2c767ba23
commit-date: 2024-08-06
host: x86_64-unknown-linux-gnu
release: 1.80.1
LLVM version: 18.1.7
``` | A-borrow-checker,C-discussion | low | Critical |
2,492,433,194 | terminal | Author a new default color scheme (to replace Campbell) | There have been some internal discussions around replacing Campbell as the default scheme.
* There are parts where the contrast isn't great
* The chroma is super inconsistent
* _some other words about colors that Leonard said that sounded smart_ too.
I'm filing this to track that discussion | Area-Settings,Product-Terminal,Issue-Task | low | Minor |
2,492,445,264 | vscode | Rename widget logging | * do one rename
* enable trace log
* scroll in the editor
* the log is flooded with messages like those ⏬
```
2024-08-28 17:37:26.171 [trace] RenameWidget invoking afterRender, position: null
2024-08-28 17:37:26.171 [trace] RenameWidget invoking cancelInput, caller: afterRender (because position is null), _currentCancelInput: undefined
2024-08-28 17:37:26.189 [trace] RenameWidget invoking afterRender, position: null
2024-08-28 17:37:26.189 [trace] RenameWidget invoking cancelInput, caller: afterRender (because position is null), _currentCancelInput: undefined
2024-08-28 17:37:26.197 [trace] RenameWidget invoking afterRender, position: null
2024-08-28 17:37:26.197 [trace] RenameWidget invoking cancelInput, caller: afterRender (because position is null), _currentCancelInput: undefined
2024-08-28 17:37:26.206 [trace] RenameWidget invoking afterRender, position: null
``` | debt | low | Minor |
2,492,450,119 | ant-design | Sorting on a table with the sticky prop and fixed columns breaks alignment | ### Reproduction link
[](https://stackblitz.com/edit/antd-reproduce-5x-cfowqs?file=demo.tsx)
### Steps to reproduce
- Create an Ant Design Table with 10 or so columns and at least one fixed column on the left
- Add the sorter property to all columns
- Set the sticky property on the table
- Using the tab key, navigate to the last column
### What is expected?
The table should focus the last column and move the entire table to the right ensuring the table header columns and row data columns are aligned.
### What is actually happening?
The table header columns move to the right but the row data columns do not causing misalignment.
<img width="732" alt="Screenshot 2024-08-28 at 11 40 11 AM" src="https://github.com/user-attachments/assets/6a9e6683-b3f1-44ec-98d3-983a2bef47a7">
| Environment | Info |
| --- | --- |
| antd | 5.15.4 |
| React | 17.0.2 |
| System | MacOS |
| Browser | Chrome |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | 🐛 Bug,Inactive | low | Minor |
2,492,471,017 | go | x/mobile: gomobile bind: Crash on android: dlopen failed: TLS symbol "(null)" in dlopened x86_64/libgojni.so using IE access model | ### Go version
go version go1.21.0 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/marius/.cache/go-build'
GOENV='/home/marius/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/marius/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/marius/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/local/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/local/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.21.0'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/home/marius/work/project/go.mod'
GOWORK='/home/marius/work/go.work'
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build238633052=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
I am creating a Go library for Android that I use in React Native. The Go library uses cgo (and has as a dependency a static library built against the Android NDK LLVM libc++), and also uses gRPC for communication with the server (however, I do not rely on TLS in the gRPC package, since I have a custom connection protocol, nor do I rely on TLS anywhere else in the library).
```
CC_FOR_TARGET=/home/marius/work/android-ndk-r26b/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/lib/x86_64-linux-android34-clang gomobile bind -target android/amd64 -androidapi 34 -o enlockd.aar ./mobile/
```
I then load the .aar archive into the Android part of a React Native project and create the library data bindings in Kotlin. The project compiles.
### What did you see happen?
My Android Kotlin binding that loads the library compiles, and I can build it from React Native, but when loading it into the Android emulator (Pixel 3a, API 34, x86_64) I get the following crash:
```
--------- beginning of crash
08-28 18:23:13.711 11376 11425 E AndroidRuntime: FATAL EXCEPTION: create_react_context
08-28 18:23:13.711 11376 11425 E AndroidRuntime: Process: com.enlockmobile, PID: 11376
08-28 18:23:13.711 11376 11425 E AndroidRuntime: java.lang.UnsatisfiedLinkError: dlopen failed: TLS symbol "(null)" in dlopened "/data/app/~~DwBiQaHdfiu5-dwMAj1Ulw==/com.enlockmobile-6XrWB7ekbVezJa8jWGp8QQ==/base.apk!/lib/x86_64/libgojni.so" referenced from "/data/app/~~DwBiQaHdfiu5-dwMAj1Ulw==/com.enlockmobile-6XrWB7ekbVezJa8jWGp8QQ==/base.apk!/lib/x86_64/libgojni.so" using IE access model
08-28 18:23:13.711 11376 11425 E AndroidRuntime: at java.lang.Runtime.loadLibrary0(Runtime.java:1082)
08-28 18:23:13.711 11376 11425 E AndroidRuntime: at java.lang.Runtime.loadLibrary0(Runtime.java:1003)
08-28 18:23:13.711 11376 11425 E AndroidRuntime: at java.lang.System.loadLibrary(System.java:1661)
08-28 18:23:13.711 11376 11425 E AndroidRuntime: at go.Seq.<clinit>(Seq.java:37)
08-28 18:23:13.711 11376 11425 E AndroidRuntime: at mobile.Mobile.<clinit>(Mobile.java:12)
08-28 18:23:13.711 11376 11425 E AndroidRuntime: at mobile.API.<clinit>(API.java:11)
08-28 18:23:13.711 11376 11425 E AndroidRuntime: at com.enlockmobile.Service.<init>(Service.kt:38)
08-28 18:23:13.711 11376 11425 E AndroidRuntime: at com.enlockmobile.ServicePackage.createNativeModules(Service.kt:27)
08-28 18:23:13.711 11376 11425 E AndroidRuntime: at com.facebook.react.ReactPackageHelper.getNativeModuleIterator(ReactPackageHelper.java:35)
08-28 18:23:13.711 11376 11425 E AndroidRuntime: at com.facebook.react.NativeModuleRegistryBuilder.processPackage(NativeModuleRegistryBuilder.java:40)
08-28 18:23:13.711 11376 11425 E AndroidRuntime: at com.facebook.react.ReactInstanceManager.processPackage(ReactInstanceManager.java:1510)
08-28 18:23:13.711 11376 11425 E AndroidRuntime: at com.facebook.react.ReactInstanceManager.processPackages(ReactInstanceManager.java:1481)
08-28 18:23:13.711 11376 11425 E AndroidRuntime: at com.facebook.react.ReactInstanceManager.createReactContext(ReactInstanceManager.java:1392)
08-28 18:23:13.711 11376 11425 E AndroidRuntime: at com.facebook.react.ReactInstanceManager.lambda$runCreateReactContextOnNewThread$2(ReactInstanceManager.java:1161)
08-28 18:23:13.711 11376 11425 E AndroidRuntime: at com.facebook.react.ReactInstanceManager.$r8$lambda$PrBhihCbbAFk4ZReAALGanVLCyc(Unknown Source:0)
08-28 18:23:13.711 11376 11425 E AndroidRuntime: at com.facebook.react.ReactInstanceManager$$ExternalSyntheticLambda1.run(D8$$SyntheticClass:0)
08-28 18:23:13.711 11376 11425 E AndroidRuntime: at java.lang.Thread.run(Thread.java:1012)
08-28 18:23:14.543 418 418 E BpTransactionCompletedListener: Failed to transact (-32)
08-28 18:23:15.144 585 1658 E TaskPersister: File error accessing recents directory (directory doesn't exist?).
```
(I think a relevant document is this one: https://android.googlesource.com/platform/bionic/+/HEAD/docs/elf-tls.md)
### What did you expect to see?
I expected the React Native program not to crash. | NeedsInvestigation,mobile | low | Critical |
2,492,502,845 | pytorch | DISABLED test_transformerencoderlayer_cuda_float32 (__main__.TestNNDeviceTypeCUDA) | Platforms: rocm
Broken by https://github.com/pytorch/pytorch/pull/133331
cc @jeffdaily @sunway513 @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang | module: rocm,triaged,skipped | low | Critical |
2,492,550,765 | pytorch | [docs] We should add a lint for bullet points and multiple lines | People (me) commonly get this wrong: https://stackoverflow.com/questions/54677795/python-and-sphinx-bullet-point-list-in-multiline-google-style-docstring
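The usual failure mode is a bullet whose continuation line is not indented to align with the bullet's text, which Sphinx then renders as a separate paragraph. A minimal sketch of what such a lint could check — the rule encoding and function name here are illustrative, not the actual docs tooling:

```python
import re

def bullet_continuations_ok(docstring: str) -> bool:
    """Check that lines continuing a bullet item are indented under the bullet.

    Sketch only: encodes the one rule from the linked Stack Overflow answer —
    a bullet's continuation lines must align with the bullet's text, or
    Sphinx renders them as a new paragraph.
    """
    text_column = None  # column where the current bullet's text starts
    for line in docstring.splitlines():
        if not line.strip():
            text_column = None  # a blank line ends the bullet item
            continue
        match = re.match(r"(\s*)([-*])\s", line)
        if match:
            text_column = len(match.group(1)) + 2  # just past "- " or "* "
        elif text_column is not None:
            indent = len(line) - len(line.lstrip())
            if indent < text_column:
                return False  # continuation not indented under the bullet
    return True

GOOD = """Supported modes:

* eager mode, with a continuation line
  aligned under the bullet text.
"""

BAD = """Supported modes:

* eager mode, with a continuation line
that Sphinx will render as a new paragraph.
"""
```

Running the checker over `GOOD` and `BAD` flags only the misaligned continuation.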
cc @svekars @brycebortree @tstatler | module: docs,module: lint,triaged | low | Minor |
2,492,553,282 | deno | Normalize error message grammar (TS/JS) | There are a variety of different ways errors in Deno's TS/JS layer are raised. Some are sentence case, others not. Some end with a period, others do not. Some provide the current state of the system, others do not.
Without a standard error guide, it is hard for new contributors to determine how to format error messages, and hard for users to know what to expect.
We just went through the `std` and normalized the errors there. We should do the same with the JavaScript / TypeScript code in the cli / runtime. There is an issue about improving error messages in the cli [1] which is orthogonal to this.
I think we should start with the Error Guide used in the `std` and see if that's a good fit for the JavaScript / TypeScript code in the cli [2].
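As a sketch of what "normalize and then enforce" could look like once a guide is adopted, here is a tiny checker; the two rules encoded below (sentence case, no trailing period) are placeholders for whatever the guide actually specifies, and the function name is hypothetical:

```python
def error_style_problems(msg: str) -> list[str]:
    """Flag style issues in an error message.

    Illustrative only: the two rules below stand in for the real guide.
    """
    problems = []
    if msg and not msg[0].isupper():
        problems.append("does not start with an upper-case letter")
    if msg.endswith("."):
        problems.append("ends with a period")
    return problems
```

A check like this could run in CI over string literals passed to error constructors.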
[1] https://github.com/denoland/deno/issues/24699
[2] https://github.com/denoland/std/issues/5574 | feat | low | Critical |
2,492,576,733 | flutter | [engine, riscv64] Fuchsia LLVM toolchain fails to link | ### Steps to reproduce
```
/home/tcna/workspace-automation/app/engine/src/flutter/buildtools/linux-x64/clang/bin/clang++ --target=riscv64-unknown-linux-gnu --sysroot /home/tcna/workspace-automation/app/riscv64_sysroot -shared -Wl,--fatal-warnings -fPIC -Wl,-z,noexecstack -Wl,-z,now -Wl,-z,relro -Wl,-z,defs -pthread -Wl,--undefined-version --sysroot=/home/tcna/workspace-automation/app/riscv64_sysroot -Wl,-O2 -Wl,--gc-sections -Wl,--as-needed -o ./so.unstripped/libtessellator.so -L/home/tcna/workspace-automation/app/engine/src/flutter/buildtools/linux-x64/clang/lib -Wl,--build-id=sha1 -Wl,-soname=libtessellator.so @./libtessellator.so.rsp && { /home/tcna/workspace-automation/app/engine/src/flutter/buildtools/linux-x64/clang/bin/llvm-readelf -d ./so.unstripped/libtessellator.so | grep SONAME ; /home/tcna/workspace-automation/app/engine/src/flutter/buildtools/linux-x64/clang/bin/llvm-nm -gD -f posix ./so.unstripped/libtessellator.so | cut -f1-2 -d' '; } > ./libtessellator.so.tmp && if ! cmp -s ./libtessellator.so.tmp ./libtessellator.so.TOC; then mv ./libtessellator.so.tmp ./libtessellator.so.TOC; fi && /home/tcna/workspace-automation/app/engine/src/flutter/buildtools/linux-x64/clang/bin/llvm-strip -o ./libtessellator.so ./so.unstripped/libtessellator.so
ld.lld: error: /home/tcna/workspace-automation/app/riscv64_sysroot/usr/lib/riscv64-linux-gnu/libc.so:5: cannot find /lib/riscv64-linux-gnu/libc.so.6 inside /home/tcna/workspace-automation/app/riscv64_sysroot
>>> GROUP ( /lib/riscv64-linux-gnu/libc.so.6 /usr/lib/riscv64-linux-gnu/libc_nonshared.a AS_NEEDED ( /lib/ld-linux-riscv64-lp64d.so.1 ) )
>>> ^
clang++: error: linker command failed with exit code 1 (use -v to see invocation)
```
This is an LLVM issue that is resolved in more recent LLVM builds.
### Expected results
Link without error
### Actual results
```
>>> GROUP ( /lib/riscv64-linux-gnu/libc.so.6 /usr/lib/riscv64-linux-gnu/libc_nonshared.a AS_NEEDED ( /lib/ld-linux-riscv64-lp64d.so.1 ) )
>>> ^
clang++: error: linker command failed with exit code 1 (use -v to see invocation)
```
| engine,P3,team-engine,triaged-engine | low | Critical |
2,492,616,500 | opencv | Missing opencv_world4100.dll and opencv_world4100d.dll in Bin Folder during Extraction using GitHub Actions | ### System Information
GitHub Actions Workflow
Windows environment
OpenCV version: 4.10.0
### Detailed description
When extracting the OpenCV 4.10.0 release using GitHub Actions, the opencv_world4100.dll and opencv_world4100d.dll files are missing from the bin directory. However, the corresponding .pdb files (opencv_world4100.pdb and opencv_world4100d.pdb) are present. This issue occurs when extracting the archive with a PowerShell script and inspecting the bin folder (opencv\build\x64\vc16\bin).
### Steps to reproduce
1. Set up a GitHub Actions workflow using the following YML commands:
```
- name: Download and Extract OpenCV
shell: pwsh
run: |
curl -L "https://github.com/opencv/opencv/releases/download/4.10.0/opencv-4.10.0-windows.exe" -o opencv_binary.exe
./opencv_binary.exe -y -d .
```
2. After running the above script, navigate to the bin folder located at opencv\build\x64\vc16\bin.
3. Observe that the .dll files (opencv_world4100.dll and opencv_world4100d.dll) are missing, while the .pdb files are present.
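A workflow can also fail fast on this by sanity-checking the extracted bin folder before the build proceeds; a sketch in Python (the filenames are the ones from this report, and the helper name is hypothetical):

```python
from pathlib import Path

# Filenames from this report; a real check might derive them from the
# OpenCV version instead of hard-coding them.
EXPECTED_DLLS = ("opencv_world4100.dll", "opencv_world4100d.dll")

def missing_dlls(bin_dir: str) -> list[str]:
    """Return which of the expected runtime DLLs are absent from bin_dir."""
    present = {p.name.lower() for p in Path(bin_dir).glob("*.dll")}
    return [name for name in EXPECTED_DLLS if name.lower() not in present]
```

Failing the job when `missing_dlls(...)` is non-empty surfaces the problem at extraction time rather than at link time.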
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [ ] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: build/install,platform: win32 | low | Minor |
2,492,621,511 | node | AbortSignal.any() causes memory leak | ### Version
v22.6.0
### Platform
```text
Microsoft Windows NT 10.0.22631.0 x64
Linux ****** 4.4.0-22621-Microsoft #3672-Microsoft Fri Jan 01 08:00:00 PST 2016 x86_64 x86_64 x86_64 GNU/Linux
```
### Subsystem
https://nodejs.org/api/globals.html#class-abortsignal
### What steps will reproduce the bug?
Run this and watch memory usage.
```javascript
const formatMemoryUsage = (data) => `${Math.round(data / 1024 / 1024 * 100) / 100} MB`;
let memoryData = process.memoryUsage();
console.log('Mem before loop', formatMemoryUsage(memoryData.rss));
for (let i = 0; true; i++) {
const abortController = new AbortController();
const signal = abortController.signal;
const composedSignal = AbortSignal.any([signal]);
if (i === 1000000) {
break;
}
}
memoryData = process.memoryUsage();
console.log('Mem after 1 million iteration', formatMemoryUsage(memoryData.rss));
```
This is what I get on my local machine

### How often does it reproduce? Is there a required condition?
Always reproducible as far as I can tell
### What is the expected behavior? Why is that the expected behavior?
Memory usage after the loop should be roughly equivalent to the first log, but somehow the signal created by `const composedSignal = AbortSignal.any([signal]);` does not get cleaned up from memory. I would expect it to be cleaned up properly or, if this is the intended behavior, a clear warning in the documentation.
### What do you see instead?
We see a memory leak that will eventually lead to an out of memory error.
### Additional information
This has been tested with Node 22.6 on different machines, on both Windows and Unix. Happy to provide more details if needed. | memory,abortcontroller | low | Critical |
2,492,634,804 | vscode | Enable trust link in getting started walkthrough does not run the correct command | 1. Open an empty VS Code window.
2. Set `security.workspace.trust.emptyWindow` to false. Reload VS Code if needed.
3. Navigate to the Learn the Fundamentals walkthrough.
4. Click on the step "Safely browse and edit code".
5. Click on "enable trust".
6. :bug: A trusted domains file pops out. Instead, the link should run the command "Workspaces: Manage Workspace Trust" | bug,workspace-trust,getting-started | low | Critical |
2,492,634,831 | flutter | `TextWidthBasis.longestLine` with `TextOverflow.ellipsis` produces clipped text | ## Description
When using a `Text` widget with both `overflow: TextOverflow.ellipsis` and `textAlign: TextAlign.center`, the start of the text gets clipped instead of the text being ellipsized to fit first and then centered.
@LongCatIsLooong suggested that it may be due to:
> It's hard to tell what exactly happened without a repro, but the "longestLine" calculation does happen before ellipsization so it could be inaccurate:
> https://github.com/google/skia/blob/298a39597601ca5a60efb7bf4f49a91a6133a58c/modules/skparagraph/src/ParagraphImpl.cpp#L653-L658
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
body: Container(
color: Colors.red,
width: 74,
child: Center(
child: Text(
'Hello, World!',
style: TextStyle(fontSize: 32),
overflow: TextOverflow.ellipsis,
textAlign: TextAlign.center,
maxLines: 1,
textWidthBasis: TextWidthBasis.longestLine,
),
),
),
),
);
}
}
```
</details>
<details open>
<summary>Screenshot</summary>

</details>
<details><summary>Doctor output</summary>
```console
[✓] Flutter (Channel main, 3.25.0-1.0.pre.162, on Debian GNU/Linux rodete 6.9.10-1rodete4-amd64, locale en_US.UTF-8)
• Flutter version 3.25.0-1.0.pre.162 on channel main at /usr/local/google/home/gspencer/code/flutter
• Upstream repository git@github.com:flutter/flutter.git
• Framework revision 446be11037 (2 minutes ago), 2024-08-28 10:20:12 -0700
• Engine revision 8d248aead3
• Dart version 3.6.0 (build 3.6.0-175.0.dev)
• DevTools version 2.39.0-dev.15
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.1)
• Android SDK at /usr/local/google/home/gspencer/Android/Sdk
• Platform android-34, build-tools 33.0.1
• ANDROID_HOME = /usr/local/google/home/gspencer/Android/Sdk
• Java binary at: /usr/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.12+7-Debian-1build1)
• All Android licenses accepted.
[✓] Chrome - develop for the web
• Chrome at google-chrome
[✓] Linux toolchain - develop for Linux desktop
• Debian clang version 16.0.6 (26)
• cmake version 3.29.6
• ninja version 1.12.1
• pkg-config version 1.8.1
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/to/linux-android-setup for detailed instructions).
[✓] Connected device (2 available)
• Linux (desktop) • linux • linux-x64 • Debian GNU/Linux rodete 6.9.10-1rodete4-amd64
• Chrome (web) • chrome • web-javascript • Google Chrome 128.0.6613.84
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category.
```
</details>
| framework,a: typography,platform-web,customer: google,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.24,found in release: 3.25 | low | Major |
2,492,662,302 | PowerToys | Explorer Add-In to copy file(s) and not the shortcut | ### Description of the new feature / enhancement
Just as one can select file(s) for copying, allow an option to select a shortcut but copy the target file instead.
### Scenario when this would be used?
When users have many files stored across multiple folders but use file shortcuts for a unified view, this allows copying those files in a simple, straightforward way.
### Supporting information
In my research the only way to accomplish this is with scripting and that does not allow for dynamic file selection. | Needs-Triage | low | Minor |
2,492,683,081 | pytorch | [c10d][MPI] Attempting to create a new group after MPI causes "RuntimeError: Underlying Non-PrefixStore shouldn't be null" | ### 🐛 Describe the bug
```python
import torch
torch.distributed.init_process_group(backend="mpi")
nccl_group = torch.distributed.new_group(backend="nccl")
```
```
[rank0]: Traceback (most recent call last):
[rank0]: File "/opt/pytorch/pytorch/repro.py", line 4, in <module>
[rank0]: nccl_group = torch.distributed.new_group(backend="nccl")
[rank0]: File "/opt/pytorch/pytorch/torch/distributed/c10d_logger.py", line 97, in wrapper
[rank0]: func_return = func(*args, **kwargs)
[rank0]: File "/opt/pytorch/pytorch/torch/distributed/distributed_c10d.py", line 4577, in new_group
[rank0]: return _new_group_with_tag(
[rank0]: File "/opt/pytorch/pytorch/torch/distributed/distributed_c10d.py", line 4660, in _new_group_with_tag
[rank0]: pg, pg_store = _new_process_group_helper(
[rank0]: File "/opt/pytorch/pytorch/torch/distributed/distributed_c10d.py", line 1783, in _new_process_group_helper
[rank0]: backend_class = ProcessGroupNCCL(
[rank0]: RuntimeError: Underlying Non-PrefixStore shouldn't be null.
```
### Versions
main
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged,module: mpi | low | Critical |
2,492,707,439 | flutter | [web] unify CkPaint and SwasmPaint classes | In https://github.com/flutter/engine/pull/54818 `CkPaint` becomes essentially a PODO. It's not backed by the C++ `SkPaint` until the paint is used to draw something. This means the class is almost renderer-agnostic.
We could take it further and make it fully agnostic. This can be done either in the same PR where `SkwasmPaint` moves to the PODO model, or immediately after that. | engine,platform-web,c: rendering,e: web_canvaskit,P2,c: tech-debt,e: web_skwasm,team-web,triaged-web | low | Minor |
2,492,717,883 | flutter | Warn when Flutter metadata in AndroidManifest.xml is specified in the wrong spot. | Some Flutter metadata keys (like "io.flutter.embedding.android.EnableImpeller"), must be specified under the `<application>` tag while others (like "io.flutter.Entrypoint") under the `<activity>` tag.
We try to document which keys go where but this relies on diligence while [pasting in the values in the XML](https://github.com/flutter/flutter/issues/154252). Other times, [documentation is missing](https://github.com/flutter/engine/pull/54814) and there is some trial and error.
Each place where the keys are read cares only about the metadata keys it understands, so it is hard for these checks to warn about unexpected keys.
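A central registry mapping each key to its required parent element would make such a warning a simple lookup; a sketch (only the two keys mentioned above are listed, and the helper name is hypothetical):

```python
from typing import Optional

# Sketch of a central registry: each known metadata key maps to the manifest
# element it must appear under. Only the two keys named in this issue are
# listed; the full set would need to be enumerated.
FLUTTER_METADATA_KEYS = {
    "io.flutter.embedding.android.EnableImpeller": "application",
    "io.flutter.Entrypoint": "activity",
}

def placement_warning(key: str, parent_tag: str) -> Optional[str]:
    """Return a warning if a known Flutter key sits under the wrong tag.

    Unknown keys are ignored, so third-party metadata never triggers noise.
    """
    expected = FLUTTER_METADATA_KEYS.get(key)
    if expected is not None and parent_tag != expected:
        return f"'{key}' must be declared under <{expected}>, not <{parent_tag}>"
    return None
```

A tool-side check like this could run over every `<meta-data>` element during `flutter build`.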
There should ideally be a single registry of all keys Flutter cares about. That way, every instance of a key that is in the wrong place can emit a warning for the user. | c: new feature,tool,c: proposal,team-tool | low | Critical |
2,492,725,857 | vscode | context `"inlineChatFocused": true` remains after closing generate cell | Steps to Reproduce:
1. open a notebook with a code cell
2. click the generate button below the cell
3. ask copilot to create a cell to `print the working directory`
4. press escape to get rid of the generated cell and chat
5. focus the code cell and press `shift+enter`
:bug: cell does not run because `"inlineChatFocused": true`
| bug,notebook | low | Critical |
2,492,779,605 | pytorch | Better namings for triton fusion ops when a custom triton kernel is present? | ### 🚀 The feature, motivation and pitch
Hi, the code can run fine. It is just that the generated comments and names are a bit confusing.
Say we have a function with some torch ops at the beginning and _scaled_mm (which has inductor triton lowerings added earlier).
```
amax_row = torch.max(torch.abs(x), dim=1, keepdim=True).values
scale = _amax_to_scale(amax_row, dtype_float8, x.dtype) # shape is [M]
# x * scale is M x K * M, broadcast
x_fp8 = _to_fp8_saturated(x * scale, dtype_float8) # clamp and cast
x_inverse_scale = scale.reciprocal()
y = torch._scaled_mm(
x_fp8,
w_t_fp8,
x_inverse_scale,
w_inverse_scale_row.t(),
bias,
out_dtype=output_dtype,
use_fast_accum=use_fast_accum,
)
```
After inductor, we have two fused triton kernels. The first comprises all the torch ops and the second is the triton lowering of _scaled_mm.
```
# Topologically Sorted Source Nodes: [x, abs_1, max_1, amax, clamp, res, mul, x_1, x_fp8, x_inverse_scale, y], Original ATen: [aten.sigmoid, aten.abs, aten.max, aten._to_copy, aten.clamp, aten.reciprocal, aten.mul, aten._scaled_mm]
stream0 = get_raw_stream(0)
triton_per_fused__scaled_mm__to_copy_abs_clamp_max_mul_reciprocal_sigmoid_0.run(arg0_1, buf2, buf3, 1024, 512, grid=grid(1024), stream=stream0)
del arg0_1
buf5 = empty_strided_cuda((1024, 2048), (2048, 1), torch.bfloat16)
# Topologically Sorted Source Nodes: [x, amax, clamp, res, mul, x_1, x_fp8, x_inverse_scale, y, y_1, y_2], Original ATen: [aten.sigmoid, aten._to_copy, aten.clamp, aten.reciprocal, aten.mul, aten._scaled_mm, aten.relu, aten.add]
triton_tem_fused__scaled_mm__to_copy_add_clamp_mul_reciprocal_relu_sigmoid_1.run(buf2, arg2_1, buf3, arg1_1, buf5, grid=torch._inductor.kernel.mm_common.mm_grid(1024, 2048, meta0), stream=stream0)
```
However, the names and comments are a bit unintuitive:
1. aten._scaled_mm is listed among the original ATen ops in both kernels, but it is only present in the second one.
2. [partly explained by the numberings] The first triton kernel does not use aten._scaled_mm but has it in its name. The second triton kernel comprises only aten._scaled_mm but has other ops in its name.
Any idea on how to possibly improve them?
### Alternatives
I imagine the solution would look like this:
* do not attempt to fuse them together, since we cannot really fuse them
* [optional] Sort the list of original atens before printing the comments
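The sorting idea, combined with restricting each kernel's name to the ops that actually landed in it, could look like the sketch below (the helper name and op strings are illustrative, not Inductor's real internals):

```python
def kernel_name(prefix: str, aten_ops: list[str]) -> str:
    """Build a kernel name from the ops actually fused into this kernel.

    Sorting and de-duplicating the short names makes the result stable
    regardless of scheduling order, and passing only this kernel's own
    nodes keeps aten._scaled_mm out of the pointwise kernel's name.
    """
    short_names = sorted({op.split(".")[-1] for op in aten_ops})
    return prefix + "_fused_" + "_".join(short_names)

# The pointwise prologue kernel: quantization ops only, no _scaled_mm.
pointwise = kernel_name(
    "triton_per", ["aten.sigmoid", "aten.abs", "aten.max", "aten.clamp", "aten.mul"]
)
# The template kernel that actually lowers the scaled matmul plus epilogue.
matmul = kernel_name("triton_tem", ["aten._scaled_mm", "aten.relu", "aten.add"])
```

With this, `_scaled_mm` appears only in the name of the kernel that contains it.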
### Additional context
repro (need H100):
```
import os
os.environ["TORCH_LOGS"] = "+output_code"
os.environ["TORCHINDUCTOR_UNIQUE_KERNEL_NAMES"] = "1"
import torch
import torch._inductor.config
torch._inductor.config.force_disable_caches = True
# inductor_config.triton.descriptive_names = "False"
################## inputs for each scenario
fusion_case = "pointwise" # "pointwise" or "reduction"
# Matmul Y = X [M, K] x W [N, K]
M = 1024 # batch size
K = 512 # in_features
N = 2048 # out_features
################## setup
device = "cuda:0"
dtype_float8 = torch.float8_e4m3fn
input_dtype = torch.bfloat16 # torch.float32 or bfloat16
output_dtype = torch.bfloat16 # torch.float32 or dtype_float8 or torch.bfloat16
use_fast_accum = True
x = torch.rand(M, K, dtype=input_dtype, device=device)
w = torch.rand(N, K, dtype=input_dtype, device=device)
bias = None
################### utilities
# ref fbcode/caffe2/torch/fb/model_transform/experimental/fp8_linear.py FP8LinearDynamic
E4M3_MAX_POS: float = torch.finfo(torch.float8_e4m3fn).max
E5M2_MAX_POS: float = torch.finfo(torch.float8_e5m2).max
FP16_MAX_POS: float = torch.finfo(torch.float16).max
EPS: float = 1e-12
# fbcode/caffe2/torch/fb/model_transform/experimental/fp8_linear.py
@torch.no_grad()
def _amax_to_scale(
amax: torch.Tensor, float8_dtype: torch.dtype, orig_dtype: torch.dtype
) -> torch.Tensor:
# To make scale dtype to be fp32 for accuracy
amax = amax.float()
if float8_dtype == torch.float8_e4m3fn:
res = E4M3_MAX_POS / torch.clamp(amax, min=EPS)
else: # e5m2
res = E5M2_MAX_POS / torch.clamp(amax, min=EPS)
# Ensure that the scale is representable in float16,
# this helps when amax is small. We are assuming that we don't need
# to care about this for float32/bfloat16.
if orig_dtype is torch.float16:
res = torch.clamp(res, max=FP16_MAX_POS)
return res
def _to_fp8_saturated(x: torch.Tensor, float8_dtype: torch.dtype) -> torch.Tensor:
# The default behavior in PyTorch for casting to `float8_e4m3fn`
# and `e5m2` is to not saturate. In this context, we should saturate.
# A common case where we want to saturate is when the history of a
# tensor has a maximum value of `amax1`, and the current amax value
# is `amax2`, where `amax1 < amax2`. This is common when using delayed
# scaling.
if float8_dtype == torch.float8_e4m3fn:
x = x.clamp(min=-1 * E4M3_MAX_POS, max=E4M3_MAX_POS)
else:
x = x.clamp(min=-1 * E5M2_MAX_POS, max=E5M2_MAX_POS)
return x.to(float8_dtype)
################### rowwise scaling fp8
# quantize weight, done in model building stage prior to inference
weight_amax_row = torch.max(torch.abs(w), dim=1, keepdim=True).values
weight_scale_row = _amax_to_scale(weight_amax_row, dtype_float8, w.dtype)
w_t_fp8_row = _to_fp8_saturated(w * weight_scale_row, dtype_float8).t()
w_inverse_scale_row = weight_scale_row.reciprocal() # element-wise reciprocal
@torch.no_grad()
def fb8_rowwise_scaling(x, w_t_fp8, w_inverse_scale_row):
if fusion_case == "pointwise":
# Fusion Case 1: Pointwise (e.g. Sigmoid) + Matmul
x = torch.sigmoid(x)
else:
# Fusion Case 2: Reduction (e.g. LayerNorm) + Matmul
layer_norm = torch.nn.LayerNorm(K, device=device, dtype=input_dtype)
x = layer_norm(x)
# quantize input x
amax_row = torch.max(torch.abs(x), dim=1, keepdim=True).values
scale = _amax_to_scale(amax_row, dtype_float8, x.dtype) # shape is [M]
# x * scale is M x K * M, broadcast
x_fp8 = _to_fp8_saturated(x * scale, dtype_float8) # clamp and cast
x_inverse_scale = scale.reciprocal()
y = torch._scaled_mm(
x_fp8,
w_t_fp8,
x_inverse_scale,
w_inverse_scale_row.t(),
bias,
out_dtype=output_dtype,
use_fast_accum=use_fast_accum,
)
# epilogue
y = torch.nn.functional.relu(y)
y = y + 0.01
return y
fb8_rowwise_scaling_compiled = torch.compile(
fb8_rowwise_scaling, mode="max-autotune-no-cudagraphs"
)
y = fb8_rowwise_scaling_compiled(x, w_t_fp8_row, w_inverse_scale_row)
print("done")
# Matmul Y = X [M, K] x W [N, K]
```
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire | triaged,oncall: pt2,module: inductor | low | Major |
2,492,786,010 | next.js | Memory issue with API routes and middleware | ### Link to the code that reproduces this issue
https://github.com/a-hyssopus/nextjs_memory_issue
### To Reproduce
1. `bun/npm install`
2. `docker build -t memory_repo .`
3. `docker run -p 3000:3000 memory_repo`
4. `docker stats` in a new terminal tab
5. Click the "Send POST request of normal size" button in application. Observe the behaviour of stats in the tab which tracks Docker, and pay attention to the memory consumption.
6. Click "Send POST request of huge size" button, observe skyrocketed memory usage in Docker stats.
### Current vs. Expected behavior
**Actual result**:
Having `middleware.js` (or `.ts`) and reaching out to a BE endpoint through an API route makes memory consumption rise significantly if a huge request (>50 MB) or a sequence of relatively big requests is sent.
**A very important note**: it only happens when **both** `middleware` and API routes are used.
All tested scenarios are also listed in README of reproduction repo and have tags assigned (check README):
1. Reproduction created with API route (with `getOnProxyInit`) and `middleware`: hits 940 MB when a 400 MB request is sent, and although an error (precisely HTTP 413) was returned from the server, memory consumption doesn't decrease at all
2. Removed `getOnProxyInit` from API route: the same situation as above
3. Removed `middleware.js`: memory consumption doesn't raise at all
4. Commented out API route, but restored `middleware.ts`: memory consumption doesn't raise at all
5. Restored API route with `bodyParser: false`: the same as in 1 and 2
6. Completely removed `httpProxyMiddleware`: hit the value >1 GB and never came back to normal values
**The main points here are**:
1. Removal of `middleware.js` but preservation of API route doesn't trigger high memory consumption
2. Removal of API route but preservation of `middleware.js` doesn't trigger high memory consumption
**Expected result**:
I expect no high memory consumption when a network call happens in a project that has both API routes and middleware present, if neither of these reads/modifies the request's body. I also expect `bodyParser: false` to improve the situation.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 18.20.0
npm: 10.5.0
Yarn: 1.22.22
pnpm: 9.7.0
Relevant Packages:
next: 14.2.7 // Latest available version is detected (14.2.7).
eslint-config-next: 14.2.7
```
### Which area(s) are affected? (Select all that apply)
create-next-app, Middleware, Pages Router, App Router
### Which stage(s) are affected? (Select all that apply)
next dev (local), next build (local), Other (Deployed)
### Additional context
**Edit:** Tested with Next 15.0.3 and App router, the issue persists. Check the branch `app_router` and follow the instructions from above to see the result.
_No response_ | create-next-app,bug,Middleware,Pages Router | low | Critical |
2,492,825,883 | pytorch | [dashboard][aarch64] fp16 is slower than bf16 | From https://github.com/pytorch/pytorch/pull/134282#issuecomment-2307157197, in the aarch64 dashboard results, if we benchmark with fp16, it is 2x~10x slower than bf16, often causing timeouts.
# bfloat16
https://hud.pytorch.org/benchmark/huggingface/inductor_no_cudagraphs?dashboard=torchinductor&startTime=Fri,%2016%20Aug%202024%2013:17:17%20GMT&stopTime=Fri,%2023%20Aug%202024%2013:17:17%20GMT&granularity=hour&mode=inference&dtype=bfloat16&deviceName=cpu%20(aarch64)&lBranch=main&lCommit=ca3f48dd5ba387cdbf1b5106e4050e8b5c2f175c&rBranch=main&rCommit=ca3f48dd5ba387cdbf1b5106e4050e8b5c2f175c
# float16
https://hud.pytorch.org/benchmark/huggingface/inductor_no_cudagraphs?dashboard=torchinductor&startTime=Fri,%2016%20Aug%202024%2013:12:39%20GMT&stopTime=Fri,%2023%20Aug%202024%2013:12:39%20GMT&granularity=hour&mode=inference&dtype=float16&deviceName=cpu%20(aarch64)&lBranch=desertfire/aarch64_4&lCommit=6078701acfe49369714d45653eb3b662dbe02106&rBranch=desertfire/aarch64_4&rCommit=6078701acfe49369714d45653eb3b662dbe02106
cc @msaroufim @malfet @snadampal @milpuz01 @ezyang @chauhang @penguinwu | module: performance,triaged,module: arm,oncall: pt2 | low | Major |
2,492,830,291 | go | time: TestAfterTick failures | ```
#!watchflakes
default <- pkg == "time" && test == "TestAfterTick"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8738339611694488129)):
=== RUN TestAfterTick
=== PAUSE TestAfterTick
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,492,902,510 | go | net/http: http.ReadResponse does not handle 100-continue | ### Go version
go 1.22.5, but also on go.dev/play on 1.23
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/pivotal/.cache/go-build'
GOENV='/home/pivotal/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/pivotal/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/pivotal/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/local/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/local/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.22.5'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build3271148184=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
Issue two requests with `Expect: 100-continue` headers, to the same server. One using http.DefaultClient.Do(), and the other using raw network connections and parsing the output. Then examine the returned data.
[Reproduction on go.dev](https://go.dev/play/p/fB2Mtrl0X8S)
### What did you see happen?
The final status code in the response object was 100.
### What did you expect to see?
The final status code returned should be a 405 (method not allowed). There may be other differences in how `http.ReadResponse()` works compared to the `readResponse()` function in transport.go as well. Can the logic for those two functions be shared? | NeedsInvestigation | low | Minor |
2,492,904,714 | godot | Godot 4.3 Crashes During Import of Large Asset Directory | ### Tested versions
Reproducible in Godot Engine v4.3.stable.flathub.77dcf97d8. This also occurs on the 4.3 Steam release. This did not occur in the 4.2.1 release.
EDIT: It does occur in the 4.2.2 release on Steam, but instead of crashing, it becomes unresponsive while maxing my CPU cores. I let it run for 25 minutes, and it was still unresponsive. This is on an AMD 5800X3D, so it's unlikely that I killed it too soon.
### System information
MX Linux 23.3 6.9.12-2-liquorix-amd64
### Issue description
Importing a large directory of assets causes a crash when the editor attempts to import and index them. To give an idea:
79926 items (76209 files, 3716 folders)
643.0 MiB (674,255,333 bytes)
I get the following output from the terminal running it with the --verbose flag:
```
Owner@mx:~
$ /usr/bin/flatpak run --branch=stable --arch=x86_64 --verbose --command=godot --file-forwarding org.godotengine.Godot @@ %f @@
F: No installations directory in /etc/flatpak/installations.d. Skipping
F: Opening system flatpak installation at path /var/lib/flatpak
F: Opening user flatpak installation at path /home/Owner/.local/share/flatpak
F: Opening user flatpak installation at path /home/Owner/.local/share/flatpak
F: Opening system flatpak installation at path /var/lib/flatpak
F: Skipping parental controls check for app/org.godotengine.Godot/x86_64/stable since parental controls are disabled globally
F: Opening user flatpak installation at path /home/Owner/.local/share/flatpak
F: Opening system flatpak installation at path /var/lib/flatpak
F: /var/lib/flatpak/runtime/org.freedesktop.Sdk/x86_64/23.08/be4a045f86be2b8a7a592bab299c7dd41c174eba94ab8048401b1fa01c9eb86a/files/lib32 does not exist
F: Cleaning up unused container id 2813547324
F: Cleaning up per-app-ID state for org.godotengine.Godot
F: Allocated instance id 3955385820
F: Add defaults in dir /org/godotengine/Godot/
F: Add locks in dir /org/godotengine/Godot/
F: Allowing host-fs access
F: Not sharing "/run/media" with sandbox: Unable to open path "/run/media": No such file or directory
F: Allowing wayland access
F: Allowing x11 access
F: Allowing pulseaudio access
F: Pulseaudio user configuration file '/home/Owner/.config/pulse/client.conf': Error opening file /home/Owner/.config/pulse/client.conf: No such file or directory
F: Failed to run in transient scope: No systemd user session available, cgroups not available
F: Running 'bwrap --args 40 -- xdg-dbus-proxy --args=42'
F: Running 'bwrap --args 40 -- godot '%f''
Godot Engine v4.3.stable.flathub.77dcf97d8 - https://godotengine.org
Inconsistent value (1) for DRI_PRIME. Should be < 1 (GPU devices count). Using: 0
OpenGL API 4.6 (Core Profile) Mesa 24.1.3 (git-0c49f54c76) - Compatibility - Using Device: AMD - AMD Radeon RX 6900 XT (radeonsi, navi21, LLVM 17.0.6, DRM 3.57, 6.9.12-2-liquorix-amd64)
Editing project: /home/Owner/Godot/mastermind-test
Godot Engine v4.3.stable.flathub.77dcf97d8 - https://godotengine.org
Vulkan 1.3.278 - Forward Mobile - Using Device #0: AMD - AMD Radeon RX 6900 XT (RADV NAVI21)
Owner@mx:~
$ ERROR: Condition "line.size() <= 1" is true. Returning: ERR_PARSE_ERROR
at: import (editor/import/resource_importer_csv_translation.cpp:95)
ERROR: Error importing 'res://MasterMind Assets/symbolClass/symbols.csv'.
at: _reimport_file (editor/editor_file_system.cpp:2607)
ERROR: Caller thread can't call this function in this node (/root). Use call_deferred() or call_thread_group() instead.
at: propagate_notification (scene/main/node.cpp:2422)
================================================================
handle_crash: Program crashed with signal 11
Engine version: Godot Engine v4.3.stable.flathub (77dcf97d82cbfe4e4615475fa52ca03da645dbd8)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[1] /usr/lib/x86_64-linux-gnu/libc.so.6(+0x3ee80) [0x7fd0ab550e80] (??:0)
[2] /app/bin/godot-bin(+0x132f1e3) [0x5b7e28b311e3] (??:?)
[3] /app/bin/godot-bin(+0x132f2d7) [0x5b7e28b312d7] (??:?)
[4] /app/bin/godot-bin(+0x132f569) [0x5b7e28b31569] (??:?)
[5] /app/bin/godot-bin(+0x132facb) [0x5b7e28b31acb] (??:?)
[6] /app/bin/godot-bin(+0x132fcde) [0x5b7e28b31cde] (??:?)
[7] /app/bin/godot-bin(+0x13300e9) [0x5b7e28b320e9] (??:?)
[8] /app/bin/godot-bin(+0x1331ae7) [0x5b7e28b33ae7] (??:?)
[9] /app/bin/godot-bin(+0x132ec77) [0x5b7e28b30c77] (??:?)
[10] /app/bin/godot-bin(+0x132e068) [0x5b7e28b30068] (??:?)
[11] /app/bin/godot-bin(+0x1348efe) [0x5b7e28b4aefe] (??:?)
[12] /app/bin/godot-bin(+0x134ac14) [0x5b7e28b4cc14] (??:?)
[13] /app/bin/godot-bin(+0x1348c72) [0x5b7e28b4ac72] (??:?)
[14] /app/bin/godot-bin(+0x134ac14) [0x5b7e28b4cc14] (??:?)
[15] /app/bin/godot-bin(+0x1348c72) [0x5b7e28b4ac72] (??:?)
[16] /app/bin/godot-bin(+0x134ac14) [0x5b7e28b4cc14] (??:?)
[17] /app/bin/godot-bin(+0x1348c72) [0x5b7e28b4ac72] (??:?)
[18] /app/bin/godot-bin(+0x134ac14) [0x5b7e28b4cc14] (??:?)
[19] /app/bin/godot-bin(+0x1348c72) [0x5b7e28b4ac72] (??:?)
[20] /app/bin/godot-bin(+0x1348f86) [0x5b7e28b4af86] (??:?)
[21] /app/bin/godot-bin(+0x134706e) [0x5b7e28b4906e] (??:?)
[22] /app/bin/godot-bin(+0x1336046) [0x5b7e28b38046] (??:?)
[23] /app/bin/godot-bin(+0x133693b) [0x5b7e28b3893b] (??:?)
[24] /app/bin/godot-bin(+0x1336a30) [0x5b7e28b38a30] (??:?)
[25] /app/bin/godot-bin(+0x1336e0d) [0x5b7e28b38e0d] (??:?)
[26] /app/bin/godot-bin(+0x48d1d17) [0x5b7e2c0d3d17] (??:?)
[27] /app/bin/godot-bin(+0x1cfa667) [0x5b7e294fc667] (??:?)
[28] /app/bin/godot-bin(+0x185d695) [0x5b7e2905f695] (??:?)
[29] /app/bin/godot-bin(+0x185fbaa) [0x5b7e29061baa] (??:?)
[30] /app/bin/godot-bin(+0x4d5dbc3) [0x5b7e2c55fbc3] (??:?)
[31] /app/bin/godot-bin(+0x4d5e46c) [0x5b7e2c56046c] (??:?)
[32] /app/bin/godot-bin(+0x4794add) [0x5b7e2bf96add] (??:?)
[33] /app/bin/godot-bin(+0x4faaeb4) [0x5b7e2c7aceb4] (??:?)
[34] /usr/lib/x86_64-linux-gnu/libc.so.6(+0x8ee39) [0x7fd0ab5a0e39] (??:0)
[35] /usr/lib/x86_64-linux-gnu/libc.so.6(clone+0x44) [0x7fd0ab6289c4] (??:0)
-- END OF BACKTRACE --
================================================================
```
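As an aside, the `ERR_PARSE_ERROR` near the top of the log comes from the CSV translation importer, whose check `line.size() <= 1` rejects rows with one or fewer fields. A hypothetical `symbols.csv` sketch like the following (a single column, so no locale columns) would trigger that message; the crash itself is a separate problem:

```csv
symbols
alpha
beta
```

Godot's translation importer expects at least a key column plus one locale column per row, so single-field rows fail to parse.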
It also keeps hold of my terminal emulator despite the process having completely crashed and no longer having a PID; I have to press CTRL+C to get my prompt back.
### Steps to reproduce
1. This is a real simple one. Create a new project with a new directory.
2. Copy the assets to the root of the directory.
3. Select the `res://` folder to initiate the import.
4. Wait for the crash to happen.
A download link for a 7zip archive of the asset directory that causes the crash is included below the MRP.
### Minimal reproduction project (MRP)
[MRP.zip](https://github.com/user-attachments/files/16788310/MRP.zip)
Asset directory: https://mega.nz/file/eyhBkBAK#mDAghej8jNCh62ZaKzWqSq0xJ8flDerRUHCUPDx4zUY
Please only download this if you are actually working on the troubleshooting; I have a limited bandwidth allotment for this. | bug,topic:editor,confirmed,needs testing,crash | low | Critical |
2,492,926,294 | go | time: TestIssue5745 failures | ```
#!watchflakes
default <- pkg == "time" && test == "TestIssue5745"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8738339611694488129)):
=== RUN TestIssue5745
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,492,926,355 | go | x/vulndb: add a lint check for unmerged modules | Add a lint check for vulndb reports to ensure that if modules can be merged (they have the same module path and packages, but possibly distinct versions), then they are merged. This will make it less likely for us to submit UNREVIEWED reports that incorrectly mark an entire module as vulnerable. | vulncheck or vulndb | low | Minor |
2,492,960,907 | vscode | Support reverse search in tree find | Once #225417 is done, the remaining thing that the filter widget has that find does not is reverse search.
If we support that in tree find, it would enable us to remove the debug filter widget, which has some issues currently. | feature-request,under-discussion,debug-console | low | Critical |
2,492,962,541 | vscode | Allow setting Telemetry Output Logs to Trace always and independently |
Type: <b>Feature Request</b>
I'd like to set `Telemetry` and `Extension Telemetry` to log level trace so they are always visible for me, but I don't want to have all of VS Code set to trace all the time. The quick pick for managing these log levels doesn't include them at present.
VS Code version: Code - Insiders 1.93.0-insider (d1388fd24fc0acf17ae1f759e85c1acf559ed759, 2024-08-28T05:14:36.262Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<!-- generated by issue reporter --> | feature-request | low | Major |
2,493,010,265 | godot | `get_child` error when duplicating .GLB model nodes with animations | ### Tested versions
reproducible in 4.3 stable
not reproducible in 4.2 stable
not tested in other builds
### System information
Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1070 (NVIDIA; 31.0.15.5161) - Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz (8 Threads)
### Issue description
When calling duplicate() on a GLB node that has animations under its AnimationPlayer, the engine fails to resolve the node's children and produces this error:
```
E 0:00:00:0862 node_3d.gd:5 @ _ready(): Index p_index = 1 is out of bounds ((int)data.children_cache.size() = 1).
<C++ Source> scene/main/node.cpp:1688 @ get_child()
<Stack Trace> node_3d.gd:5 @ _ready()
```
That also triggers a second error in the debugger:
```
E 0:00:00:0863 node_3d.gd:5 @ _ready(): Child node disappeared while duplicating.
<C++ Error> Parameter "copy_child" is null.
<C++ Source> scene/main/node.cpp:2926 @ _duplicate_properties()
<Stack Trace> node_3d.gd:5 @ _ready()
```
### Steps to reproduce
Import a .GLB model with an animation into a scene and call duplicate() on it.
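A minimal script along these lines reproduces it (names are hypothetical; the MRP's `node_3d.gd` does the equivalent, with the imported GLB scene as a child):

```gdscript
extends Node3D

func _ready() -> void:
	# "Model" is the instanced .GLB scene containing an AnimationPlayer
	# with imported animations. duplicate() walks its children and fails
	# with the get_child() error above.
	var copy := $Model.duplicate()
	add_child(copy)
```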
### Minimal reproduction project (MRP)
[glb-duplicate-test.zip](https://github.com/user-attachments/files/16789050/glb-duplicate-test.zip)
| bug,regression,topic:animation | low | Critical |
2,493,013,077 | go | time: TestParseYday failures | ```
#!watchflakes
default <- pkg == "time" && test == "TestParseYday"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8738339611694488129)):
=== RUN TestParseYday
=== PAUSE TestParseYday
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,493,035,777 | pytorch | DISABLED test_prod_large (__main__.TestCuda) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_prod_large&suite=TestCuda&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29382477213).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 15 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_prod_large`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1159, in originate_pairs
return [pair_type(actual, expected, id=id, **options)]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2610, in __init__
super().__init__(actual, expected, check_dtype=False, **other_parameters)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 536, in __init__
actual, expected = self._process_inputs(actual, expected, id=id)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2625, in _process_inputs
return [self._to_number(input, id=id) for input in (actual, expected)]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2625, in <listcomp>
return [self._to_number(input, id=id) for input in (actual, expected)]
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2637, in _to_number
number = number_like.item()
RuntimeError: Host and device pointer dont match with cudaHostRegister. Please dont use this feature by setting PYTORCH_CUDA_ALLOC_CONF=use_cuda_host_register:False (default)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_cuda.py", line 816, in test_prod_large
self.assertEqual(x.prod(), 1)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3846, in assertEqual
error_metas = not_close_error_metas(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1217, in not_close_error_metas
pairs = originate_pairs(
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1172, in originate_pairs
f"Originating a {pair_type.__name__}() at item {''.join(str([item]) for item in id)} with\n\n"
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/_tensor.py", line 993, in __format__
return self.item().__format__(format_spec)
RuntimeError: Host and device pointer dont match with cudaHostRegister. Please dont use this feature by setting PYTORCH_CUDA_ALLOC_CONF=use_cuda_host_register:False (default)
To execute this test, run the following from the base repo dir:
python test/test_cuda.py TestCuda.test_prod_large
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_cuda.py`
cc @ptrblck @msaroufim @clee2000 | module: cuda,triaged,module: flaky-tests,skipped | low | Critical |
2,493,050,234 | godot | Playing multiple AudioStreams together causes alterations of pitch or volume | ### Tested versions
- Reproducible in v4.3.stable.mono.official [77dcf97d8]
- Reproducible in v4.2.1.stable.mono.official [b09f793f5]
### System information
Godot v4.3.stable.mono - Windows 10.0.22631 - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 2080 Super with Max-Q Design (NVIDIA; 32.0.15.6081) - Intel(R) Core(TM) i7-10875H CPU @ 2.30GHz (16 Threads)
### Issue description
This audio bug is hard to spot. When multiple AudioStreamPlayers play different AudioStreams, the pitch or volume of certain frequencies in the streams gets altered.
I've provided a video below to show the issue better.
https://github.com/user-attachments/assets/11866800-dc11-4011-97f2-3ee54e52349f
In the video, a brownian noise is played constantly, with no major change in pitch or volume. When a pluck sound effect is played, however, you can hear the pitch of the brownian noise get very slightly altered. I suspect this happens to the audio frequencies shared by both sound effects, as it is easier to spot this bug if two AudioStreams have a very similar pitch range.
### Steps to reproduce
1. Start with a scene of your liking
2. Add two AudioStreamPlayers
3. Add two AudioStreams to the Players. (I recommend using a constant audio sample and a short sound effect, like a pluck)
4. Play the AudioStreamPlayers, and make sure both AudioStreams are audible at the same time
5. Try to spot the alteration in pitch/volume
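For a script-driven version of the steps above, something like this sketch works (node names are hypothetical; assign the two streams in the inspector first):

```gdscript
extends Node

@onready var noise_player: AudioStreamPlayer = $NoisePlayer  # looping brownian noise
@onready var pluck_player: AudioStreamPlayer = $PluckPlayer  # short pluck effect

func _ready() -> void:
	noise_player.play()
	pluck_player.play()  # while both are audible, listen for the pitch/volume shift
```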
### Minimal reproduction project (MRP)
[audio-pitch-issue.zip](https://github.com/user-attachments/files/16789278/audio-pitch-issue.zip)
This is the same project I've used in the video. To replicate the issue, simply set the "playing" property of the first AudioStreamPlayer to True, and then do the same to the second one. The bug happens both in editor and in the running application. | bug,needs testing,topic:audio | low | Critical |
2,493,051,912 | vscode | Typing to reveal hidden notebook cell sometimes scrolls after reveal |
Type: <b>Bug</b>
From #163943
Potentially related to inertial scrolling on a trackpad. Tested on MacBook Pro trackpad
1. Create long notebook with multiple cells
1. Start editing in first cell
1. Scroll cell off screen
1. Type to make an edit
**Bug**
Sometimes when the cell is revealed, the entire notebook editor also scrolls down afterwards
https://github.com/user-attachments/assets/3a5fb7b3-44e8-466e-8352-5c68d54edf9e
VS Code version: Code - Insiders 1.93.0-insider (Universal) (d1388fd24fc0acf17ae1f759e85c1acf559ed759, 2024-08-28T05:14:36.262Z)
OS version: Darwin arm64 23.6.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M2 Max (12 x 2400)|
|GPU Status|2d_canvas: unavailable_software<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: disabled_software<br>multiple_raster_threads: enabled_on<br>opengl: disabled_off<br>rasterization: disabled_software<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: disabled_software<br>video_encode: disabled_software<br>webgl: unavailable_software<br>webgl2: unavailable_software<br>webgpu: unavailable_software<br>webnn: disabled_off|
|Load (avg)|3, 3, 3|
|Memory (System)|64.00GB (0.19GB free)|
|Process Argv|--crash-reporter-id 0fffb5da-9cd7-46fd-9e7f-a1564e8c5fda|
|Screen Reader|no|
|VM|0%|
</details>
<!-- generated by issue reporter --> | bug,notebook-layout | low | Critical |
2,493,090,561 | rust | rustdoc search: allow type-based search for constants and statics | Currently it is possible to search for function items based on their signature, but it is *not* possible to search for other kinds of items based on their type.
@rustbot label A-rustdoc-search | T-rustdoc,C-feature-request,A-rustdoc-search | low | Minor |
2,493,091,495 | rust | Tracking Issue for fmt-debug option | This is a tracking issue for the `-Z fmt-debug` option
The feature gate for the option is `#![feature(fmt_debug)]`
### About tracking issues
Tracking issues are used to record the overall progress of implementation.
They are also used as hubs connecting to other relevant issues, e.g., bugs or open design questions.
A tracking issue is however *not* meant for large scale discussion, questions, or bug reports about a feature.
Instead, open a dedicated issue for the specific matter and add the relevant feature gate label.
Discussion comments will get marked as off-topic or deleted.
Repeated discussions on the tracking issue may lead to the tracking issue getting locked.
### Steps
- [x] Implement the proposal (https://github.com/rust-lang/rust/pull/123940)
- [ ] Adjust (stable user-facing) documentation ([see instructions on rustc-dev-guide][doc-guide])
- [ ] Stabilization PR ([see instructions on rustc-dev-guide][stabilization-guide])
[stabilization-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#stabilization-pr
[doc-guide]: https://rustc-dev-guide.rust-lang.org/stabilization_guide.html#documentation-prs
[Style Guide]: https://github.com/rust-lang/rust/tree/master/src/doc/style-guide
### Unresolved Questions
<!--
Include any open questions that need to be answered before the feature can be
stabilised.
-->
### Implementation history
* https://github.com/rust-lang/rust/pull/123940
| T-compiler,C-tracking-issue,F-fmt_debug,-Zfmt-debug | low | Critical |
2,493,127,954 | go | x/tools/gopls: add a source action to see assembly for a range of lines | **Is your feature request related to a problem? Please describe.**
I wanted to understand why this allocates:
```go
func normalize(v any) (any, typeCode, bool) {
switch v := v.(type) {
case string:
return v, typeCodeString, true
// ...
}
}
```
**Describe the solution you'd like**
I'd like to be able to select a line and view the assembly for that line (or for a range of lines).
**Describe alternatives you've considered**
I can (and did) use "Browse amd64 assembly for normalize" but that doesn't scale well to larger functions.
**Additional context**
https://github.com/golang/go/issues/67478 | FeatureRequest,gopls,Tools | low | Minor |
2,493,141,236 | rust | Very long compilation time on Apple Silicon platform | I have a crate with about 2k files and 600kloc. Most of the code is autogenerated. It compiles in less than 2 minutes with release profile on Ryzen 7950X3D 96GB Linux and 40+ minutes on M1Max 64GB MacOS.
According to `-Ztime-passes` the majority of the time is spent in `finish_ongoing_codegen`. I can't share the original code, it uses a lot of private deps, but the link to the repro case is below.
Reproduces with rustc 1.78, 1.80, 1.81-nightly, 1.82-nightly.
The code in the repo takes 40+ minutes to compile on Apple Silicon (M1 and M3 tested)
On Linux (quite powerful box, 7950X3D, 96GB RAM) it takes 2 minutes.
This is a distilled down version of the original code, the original is autogenerated from a DSL and thus has such weird structure.
Originally it's a HTTP API service with A LOT of endpoints (621 specifically).
The python script recreates the `bug.rs` module and could be used to play with the number of "endpoints". It basically looks like that:

X axis: number of "endpoints"
Y axis: seconds to compile.
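The real generator is in the linked repo; a hypothetical sketch of its shape is below. It emits N "endpoint" structs whose fields are all `Option`-wrapped, which is the pattern that triggers the slowdown (names and field counts are illustrative):

```python
def make_endpoint(i: int, n_fields: int = 8) -> str:
    # One "endpoint" response struct; every field is Option-wrapped,
    # mirroring the shape that blows up codegen time on Apple Silicon.
    fields = "\n".join(
        f"    pub field_{j}: Option<String>," for j in range(n_fields)
    )
    return f"pub struct Endpoint{i} {{\n{fields}\n}}\n"

def make_module(n_endpoints: int) -> str:
    # Concatenate the structs into a bug.rs-style module body.
    return "\n".join(make_endpoint(i) for i in range(n_endpoints))

if __name__ == "__main__":
    print(make_module(3).count("Option<String>"))  # prints 24 (3 structs x 8 fields)
```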
If the fields in the struct are made non-optional, the problem goes away.
Chat discussion: https://rust-lang.zulipchat.com/#narrow/stream/247081-t-compiler.2Fperformance/topic/Major.20slowdown.20on.20aarch64-apple-darwin
Repro case: https://github.com/kika/rust-apple-silicon-bug | A-LLVM,O-macos,I-compiletime,O-AArch64 | low | Critical |
2,493,184,769 | flutter | Clang compiler crashes on `mac-698-h526` | Bot: <https://chromium-swarm.appspot.com/bot?id=mac-698-h526>
[Example failure](https://logs.chromium.org/logs/flutter/buildbucket/cr-buildbucket/8738320501209133089/+/u/build_ci_host_profile_flutter_build_archives:artifacts/stdout):
```txt
[5474/6425] CXX clang_arm64/obj/flutter/third_party/vulkan-deps/spirv-tools/src/source/opt/libspvtools_opt.eliminate_dead_io_components_pass.o
FAILED: clang_arm64/obj/flutter/third_party/vulkan-deps/spirv-tools/src/source/opt/libspvtools_opt.eliminate_dead_io_components_pass.o
../../../flutter/buildtools/mac-arm64/clang/bin/clang++ -MMD -MF clang_arm64/obj/flutter/third_party/vulkan-deps/spirv-tools/src/source/opt/libspvtools_opt.eliminate_dead_io_components_pass.o.d -DUSE_OPENSSL=1 -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D_FORTIFY_SOURCE=2 -D_LIBCPP_DISABLE_AVAILABILITY=1 -D_LIBCPP_DISABLE_VISIBILITY_ANNOTATIONS -D_LIBCPP_ENABLE_THREAD_SAFETY_ANNOTATIONS -DNDEBUG -DNVALGRIND -DDYNAMIC_ANNOTATIONS_ENABLED=0 -I../../.. -Iclang_arm64/gen -I../../../flutter/third_party/libcxx/include -I../../../flutter/third_party/libcxxabi/include -I../../../flutter/build/secondary/flutter/third_party/libcxx/config -I../../../flutter/third_party/vulkan-deps/spirv-tools/src -I../../../flutter/third_party/vulkan-deps/spirv-headers/src/include -I../../../flutter/third_party/vulkan-deps/spirv-tools/src/include -Iclang_arm64/gen/flutter/third_party/vulkan-deps/spirv-tools/src -fno-strict-aliasing -fstack-protector-all --target=arm64-apple-macos -arch arm64 -fcolor-diagnostics -Wall -Wextra -Wendif-labels -Werror -Wno-missing-field-initializers -Wno-unused-parameter -Wno-unused-but-set-parameter -Wno-unused-but-set-variable -Wno-implicit-int-float-conversion -Wno-deprecated-copy -Wno-psabi -Wno-deprecated-literal-operator -Wno-unqualified-std-cast-call -Wno-non-c-typedef-for-linkage -Wno-range-loop-construct -Wunguarded-availability -Wno-deprecated-declarations -fdebug-prefix-map=/Volumes/Work/s/w/ir/cache/builder/src/= -no-canonical-prefixes -fvisibility=hidden -Wstring-conversion -Wnewline-eof -O2 -fno-ident -fdata-sections -ffunction-sections -g2 -Wno-implicit-fallthrough -Wno-newline-eof -Wno-unreachable-code-break -Wno-unreachable-code-return -std=c++17 -fvisibility-inlines-hidden -std=c++17 -fno-rtti -nostdinc++ -nostdinc++ -fvisibility=hidden -fno-exceptions -stdlib=libc++ -isysroot ../../../../../osx_sdk/xcode_15a240d/XCode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX14.0.sdk -mmacosx-version-min=10.14.0 -c 
../../../flutter/third_party/vulkan-deps/spirv-tools/src/source/opt/eliminate_dead_io_components_pass.cpp -o clang_arm64/obj/flutter/third_party/vulkan-deps/spirv-tools/src/source/opt/libspvtools_opt.eliminate_dead_io_components_pass.o
clang++: error: clang frontend command failed with exit code 134 (use -v to see invocation)
Fuchsia clang version 18.0.0 (https://llvm.googlesource.com/llvm-project 725656bdd885483c39f482a01ea25d67acf39c46)
Target: arm64-apple-macos
Thread model: posix
InstalledDir: ../../../flutter/buildtools/mac-arm64/clang/bin
```
@zanderso mentioned I should file this to see if in the near future this bot fails again/i.e. trend analysis, so here it is. | engine,team-infra,P2,triaged-infra | low | Critical |
2,493,227,760 | vscode | Infinite loading when filesystem has no registered provider | 1. Debug the filesystem provider API
2. Try to do a search using the file picker
3. Notice that it infinitely loads because there is no file search provider :bug: | bug,search,search-api | low | Critical |
2,493,240,397 | terminal | ConPTY inside WSL is broken (e.g. running cmd.exe inside WSL) | ### Windows Terminal version
1.23.2391.0
### Windows build number
_No response_
### Other Software
_No response_
### Steps to reproduce
Run cmd.exe inside WSL.
### Expected Behavior
_No response_
### Actual Behavior
* W32IM never gets turned off.
--> We must emit the corresponding mode resets before exiting VtIo.
* Launching the win32 process freezes for 3s as it waits for the DA1 response which gets mangled. Almost certainly the same issue as #17813. | Product-Conpty,Area-Input,Issue-Bug,Priority-3 | low | Critical |
2,493,244,796 | next.js | next build hangs after next dev has compiled on Windows | ### Link to the code that reproduces this issue
https://github.com/jazelly/next-hang
### To Reproduce
1. `create-next-app`
2. `yarn dev` in one terminal
3. visit it to let it compile
4. `yarn build` in a different terminal
5. You might see an error, try again then you will see it hangs
### Current vs. Expected behavior
**Expect:**
I expect it to pass/fail consistently
**Actual:**
It hangs at:
```console
PS E:\github\my\next-bug\next-dead-lock> npx yarn build
npm info using npm@10.8.2
npm info using node@v22.5.1
npm http fetch GET 200 https://registry.npmjs.org/yarn 136ms (cache revalidated)
yarn run v1.22.22
$ next build
   ▲ Next.js 14.2.7
```
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 11 Home
Binaries:
Node: 22.5.1
npm: 10.8.2
Yarn: 1.22.22
pnpm: N/A
Relevant Packages:
next: 14.2.5
eslint-config-next: 14.2.5
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Developer Experience
### Which stage(s) are affected? (Select all that apply)
next build (local)
### Additional context
_No response_ | bug | low | Critical |
2,493,245,366 | pytorch | Compile fails on Flex attention + FSDP | ### 🐛 Describe the bug
Flex attention under FSDP works without `torch.compile`, but fails with it. The key error seems to be `ValueError: Pointer argument (at 2) cannot be accessed from Triton (cpu tensor?)`. It also works when no `block_mask` is passed.
### Error logs
```
-> % torchrun --nproc_per_node=2 flex_fsdp.py
W0829 00:56:14.552000 680552 torch/distributed/run.py:793]
W0829 00:56:14.552000 680552 torch/distributed/run.py:793] *****************************************
W0829 00:56:14.552000 680552 torch/distributed/run.py:793] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
W0829 00:56:14.552000 680552 torch/distributed/run.py:793] *****************************************
skipping cudagraphs due to skipping cudagraphs due to multiple devices: device(type='cuda', index=1), device(type='cuda', index=0)
[rank1]: Traceback (most recent call last):
[rank1]: File "[REDACTED]/flex_fsdp.py", line 56, in <module>
[rank1]:
[rank1]: File "[REDACTED]/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: File "[REDACTED]/torch/nn/modules/module.py", line 1747, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: File "[REDACTED]/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 864, in forward
[rank1]: output = self._fsdp_wrapped_module(*args, **kwargs)
[rank1]: File "[REDACTED]/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: File "[REDACTED]/torch/nn/modules/module.py", line 1747, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: File "[REDACTED]/torch/_dynamo/eval_frame.py", line 465, in _fn
[rank1]: return fn(*args, **kwargs)
[rank1]: File "[REDACTED]/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
[rank1]: return self._call_impl(*args, **kwargs)
[rank1]: File "[REDACTED]/torch/nn/modules/module.py", line 1747, in _call_impl
[rank1]: return forward_call(*args, **kwargs)
[rank1]: File "[REDACTED]/flex_fsdp.py", line 23, in forward
[rank1]: def forward(self, x):
[rank1]: File "[REDACTED]/torch/_dynamo/eval_frame.py", line 632, in _fn
[rank1]: return fn(*args, **kwargs)
[rank1]: File "[REDACTED]/torch/_functorch/aot_autograd.py", line 1100, in forward
[rank1]: return compiled_fn(full_args)
[rank1]: File "[REDACTED]/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 308, in runtime_wrapper
[rank1]: all_outs = call_func_at_runtime_with_args(
[rank1]: File "[REDACTED]/torch/_functorch/_aot_autograd/utils.py", line 124, in call_func_at_runtime_with_args
[rank1]: out = normalize_as_list(f(args))
[rank1]: File "[REDACTED]/torch/_functorch/_aot_autograd/utils.py", line 98, in g
[rank1]: return f(*args)
[rank1]: File "[REDACTED]/torch/autograd/function.py", line 575, in apply
[rank1]: return super().apply(*args, **kwargs) # type: ignore[misc]
[rank1]: File "[REDACTED]/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 1525, in forward
[rank1]: fw_outs = call_func_at_runtime_with_args(
[rank1]: File "[REDACTED]/torch/_functorch/_aot_autograd/utils.py", line 124, in call_func_at_runtime_with_args
[rank1]: out = normalize_as_list(f(args))
[rank1]: File "[REDACTED]/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 488, in wrapper
[rank1]: return compiled_fn(runtime_args)
[rank1]: File "[REDACTED]/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 667, in inner_fn
[rank1]: outs = compiled_fn(args)
[rank1]: File "[REDACTED]/torch/_inductor/codecache.py", line 1464, in __call__
[rank1]: return self.current_callable(inputs)
[rank1]: File "[REDACTED]/torch/_inductor/utils.py", line 1954, in run
[rank1]: return model(new_inputs)
[rank1]: File "[REDACTED]/torchinductor_victor/wj/cwjdqvgoke23vff3px7gsasmyalyiwh5hxpqyks5gmydiuy4z45h.py", line 624, in call
[rank1]: triton_tem_fused_0.run(primals_1, buf0, primals_2, primals_3, primals_4, primals_5, buf1, grid=torch._inductor.kernel.flex_attention.flex_attention_grid(1, 2, 256, 64, meta0), stream=stream1)
[rank1]: File "[REDACTED]/torch/_inductor/runtime/triton_heuristics.py", line 884, in run
[rank1]: return launcher(
[rank1]: File "<string>", line 13, in launcher
[rank1]: File "[REDACTED]/triton/backends/nvidia/driver.py", line 365, in __call__
[rank1]: self.launch(*args, **kwargs)
[rank1]: ValueError: Pointer argument (at 2) cannot be accessed from Triton (cpu tensor?)
W0829 00:56:24.984000 680552 torch/distributed/elastic/multiprocessing/api.py:890] Sending process 681845 closing signal SIGTERM
E0829 00:56:25.150000 680552 torch/distributed/elastic/multiprocessing/api.py:862] failed (exitcode: 1) local_rank: 1 (pid: 681849) of binary: [REDACTED]/python
Traceback (most recent call last):
File "[REDACTED]/torchrun", line 8, in <module>
sys.exit(main())
File "[REDACTED]/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
File "[REDACTED]/torch/distributed/run.py", line 919, in main
run(args)
File "[REDACTED]/torch/distributed/run.py", line 910, in run
elastic_launch(
File "[REDACTED]/torch/distributed/launcher/api.py", line 138, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "[REDACTED]/torch/distributed/launcher/api.py", line 269, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
flex_fsdp.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-08-29_00:56:24
host : compute-h100-ord-node-13.local.vcn
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 681849)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
### Minified repro
```python
import torch
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

torch.set_default_device("cuda")


def sliding_window(b, h, q_idx, kv_idx):
    return q_idx - kv_idx <= 128


B, S, H, D = 1, 256, 2, 64
d_model = H * D  # Total model dimension
block_mask = create_block_mask(sliding_window, B=None, H=None, Q_LEN=S, KV_LEN=S)


class TransformerBlock(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        # Self-attention
        B, S, _ = x.shape
        x_reshaped = x.view(B, S, H, D).transpose(1, 2)
        attn_output = flex_attention(
            x_reshaped, x_reshaped, x_reshaped, block_mask=block_mask
        )
        x = x + attn_output.sum()
        return x


# Initialize distributed environment
torch.distributed.init_process_group(backend="nccl")
torch.cuda.set_device(torch.distributed.get_rank())

# Create and wrap the model with FSDP
model = TransformerBlock()
model = torch.compile(model, mode="reduce-overhead", dynamic=False)
fsdp_model = FSDP(
    model,
    use_orig_params=True,
    mixed_precision=torch.distributed.fsdp.MixedPrecision(
        param_dtype=torch.float16,
        reduce_dtype=torch.float16,
        buffer_dtype=torch.float16,
    ),
)

# Create input tensor
x = torch.randn(B, S, d_model, device="cuda", dtype=torch.float16, requires_grad=True)

# Forward pass with FSDP
output = fsdp_model(x)

# Backward pass
output.sum().backward()

# Check gradients
print("Gradients computed:")
print(f"x.grad is not None: {x.grad is not None}")

# Print model parameters
for name, param in fsdp_model.named_parameters():
    if param.grad is not None:
        print(f"{name} has gradients")
    else:
        print(f"{name} has no gradients")

# Clean up
torch.distributed.destroy_process_group()
```
Run with `torchrun --nproc_per_node=2 flex_fsdp.py`
### Versions
```
python3 collect_env.py
/home/victor/anaconda3/envs/flex/lib/python3.10/site-packages/torch/_subclasses/functional_tensor.py:271: UserWarning: Failed to initialize NumPy: No module named 'numpy' (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:84.)
cpu = _conversion_method_template(device=torch.device("cpu"))
Collecting environment information...
PyTorch version: 2.5.0.dev20240827+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.5.0-1018-oracle-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.161.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-111
Off-line CPU(s) list: 112-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 1
Core(s) per socket: 56
Socket(s): 2
Stepping: 8
CPU max MHz: 3800.0000
CPU min MHz: 0.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55
NUMA node1 CPU(s): 56-111
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] pytorch-triton==3.0.0+dedb7bdf33
[pip3] torch==2.5.0.dev20240827+cu121
[conda] pytorch-triton 3.0.0+dedb7bdf33 pypi_0 pypi
[conda] torch 2.5.0.dev20240827+cu121 pypi_0 pypi
```
cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @zhaojuanmao @mrshenli @rohan-varma @chauhang @penguinwu @zou3519 @ydwu4 @bdhirsh @yf225 @Chillee @drisspg @yanboliang @BoyuanFeng @XilunWu @ezyang | oncall: distributed,triaged,module: fsdp,oncall: pt2,module: higher order operators,module: pt2-dispatcher,module: flex attention,pt2d-triage-nov2024 | low | Critical |
2,493,256,598 | rust | rustdoc search: path matches that skip segments should be deprioritized | a search for `collections::b` matches `std::collections::LinkedList::back` far before it matches `std::collections::BTreeMap`.
Omitting segments at the start and end should not be penalized, but omitting them in the *middle* should be. | T-rustdoc,C-enhancement,A-rustdoc-search | low | Minor |
2,493,260,598 | godot | GDShader has several syntax issues when you `#define` function-like macros | ### Tested versions
- Reproducible in v4.3.stable.flathub [77dcf97d8]
### System information
Godot v4.3.stable (77dcf97d8) - Freedesktop SDK 23.08 (Flatpak runtime) - X11 - Vulkan (Forward+) - integrated Intel(R) HD Graphics 5500 (BDW GT2) - Intel(R) Core(TM) i5-5300U CPU @ 2.30GHz (4 Threads)
### Issue description
I've identified several issues when defining macros with arguments in the GDShader preprocessor.
Issues are confirmed by comparing expected behavior to GLSL (ShaderToy / glslang) and C Preprocessor (GCC).
- [ ] 1. Preprocessor must report error on duplicate parameter names in macros
- [ ] 2. Preprocessor should leave names of function-like macros alone when used without parentheses
- [ ] 3. Preprocessor should allow function-like macros with no parameters (empty parentheses)
- [x] 4. Preprocessor should not replace parameter names inside strings when expanding macros - *(won't fix)*
- [ ] 5. Macro expansion could disallow recursion to avoid issues
- [ ] 6. Expand macro itself before (not after) its arguments
- [ ] 8. Replace macro arguments all at once, not sequentially
- [ ] 9. Forbid defining or undefining a macro named "defined"
#### Issue 1: Report duplicate macro parameters
GLSL in ShaderToy gives the following error in this case:
> 'myParam' : duplicate macro parameter name
In the C preprocessor (GCC) the error looks like this:
> test.c:1:14: error: duplicate macro parameter "myParam"
#### Issue 2: Leave function-like macro name alone when used without parentheses
There is another inconsistency between GDShader and GLSL/C preprocessors.
If you define a function-like macro with parentheses, GLSL and C will still allow that name to appear without parentheses and leave it alone (it won't try to expand it). GDShader, however, doesn't do this, and its way of handling it causes issues (not entirely sure to what extent, but I found a very weird issue in my tests).
#### Issue 3: Support function-like macros with zero parameters
GLSL replaces `macroName()` (no arguments, but including the parentheses) when it's defined with parentheses, and just `macroName` if it's defined without parentheses. I tested the C preprocessor (GCC) and it works the same way.
This should be allowed in GDShader as well.
#### Issue 4: Do not expand macro parameters defined inside strings
Strings are not supported in GLSL, but since they are going to be supported in GDShader, their handling in the preprocessor should match C. I assume it's still an issue, but note that I didn't test this against master (please confirm).
Parameters in macros are replaced wherever they appear in their definition. But if they appear inside a string, it should of course not be replaced. That's how the C preprocessor handles it.
#### Other issues: see comments below
### Steps to reproduce
Hint: Using an invalid token like `$` logs the preprocessor output (even on otherwise "valid" code) to help testing.
`issue1.gdshader`
```glsl
shader_type spatial;
#define join(x, x) x ## x
// it seems to be simply using just the first parameter name
// it must raise a preprocessor error like "duplicate macro parameter name"
const int join(a,b) = join(1,2); // becomes: `const int aa = 11 ;`
$ // voluntary error to log preprocessor output
```
`issue2.gdshader`
```glsl
shader_type spatial;
#define bar(x) x ## x
const int a = bar a b / c d bar(12);
// incorrectly results in this code: `const int a = 1212 ;`
// no idea what the preprocessor is doing here; ignoring everything between macro name and `(` perhaps?
// should be: `const int a = bar a b / c d 1212 ;`
$ // voluntary error to log preprocessor output
```
`issue3.gdshader`
```glsl
shader_type spatial;
#define foo() whatever()
// raises "invalid argument name" error
// empty parentheses should be allowed; this form must require parentheses to expand it
foo foo() foo // should become: foo whatever() foo
```
`issue4.gdshader`
```glsl
shader_type spatial;
#define str(x) "x"
uniform int a: hint_enum(str(etc)); // should be "x", not "etc"
$ // voluntary error to log preprocessor output
```
### Minimal reproduction project (MRP)
N/A | enhancement,topic:shaders | low | Critical |
2,493,262,632 | go | proposal: slices: ChunkFunc to determine chunks of a slice using a function | ### Proposal Details
I propose adding a new function to the `slices` package that would return an iterator yielding variable-length subslices of a given slice based on a provided function. There are two possible implementations of this.
Version 1
-------------
The first, which I have personally used several times before and which is similar to [a function in the Elixir standard library](https://hexdocs.pm/elixir/Enum.html#chunk_by/2), looks like
```go
func ChunkFunc[T any, C comparable, S ~[]T](s S, chunker func(T) C) iter.Seq[[]T]
```
This function returns an iterator that yields subslices of `s` composed of consecutive runs of elements for which `chunker` returns the same value. In other words,
```go
ChunkFunc([]int{-1, -2, -3, 1, 2, -1, 3, 2, 1}, func(v int) bool { return v > 0 })
```
would yield `[]int{-1, -2, -3}`, `[]int{1, 2}`, `[]int{-1}`, and then `[]int{3, 2, 1}`. This is useful for a number of different things. For example, let's say that you have a slice of lines of output from something, some of which are to stdout and some to stderr. If you want to output those with a header to indicate which is which, being able to group the consecutive runs of lines that were to each is very useful, and a function like this can do it quite efficiently.
```go
groups := slices.ChunkFunc(outputs, func(output Output) string { return output.Destination })
for group := range groups {
	fmt.Printf("%v:\n", group[0].Destination) // Like Chunk(), never yields an empty slice.
	for _, output := range group {
		fmt.Println(output.Text)
	}
}
```
Version 2
--------------
The other possible implementation is to simply make the function always return a `bool`, and then define that each time it returns `true` is the start of a new chunk.
While this is not a function that I've personally had a use for, I'm not firmly attached to either version, since I'm pretty sure each can be implemented in terms of the other. I think it's probably easier to implement the `comparable` one using the `bool` version, though, as it would simply be something like the following untested function:
```go
func ChunkBy[T any, C comparable, S ~[]T](s S, chunker func(T) C) iter.Seq[[]T] {
	return func(yield func([]T) bool) {
		first := true
		var prev C
		chunks := ChunkFunc(s, func(v T) bool {
			check := chunker(v)
			if first {
				first = false
				prev = check
				return true
			}
			start := check != prev
			prev = check
			return start
		})
		chunks(yield)
	}
}
```
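To pin down the Version 1 grouping semantics concretely, here is the intended behavior sketched in Python (purely illustrative; the proposal itself is a lazy Go iterator):

```python
def chunk_func(s, chunker):
    """Yield consecutive runs of elements for which chunker returns the same value."""
    chunk = []
    for v in s:
        if chunk and chunker(v) != chunker(chunk[-1]):
            yield chunk
            chunk = []
        chunk.append(v)
    if chunk:
        yield chunk  # like the proposed Go version, an empty chunk is never yielded

# Matches the example above:
runs = list(chunk_func([-1, -2, -3, 1, 2, -1, 3, 2, 1], lambda v: v > 0))
# runs == [[-1, -2, -3], [1, 2], [-1], [3, 2, 1]]
```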
Version 3
-------------
Another alternative is to simply provide both as, say, `ChunkFunc()` for the `bool` version and `ChunkBy()` for the `comparable` one. | Proposal | low | Major |
2,493,262,875 | vscode | Updater running then got an error | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
- VS Code Version: 1.92.1
- OS Version: 10.0.22631 Build 22631
Steps to Reproduce:
1. Check for Update
2. Restart for Update
<img width="860" alt="Screenshot 2024-08-29 at 09 27 46" src="https://github.com/user-attachments/assets/e8113de2-b8fc-4741-8aa0-c40866a31469">
Log snippet:
Aug 28 01:31:35.512 INFO Get file handle: "C:\\Users\\***\\AppData\\Local\\Programs\\Microsoft VS Code\\debug.log" (attempt 14)
Aug 28 01:31:45.313 INFO Get file handle: "C:\\Users\\***\\AppData\\Local\\Programs\\Microsoft VS Code\\debug.log" (attempt 15)
Aug 28 01:31:56.564 INFO Get file handle: "C:\\Users\\***\\AppData\\Local\\Programs\\Microsoft VS Code\\debug.log" (attempt 16)
Aug 29 01:16:42.908 ERRO Failed to create file handle: The process cannot access the file because it is being used by another process.
| info-needed,install-update,windows | low | Critical |
2,493,293,487 | go | runtime: binaries should fail at startup when built with a GOARM64 version not supported on the runtime hardware | ### Go version
go version go1.23.0 linux/arm64
### Output of `go env` in your module/workspace:
```shell
Not relevant
```
### What did you do?
On an arm64 host, I ran
GOARM64=v9.5 go test
on a simple hello world test.
### What did you see happen?
The test ran and passed.
### What did you expect to see?
This is on an AWS c7g instance. This is a graviton3 chip. I'm not 100% sure what the exact arm64 architecture version of that chip is (anyone know a good way to tell?), but I'm pretty sure it's v8.x. Someone on Hacker News claims that graviton3 is v8.4. In `/proc/cpuinfo` I see
```
processor : 0
BogoMIPS : 2100.00
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng
CPU implementer : 0x41
CPU architecture: 8
CPU variant : 0x1
CPU part : 0xd40
CPU revision : 1
```
Anyway, I'm pretty sure that it does not support arm64 v9.5. Yet, the binary I compiled with GOARM64=v9.5 does not check for CPU capabilities and exit on startup.
By contrast, on my local machine (AMD Ryzen 9 3900X) if I run
```
$ GOAMD64=v4 go test
This program can only be run on AMD64 processors with v4 microarchitecture support.
exit status 1
```
because my chip only supports GOAMD64=v3.
From the [CL 559555](https://go.dev/cl/559555) it sounds like GOARM64 support isn't being used in the compiler to do anything yet. However, it's still important that we fix this, because the `arm64.vX.Y` build tags are available to user code. I am able to write code today using (for example) the `arm64.v9.3` build tag to guard some v9.3-specific feature; when I run this binary on a v8.0 CPU it won't crash on startup but will hit some invalid instruction later.
P.S. It would be good if someone would update some of the wiki pages for GOARM64; in particular, [MinimumRequirements](https://go.dev/wiki/MinimumRequirements#amd64) and [GoArm](https://go.dev/wiki/GoArm). I found it a little hard to get info about GOARM64; as far as I can tell, besides the Go 1.23 release notes, the only places it's mentioned are `go help environment` and `go help buildconstraint`.
/cc @andreybokhanko @cherrymui
| NeedsFix,arch-arm64,compiler/runtime,FixPending | low | Critical |
2,493,341,452 | next.js | How to update current route data by using revalidateTag in the server action?? | ### Link to the code that reproduces this issue
https://github.com/takakikasuga/revalidate-app
### To Reproduce
Now, I'm studying the App Router and the Next.js caches (the data cache and so on)...
I've tried the various fetch options and revalidating by tag and by path.
Today, I used [revalidateTag](https://nextjs.org/docs/app/api-reference/functions/revalidateTag) in a server action. I want to update only the data associated with that tag, and I don't expect data outside the tag to be updated.
In the video below, I revalidated the `layout` tag in ActionLayout, but afterwards data for other tags, like the `page` tag in ActionPage and the `root` tag in RootLayout, was updated as well.
I don't know why this happened...
Additionally, after the data for all tags had been updated, reloading the browser showed the `layout` tag updated correctly, but the other tags, like the `page` tag and the `root` tag, weren't updated and displayed old cached data...
https://github.com/user-attachments/assets/357a2961-82d1-4e79-913e-c1d6193a1043
How can I update the current route's data using only revalidateTag??? 👀
That said, there is one way I did succeed in updating only the specified tag's data.
If I call the [redirect function](https://nextjs.org/docs/app/building-your-application/routing/redirecting#redirect-function) after revalidating the tag, redirecting to the current route, I get exactly what I expected: the `layout` tag is updated, and the other tags, like the `page` tag and the `root` tag, are not, as shown in the video below.
But I don't know whether this implementation is correct, because this pattern isn't written in the official Next.js documentation.
(I referred to this [YouTube video](https://www.youtube.com/watch?v=-mPm2IRkacM).)
https://github.com/user-attachments/assets/142c9c49-284e-4c42-92cb-b57dae325743
I love Next.js, thank you for your project and all your effort on behalf of software engineers!!
### Current vs. Expected behavior
I want to update the tagged data, and only the tagged data, by using revalidateTag alone, that is, without redirect.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: x64
Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
Available memory (MB): 65536
Available CPU cores: 10
Binaries:
Node: 20.14.0
npm: 10.7.0
Yarn: 1.22.22
pnpm: N/A
Relevant Packages:
next: 14.2.7 // Latest available version is detected (14.2.7).
eslint-config-next: 14.2.7
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
create-next-app
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | create-next-app,bug | low | Minor |
2,493,348,789 | rust | Large amounts of repeated data in debug info | In https://github.com/rust-lang/rust/pull/128861#issuecomment-2292401581 I teased @nnethercote with the promise of more debug info inefficiency. One example is that the .debug_ranges section that specifies the ranges over which DWARF constructs such as functions, variables, etc are valid contains large amounts of repeated data. Rust's love of inlining and zero cost abstractions tends to produce repeated ranges.
When built with debuginfo-level = 2, tip Rust's librustc_driver.so has approximately 2.1 million entries in .debug_ranges. There are only approximately 1.1 million unique entries though. Doing the dumbest possible thing in LLVM (checking in DwarfFile::add_range to see if the new range is exactly equal to the last range, and not adding a new entry if it is) eliminates virtually all duplicated ranges (less than 1k remain) and results in a 43% reduction in the size of the .debug_ranges section, or a roughly 1.75% reduction in the size of the .so
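The "dumbest possible thing" amounts to a last-entry equality check when appending a range. Sketched in Python for illustration only (the real change would be C++ in LLVM's DwarfFile::add_range):

```python
def add_range(ranges, new_range):
    # Skip the new entry when it exactly equals the most recently added one;
    # non-adjacent duplicates are left alone, matching the simple check described above.
    if ranges and ranges[-1] == new_range:
        return
    ranges.append(new_range)

entries = []
for r in [(0, 16), (0, 16), (32, 48), (0, 16)]:
    add_range(entries, r)
# entries == [(0, 16), (32, 48), (0, 16)]
```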
@rustbot label A-debuginfo A-llvm I-heavy | A-LLVM,A-debuginfo,T-compiler,C-bug,C-tracking-issue,I-heavy | low | Critical |
2,493,353,937 | transformers | Is it possible to infer the model separately through encoder.onnx and decoder.onnx | ### Feature request
Is it possible to infer the model separately through encoder.onnx and decoder.onnx?
### Motivation
Is it possible to infer the model separately through encoder.onnx and decoder.onnx?
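For context, the split being asked about is the usual encoder/decoder loop: run encoder.onnx once, then run decoder.onnx once per generated token. A rough Python sketch of that control flow, with stand-in callables where real code would call `onnxruntime.InferenceSession(...).run(...)` (the names and signatures here are illustrative, not the actual exported graphs'):

```python
def greedy_decode(run_encoder, run_decoder_step, input_ids, bos_id, eos_id, max_new_tokens=16):
    # encoder.onnx would be run exactly once per input
    encoder_out = run_encoder(input_ids)
    tokens = [bos_id]
    for _ in range(max_new_tokens):
        # decoder.onnx would be run once per generated token,
        # conditioned on the encoder output and the tokens so far
        next_id = run_decoder_step(encoder_out, tokens)
        tokens.append(next_id)
        if next_id == eos_id:
            break
    return tokens

# Toy stand-ins just to exercise the loop: the "encoder" sums the input,
# the "decoder" counts down toward the EOS id (0).
out = greedy_decode(
    run_encoder=lambda ids: sum(ids),
    run_decoder_step=lambda enc, toks: max(enc - len(toks), 0),
    input_ids=[1, 2], bos_id=-1, eos_id=0,
)
# out == [-1, 2, 1, 0]
```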
### Your contribution
Is it possible to infer the model separately through encoder.onnx and decoder.onnx? | Feature request | low | Major |
2,493,381,646 | pytorch | DISABLED test_record_stream (__main__.TestCuda) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_record_stream&suite=TestCuda&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/29396070465).
Over the past 3 hours, it has been determined flaky in 19 workflow(s) with 57 failures and 19 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_record_stream`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/test/test_cuda.py", line 668, in test_record_stream
t = torch.FloatTensor([1, 2, 3, 4]).pin_memory()
RuntimeError: Host and device pointer dont match with cudaHostRegister. Please dont use this feature by setting PYTORCH_CUDA_ALLOC_CONF=use_cuda_host_register:False (default)
To execute this test, run the following from the base repo dir:
python test/test_cuda.py TestCuda.test_record_stream
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `test_cuda.py`
cc @ptrblck @msaroufim @clee2000 | module: cuda,triaged,module: flaky-tests,skipped | low | Critical |
2,493,391,112 | PowerToys | eliminate black border in a program | ### Description of the new feature / enhancement

### Scenario when this would be used?
For example, I'm watching a movie or even playing a game, but black borders always appear. It's not a misconfiguration; rather, some programs are not adapted to remove these borders.
The feature would identify the black borders and stretch the image to fit, taking up the entire monitor screen.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,493,400,590 | material-ui | [Joy-ui][Button] The default cursor style is incorrect when the button is disabled | ### Steps to reproduce
Link to live example: (required) https://stackblitz.com/edit/vitejs-vite-p6sbgy
A very small detail but it makes people uncomfortable
Steps:
1.
2.
3.
### Current behavior
_No response_
### Expected behavior
_No response_
### Context
_No response_
### Your environment
<details>
<summary><code>npx @mui/envinfo</code></summary>
```
Don't forget to mention which browser you used.
Output from `npx @mui/envinfo` goes here.
```
</details>
**Search keywords**: button cursor | on hold,component: button,design | low | Minor |
2,493,436,952 | neovim | Windows: crash on exit, leaving the editor screen on the terminal | ### Problem
Platform: Windows 10
Neovim version: v0.10.1
Terminal: Windows Terminal
Shell: Powershell v7.4.5, git bash on Windows, cmd.exe
Crashes on exit occasionally happen on Windows since v0.10.0. It leaves the neovim editor screen on the terminal (as an indication of crashing), though `clear` does clear the crashing editor. The following consequences always happen after neovim crashes:
- Windows terminal cannot scroll
- creates `stdpath('data')/shada/main.tmp.shada`
- creates `stdpath('data')/swap/%C%...%file_edited_when_crash.swp`
### Steps to reproduce
1. `nvim`
2. open telescope and select a file or :e some_file
3. exit neovim by :q
4. neovim crashes and leaves the file content (entire editor) on the terminal screen
Here are two verbose logs when crashes happen after running `nvim -Vlog` and then `nvim -Vanotherlog` for the same file:
[log.txt](https://github.com/user-attachments/files/16792118/log.txt)
[anotherlog.txt](https://github.com/user-attachments/files/16792119/anotherlog.txt)
### Expected behavior
Do not crash
### Neovim version (nvim -v)
v0.10.1
### Vim (not Nvim) behaves the same?
does not use vim
### Operating system/version
Windows 10
### Terminal name/version
Windows Terminal 1.21.1772.0
### $TERM environment variable
xterm-256color
### Installation
scoop | platform:windows,bug-crash | low | Critical |
2,493,449,589 | flutter | Android native view scroll flickering using `PlatformViewLink` | ### Steps to reproduce
Embedding a native Android view via `AndroidViewSurface` and, alternatively, `AndroidView`.
### Expected results
Scrolling is smooth and flicker free.
### Actual results
When scrolling inside a native Android view using `PlatformViewLink`, the content starts flickering and sometimes even disappears completely. This is especially noticeable when expanding the keyboard or scrolling slowly.
When using `AndroidView` the scrolling is smooth at any given moment. No flickering is noticeable. No issues with `AndroidView`.
### Code sample
<details open><summary>Code sample</summary>
Implementation via `PlatformViewLink`:
```dart
PlatformViewLink(
  viewType: 'gecko-view',
  surfaceFactory: (
    context,
    controller,
  ) {
    return AndroidViewSurface(
      controller: controller as AndroidViewController,
      gestureRecognizers: const <Factory<OneSequenceGestureRecognizer>>{},
      hitTestBehavior: PlatformViewHitTestBehavior.opaque,
    );
  },
  onCreatePlatformView: (PlatformViewCreationParams params) {
    return PlatformViewsService.initExpensiveAndroidView(
      id: params.id,
      viewType: 'gecko-view',
      layoutDirection: TextDirection.ltr,
      creationParams: {},
      creationParamsCodec: const StandardMessageCodec(),
    )
      ..addOnPlatformViewCreatedListener(params.onPlatformViewCreated)
      ..create();
  },
),
```
Implementation via `AndroidView`:
```dart
body: AndroidView(
  viewType: 'gecko-view',
  creationParams: null,
  creationParamsCodec: const StandardMessageCodec(),
  gestureRecognizers: const <Factory<OneSequenceGestureRecognizer>>{},
),
```
Complete sample application:
https://github.com/FaFre/geckoview_example
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
Scrolling inside `PlatformViewLink`:
https://github.com/user-attachments/assets/c3bfdf15-763c-498f-b385-333504efb9d4
Scrolling inside `AndroidView`:
https://github.com/user-attachments/assets/9e263c78-2257-4bc2-b1ec-41f1432dd140
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.1, on Manjaro Linux 6.6.46-1-MANJARO, locale en_US.UTF-8)
• Flutter version 3.24.1 on channel stable at /home/fafre/Software/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 5874a72aa4 (8 days ago), 2024-08-20 16:46:00 -0500
• Engine revision c9b9d5780d
• Dart version 3.5.1
• DevTools version 2.37.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /opt/android-sdk
• Platform android-34, build-tools 34.0.0
• ANDROID_HOME = /opt/android-sdk
• ANDROID_SDK_ROOT = /opt/android-sdk
• Java binary at: /opt/android-studio/jbr/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Chrome - develop for the web
• CHROME_EXECUTABLE = chromium
[✓] Linux toolchain - develop for Linux desktop
• clang version 18.1.8
• cmake version 3.30.2
• ninja version 1.12.1
• pkg-config version 2.1.1
[✓] Android Studio (version 2023.3)
• Android Studio at /opt/android-studio
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] Connected device (3 available)
• M2012K11AG (mobile) • 859ef07c • android-arm64 • Android 14 (API 34)
• Linux (desktop) • linux • linux-x64 • Manjaro Linux 6.6.46-1-MANJARO
• Chrome (web) • chrome • web-javascript • Chromium 128.0.6613.84 Arch Linux
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| e: device-specific,platform-android,framework,f: scrolling,a: platform-views,P3,team-android,triaged-android | low | Major |
2,493,502,189 | PowerToys | Always on top doesnt deactivate if there are multiple windows of the same application | ### Microsoft PowerToys version
0.83.0
### Installation method
GitHub
### Running as admin
Yes
### Area(s) with issue?
Always on Top
### Steps to reproduce
Have multiple tabs of Opera GX open, then turn on Always on Top for it, then try to deactivate it using the shortcut.
### ✔️ Expected Behavior
When I use the shortcut, it should turn off Always on Top.
### ❌ Actual Behavior
Always on Top just shuffles through the different windows of the same app.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,493,542,042 | svelte | Svelte 5: export generic `EventHandler` type | ### Describe the problem
Since [DOM event forwarding is removed](https://svelte-5-preview.vercel.app/docs/event-handlers#bubbling-events), event handlers must be typed in props:
> Instead of doing `<button on:click>` to 'forward' the event from the element to the component, the component should accept an `onclick` callback prop.
Svelte [exports multiple event handlers](https://github.com/sveltejs/svelte/blob/svelte%405.0.0-next.241/packages/svelte/elements.d.ts#L46-L62), but the generic `EventHandler` is not one of them.
However, it is required to define these event handlers:
```ts
// Document Events
onvisibilitychange?: EventHandler<Event, T> | undefined | null;
// Global Events
onclose?: EventHandler<Event, T> | undefined | null;
oncancel?: EventHandler<Event, T> | undefined | null;
```
### Describe the proposed solution
1. Export `EventHandler` as is
2. Export `GenericEventHandler` as `EventHandler<Event, T>`
```ts
// Export this,
type EventHandler<E extends Event = Event, T extends EventTarget = Element> = (
event: E & { currentTarget: EventTarget & T }
) => any;
// Or export this
export type GenericEventHandler<T extends EventTarget> = EventHandler<Event, T>;
// Existing EventHandlers with generic type:
export type FormEventHandler<T extends EventTarget> = EventHandler<Event, T>;
export type ChangeEventHandler<T extends EventTarget> = EventHandler<Event, T>;
```
### Importance
nice to have | types / typescript | low | Major |
2,493,606,329 | TypeScript | `@deprecated` nested namespace handling is buggy | ### 🔎 Search Terms
deprecated nested namespace
### 🕗 Version & Regression Information
- This is the behavior in every version I tried
### ⏯ Playground Link
https://www.typescriptlang.org/play/?#code/FAegVGAEACAmCmAHATvAxgQwC71pMIwAdhgLbwDOiGa8kAYgPaMCMAdAEIbKQDewkSPAAeiRsiyQ0jIhUnDIAXkgsA3MAC+wYmUrVaDZuy4AvPgKGjxk6bPlKV6rcCatO3NsPWvjGE5-VtUBAQoJ1yKho6VwAmdzN+QRExCSkZOUgFZTVNbXAoOCRUTBw8AnC9KMNGOK4eRMsUm3T7bKcg2PdkAJdmWr8e7SA
### 💻 Code
```ts
/** @deprecated */
namespace Foo1.Bar {
export const x = 1;
}
namespace Foo1.Baz {
export const x = 1;
}
Foo1.Bar.x;
Foo1.Baz.x;
```
```ts
namespace Foo2.Baz {
export const x = 1;
}
/** @deprecated */
namespace Foo2.Bar {
export const x = 1;
}
Foo2.Bar.x;
Foo2.Baz.x;
```
### 🙁 Actual behavior
Usages of `Foo1` are struck through

But usages of `Foo2` are not struck through

Yet intellisense shows the tag in the hover for `Foo2`

### 🙂 Expected behavior
Either both `Foo1` and `Foo2` should be treated as deprecated, or neither should and instead the `Bar` in `Foo1.Bar` and `Foo2.Bar` should be treated as deprecated.
### Additional information about the issue
@typescript-eslint recently released a new lint rule [`no-deprecated`](https://typescript-eslint.io/rules/no-deprecated/) which reports a lint error whenever you use a thing marked with `@deprecated`.
A user mentioned to us that the rule reports incorrectly in certain cases (https://github.com/typescript-eslint/typescript-eslint/issues/9902).
Specifically in a case like this
https://github.com/DefinitelyTyped/DefinitelyTyped/blob/d1c172bdcfd4508405f4b233e11ccd7d8743f763/types/chrome/index.d.ts#L8553-L8556
We did some investigation and found that things can be pretty ambiguous when nested namespace declarations are marked `@deprecated` like this and TS itself struggles with it.
---
It's worth noting that the above behaviour [is consistent between nested namespace shorthand (`namespace Foo.Bar`) and the non-shorthand style](https://www.typescriptlang.org/play/?#code/FAegVGAEACAmCmAHATvAxgQwC71pMIwAdhgLbwDOiGa8kAYgPaMCMkA3sJJPAB6KNkWSCXJUadAEIZkHLtx79BwtIyIVhvSAF5ILANzyAvsBPEylarQbM2nbnwFCRF8dekAvOQsVOVajUgtXQNjU2BgJlYAOmlkaN5DKJZYjA8EwwjQEBys8zErOiiAJm9fZRcCiUhPMoclZ1V1TR09Q24TM3AoOCRUTBw8AnzLapKyxwrRUfcZOvLGgJaQ9shOrJLU+MTI5mLU9J2IoA)
Note you see similar behaviour for many declaration-merged things where TS's handling only marks something as deprecated if the first definition is marked as deprecated, eg [Enums](https://www.typescriptlang.org/play/?#code/N4KABGCmB2CuC2YBiB7FZQQgQTAXjAEYAacMAXzIHoAqGsAAQBNIAHAJ0gGMBDAF0hMwNKmRgJkaDGQgAhfGABMpCJTJcU0AM58wADwWoUAOmwBudZp1gAnobTHZFtVVcgQmMLXrM2nXgJCImJwiEbSWGC4BCRklBDiYVKecgrKce6W2roGBEamFhAa2bb2Jk4glEA).
But there are also cases where TS gets it right, eg [Interfaces](https://www.typescriptlang.org/play/?#code/N4KABGCWB2AuCmAnAZgQwMbzAMQPa7FAggEEAuMARgBpwwBfOgegCoWwABAE3gAdF46VAi5gWTOjAQoMWPASLEAQhQBMtCIzqwAnrywAPMAF4c+ANoByEpYC6Abm16sOk2dxWldx1qZ+QIIqs7Nx8AkIiYhIQUkhomO6EdKQUNHSMMXBxsomKECpg6ukBTvpgRqbyVjYOpS5uVZZetYxAA)
Then there are weird cases like [type/value shadowing](https://www.typescriptlang.org/play/?#code/N4KABGDGD2B2DOAXMAxa0wF4yghAggFxgCMANOGAL4DclA9AFSNgACAJgKYAOATp5ACGiTuzCN6lAJawRvAGaDInVOhyUIAIWIAmChCohKMBMgAeWVdADaAcny2AunQiIAntxVvLaG7c1OdIYg9KFGuGBMLBw8-EIiYhLGcEhWlhEExOSUtNKynApKKr7qeGDaYHo5RsmmYBbYvnYOzpTunmDejeh2Aa2GQA) where I'm not sure if TS is right or wrong -- depends on what you expect I guess.
---
I would love to see some clarification on what you think the correct behaviour is, so that we can follow suit!
| Help Wanted,Possible Improvement | low | Critical |
2,493,614,944 | godot | C++ can not expose properly typed properties that return Script classes | ### Tested versions
4.3-stable
### System information
Windows 11
### Issue description
I am currently trying to add some properties to a C++ class that return GDScript classes. While everything is linked correctly in the documentation and autocomplete is working,

trying to access the properties or casting them to the GDScript type results in

Maybe I am doing something wrong but it seems the issue is this part here:
https://github.com/godotengine/godot/blob/ce8a837aab2568f9cdc41b4b410c478b0cd711fc/modules/gdscript/gdscript_analyzer.cpp#L5710
Where `p_source` is `SCRIPT` and `p_target` is `CLASS`
When I implement the same functionality in GDScript, both `p_source` and `p_target` are `CLASS`
### Steps to reproduce
-
### Minimal reproduction project (MRP)
```
class ManagerMethodBind : public MethodBind
{
protected:
Variant::Type _gen_argument_type(int p_arg) const override
{
return Variant::OBJECT;
}
PropertyInfo _gen_argument_type_info(int p_arg) const override
{
return PropertyInfo(m_class);
}
public:
#ifdef DEBUG_METHODS_ENABLED
GodotTypeInfo::Metadata get_argument_meta(int p_arg) const override
{
return GodotTypeInfo::METADATA_NONE;
}
#endif
Variant call(Object* p_object, const Variant** p_args, int p_arg_count, Callable::CallError& r_error) const override
{
return static_cast<Manager*>(p_object)->get_manager(m_name);
}
void validated_call(Object* p_object, const Variant** p_args, Variant* r_ret) const override
{
*r_ret = static_cast<Manager*>(p_object)->get_manager(m_name);
}
void ptrcall(Object* p_object, const void** p_args, void* r_ret) const override
{
*static_cast<Object**>(r_ret) = static_cast<Manager*>(p_object)->get_manager(m_name);
}
ManagerMethodBind(StringName name, StringName cls)
: m_class(cls)
{
m_name =Manager::get_manager_name(cls);
set_name(name);
_set_returns(true);
_generate_argument_types(0);
set_argument_count(0);
}
private:
String m_name;
StringName m_class;
};
void get_script_managers(const String& p_class, List<StringName>& r_inheriters)
{
Array script_classes = ProjectSettings::get_singleton()->get_global_class_list();
for (int i = 0; i < script_classes.size(); i++)
{
Dictionary c = script_classes[i];
if (!c.has("class") || !c.has("language") || !c.has("path") || !c.has("base"))
{
continue;
}
if (String(c["base"]) == p_class)
{
r_inheriters.push_back(c["class"]);
}
}
}
void Manager::_bind_methods()
{
auto bind_manager = [](const StringName& mgr)
{
const auto method_name = "get_" + get_manager_name(mgr);
ClassDB::bind_method_custom(get_class_static(), memnew(ManagerMethodBind(method_name, mgr)));
ADD_PROPERTY(PropertyInfo(Variant::OBJECT, get_manager_name(mgr),
PROPERTY_HINT_RESOURCE_TYPE, mgr, PROPERTY_USAGE_READ_ONLY), "", method_name.utf8());
};
List<StringName> game;
ClassDB::get_inheriters_from_class("GameManager", &game);
get_script_managers("GameManager", game);
ADD_GROUP("Game", "");
for (const auto& mgr : game)
{
bind_manager(mgr);
}
}
| bug,topic:gdscript,needs testing,topic:gdextension | low | Critical |
2,493,623,777 | PowerToys | Easy Access to Cursor Workspaces via Shortcut or Menu Option | ### Description of the new feature / enhancement
I would like to request a feature that allows users to quickly open Cursor Workspaces through a dedicated shortcut or a context menu option within the application. This feature would streamline the process of switching between workspaces, enhancing the efficiency of using Cursor for various development projects.
### Scenario when this would be used?
This feature would be particularly useful for developers who manage multiple projects or workspaces in Cursor. By having an easy way to switch or open workspaces, users can save time and maintain focus on their work without navigating through multiple menus or options. It’s crucial for power users who handle complex or concurrent development tasks across different workspaces.
### Supporting information
Cursor is an AI-powered code editor built on top of VSCode. The requested feature is essentially the same as the existing VSCode Workspaces plugin, but adapted for Cursor. Providing this feature would align Cursor with other modern code editors that offer quick access to workspaces, further solidifying Cursor’s reputation as a versatile and developer-friendly tool. Evidence of similar functionality in other editors like VSCode has proven to increase productivity and user satisfaction. | Needs-Triage | low | Minor |
2,493,629,846 | pytorch | "Attempted to send CUDA tensor received from another process" even though that was not the case | ### 🐛 Describe the bug
Consider the simple example, written by following all the guidelines regarding multiprocessing and torch:
```python
import torch.multiprocessing as mp
import logging
from logging.handlers import QueueHandler, QueueListener
import traceback
import torch
def image_writer(queue, i, logging_queue):
logger = logging.getLogger("foobar")
logger.setLevel(logging.INFO)
handler = QueueHandler(logging_queue)
handler.setLevel(logging.INFO)
logger.addHandler(handler)
n = 0
try:
logger.info(f"[{i}] Starting write loop")
while True:
item = queue.get()
if item is None:
logger.info(f"[{i}] Exiting write loop")
break
logger.info(f"[{i}] got item {item}")
n += 1
except Exception:
logger.error(f"[{i}] {traceback.format_exc()}")
raise
finally:
logger.debug(f"[{i}] Exiting write loop")
return i, n
if __name__ == '__main__':
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
h2 = logging.StreamHandler()
f2 = logging.Formatter('MAIN: %(message)s')
h2.setFormatter(f2)
logger.addHandler(h2)
logger.info("Start")
manager = mp.get_context('spawn').Manager()
_logging_queue = manager.Queue()
_logging_listener = QueueListener(_logging_queue, *logger.handlers)
_logging_listener.start()
queue = manager.Queue()
_n_writers = 2
pool = manager.Pool(_n_writers)
pool_results = []
for i in range(_n_writers):
pool_results.append(pool.apply_async(image_writer, (queue, i, _logging_queue)))
for i in range(10):
t = torch.tensor([42+i]).cuda()
# clone = t.cpu().share_memory_() # This works
clone = t.clone() # This doesn't
queue.put(clone)
for i in range(_n_writers):
queue.put(None)
for r in pool_results:
i, n = r.get(10)
logger.info(f"Process {i} Received {n} items")
logger.info("Stopping")
_logging_listener.stop()
manager.shutdown()
```
Running it I get something like this:
```shell
MAIN: Start
MAIN: [0] Starting write loop
MAIN: [1] Starting write loop
MAIN: [0] Traceback (most recent call last):
File "/tmp/test.py", line 19, in image_writer
item = queue.get()
^^^^^^^^^^^
File "<string>", line 2, in get
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/multiprocessing/managers.py", line 836, in _callmethod
raise convert_to_error(kind, result)
multiprocessing.managers.RemoteError:
---------------------------------------------------------------------------
Unserializable message: Traceback (most recent call last):
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/multiprocessing/managers.py", line 308, in serve_client
send(msg)
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/multiprocessing/connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/site-packages/torch/multiprocessing/reductions.py", line 322, in reduce_tensor
) = storage._share_cuda_()
^^^^^^^^^^^^^^^^^^^^^^
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/site-packages/torch/storage.py", line 1200, in _share_cuda_
return self._untyped_storage._share_cuda_(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Attempted to send CUDA tensor received from another process; this is not currently supported. Consider cloning before sending.
---------------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/test.py", line 68, in <module>
i, n = r.get(10)
^^^^^^^^^
File "<string>", line 2, in get
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/multiprocessing/managers.py", line 836, in _callmethod
raise convert_to_error(kind, result)
multiprocessing.managers.RemoteError:
---------------------------------------------------------------------------
Unserializable message: Traceback (most recent call last):
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/multiprocessing/managers.py", line 308, in serve_client
send(msg)
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/multiprocessing/connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/site-packages/torch/multiprocessing/reductions.py", line 322, in reduce_tensor
) = storage._share_cuda_()
^^^^^^^^^^^^^^^^^^^^^^
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/site-packages/torch/storage.py", line 1200, in _share_cuda_
return self._untyped_storage._share_cuda_(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Attempted to send CUDA tensor received from another process; this is not currently supported. Consider cloning before sending.
---------------------------------------------------------------------------
MAIN: [1] Traceback (most recent call last):
File "/tmp/test.py", line 19, in image_writer
item = queue.get()
^^^^^^^^^^^
File "<string>", line 2, in get
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/multiprocessing/managers.py", line 836, in _callmethod
raise convert_to_error(kind, result)
multiprocessing.managers.RemoteError:
---------------------------------------------------------------------------
Unserializable message: Traceback (most recent call last):
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/multiprocessing/managers.py", line 308, in serve_client
send(msg)
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/multiprocessing/connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/site-packages/torch/multiprocessing/reductions.py", line 322, in reduce_tensor
) = storage._share_cuda_()
^^^^^^^^^^^^^^^^^^^^^^
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/site-packages/torch/storage.py", line 1200, in _share_cuda_
return self._untyped_storage._share_cuda_(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Attempted to send CUDA tensor received from another process; this is not currently supported. Consider cloning before sending.
---------------------------------------------------------------------------
Exception in thread Thread-1 (_monitor):
Traceback (most recent call last):
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/threading.py", line 1073, in _bootstrap_inner
self.run()
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/threading.py", line 1010, in run
self._target(*self._args, **self._kwargs)
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/logging/handlers.py", line 1574, in _monitor
record = self.dequeue(True)
^^^^^^^^^^^^^^^^^^
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/logging/handlers.py", line 1523, in dequeue
return self.queue.get(block)
^^^^^^^^^^^^^^^^^^^^^
File "<string>", line 2, in get
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/multiprocessing/managers.py", line 821, in _callmethod
kind, result = conn.recv()
^^^^^^^^^^^
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/multiprocessing/connection.py", line 250, in recv
buf = self._recv_bytes()
^^^^^^^^^^^^^^^^^^
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/multiprocessing/connection.py", line 430, in _recv_bytes
buf = self._recv(4)
^^^^^^^^^^^^^
File "/home/paperspace/.conda/envs/EDVR/lib/python3.12/multiprocessing/connection.py", line 399, in _recv
raise EOFError
EOFError
[W828 23:57:56.977435534 CudaIPCTypes.cpp:16] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
```
All the tensors I am putting in the queue are clearly not coming from another process, and should not even require cloning. But even forcing a clone, as the error message suggests, does not resolve the problem. What am I doing wrong here?
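For what it's worth, the traceback shows the failing `_share_cuda_()` call happening inside `managers.py serve_client`, i.e. inside the manager's server process rather than in user code. A `Manager().Queue()` is a proxy: every `put` pickles the payload into the manager process, and every `get` pickles it back out again. The stdlib-only sketch below (no torch or CUDA involved; `Tracer`, `_rebuild`, and `round_trip` are made-up names, and it uses the POSIX-only `fork` start method just to stay self-contained) demonstrates that double round-trip. A plausible reading is therefore that the manager process *receives* the CUDA tensor over IPC and then has to re-share it when a worker calls `get`, which is exactly the "received from another process" case, and which would also explain why the `t.cpu().share_memory_()` variant works.

```python
# Stdlib-only sketch (no torch/CUDA): a Manager().Queue() is a *proxy*, so
# every put/get pickles the payload into and back out of the manager's own
# server process. Tracer records the pid of each process that pickles it;
# after one put/get round-trip it has been pickled twice, the second time
# inside the manager process.
import multiprocessing as mp


def _rebuild(pids):
    t = Tracer.__new__(Tracer)
    t.pickled_in = pids
    return t


class Tracer:
    def __init__(self):
        self.pickled_in = []

    def __reduce__(self):
        # Called on every pickling; remember which process did it.
        return _rebuild, (self.pickled_in + [mp.current_process().pid],)


def round_trip():
    manager = mp.get_context("fork").Manager()  # fork: POSIX only
    q = manager.Queue()
    q.put(Tracer())   # pickled here, unpickled inside the manager process
    out = q.get()     # pickled again in the manager, unpickled back here
    manager.shutdown()
    return out.pickled_in


if __name__ == "__main__":
    print(round_trip())  # [<this pid>, <manager pid>]
```

If this reading is right, the error message's advice to clone is a red herring here: the second send is performed by the manager process, which cannot clone on your behalf.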
### Versions
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31
Python version: 3.12.4 | packaged by conda-forge | (main, Jun 17 2024, 10:23:07) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-117-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.6.124
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 1
Core(s) per socket: 12
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Gold 6342 CPU @ 2.80GHz
Stepping: 6
CPU MHz: 2800.210
BogoMIPS: 5600.76
Hypervisor vendor: Xen
Virtualization type: full
L1d cache: 576 KiB
L1i cache: 384 KiB
L2 cache: 15 MiB
L3 cache: 432 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush acpi mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single intel_ppin ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves umip pku ospke gfni vaes vpclmulqdq rdpid md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.0.1
[pip3] onnx==1.16.1
[pip3] torch==2.4.0
[pip3] torchmetrics==1.4.1
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
[conda] numpy 2.0.1 pypi_0 pypi
[conda] torch 2.4.0 pypi_0 pypi
[conda] torchmetrics 1.4.1 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
cc @VitalyFedyunin | module: multiprocessing,triaged | low | Critical |
2,493,656,815 | vscode | Build failed: Segmentation fault during integration tests | Build: https://dev.azure.com/monacotools/a6d41577-0fa3-498e-af22-257312ff0545/_build/results?buildId=290397
Changes: https://github.com/Microsoft/vscode/compare/ae45c9d...ae45c9d
Also: https://dev.azure.com/monacotools/a6d41577-0fa3-498e-af22-257312ff0545/_build/results?buildId=289982
| freeze-slow-crash-leak | low | Critical |
2,493,742,918 | next.js | hydrate not finished until the whole content is loaded at Stream SSR mode | ### Link to the code that reproduces this issue
https://github.com/HomyeeKing/next-ssr
### To Reproduce
```bash
npm install
npm run dev
```
### Current vs. Expected behavior
I created a simple demo in which the first component waits 2s and the other two components wait 3s to hydrate,
but the mounted time (useEffect) only fires 10s later.
Even if hydration ran sequentially, the total should be 2 + 3 + 3 = 8s, so where do the 10s come from?

I expect each element to become interactive as soon as its Suspense boundary finishes.
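To spell out the timing arithmetic (delay values are the demo's 2s/3s/3s, used here purely for illustration): fully sequential hydration would add up to 8s and fully concurrent hydration to 3s, so neither schedule accounts for the observed 10s.

```python
# Hydration delays used by the demo: first component 2s, the other two 3s each.
delays = [2, 3, 3]

sequential = sum(delays)  # boundaries hydrate one after another
concurrent = max(delays)  # boundaries hydrate in parallel

print(sequential)  # 8
print(concurrent)  # 3
```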
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:21 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T8103
Available memory (MB): 16384
Available CPU cores: 8
Binaries:
Node: 20.16.0
npm: 10.8.2
Yarn: 1.22.22
pnpm: 9.7.1
Relevant Packages:
next: 14.2.7 // Latest available version is detected (14.2.7).
eslint-config-next: 14.2.7
react: 18.3.1
react-dom: 18.3.1
typescript: 5.5.4
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Partial Prerendering (PPR), Performance
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | bug,Performance,Partial Prerendering (PPR) | low | Major |
2,493,743,190 | pytorch | Torch.onnx.dynamo_export Failure in Pytest for BatchNorm | ### 🐛 Describe the bug
Pytest does not work with torch.onnx.dynamo_export() if the model has a batchnorm in it.
As a minimal example with efficientnet_b0, consider running the following code snippet with pytest and then as a plain script: it fails under pytest, even though it succeeds when run as a script.
```python
import torch
from torchvision.models import efficientnet_b0
def test_onnx_export():
test_data = torch.ones(1, 3, 360, 640).to(torch.float32).cuda()
model = efficientnet_b0(pretrained=True).cuda()
model.eval()
torch.onnx.dynamo_export(model, test_data)
if __name__ == "__main__":
test_onnx_export()
```
The second example is a simple CNN. If the batch norm is included in the forward pass, the same error occurs as in the first minimal example; if it is left out, the test succeeds.
```
import torch
class SimpleCNN(torch.nn.Module):
def __init__(self, num_classes):
super(SimpleCNN, self).__init__()
self.conv1 = torch.nn.Conv2d(3, 16, 3, padding=1)
self.bn = torch.nn.BatchNorm2d(16)
self.relu1 = torch.nn.ReLU()
self.pool1 = torch.nn.MaxPool2d(2, 2)
self.fc = torch.nn.Linear(16 * 180 * 320, num_classes)
def forward(self, x):
x = self.conv1(x)
x = self.relu1(x)
x = self.bn(x)
x = self.pool1(x)
x = torch.flatten(x, 1)
x = self.fc(x)
return x
def test_onnx_export():
test_data = torch.ones(1, 3, 360, 640).to(torch.float32).cuda()
model = SimpleCNN(10).cuda()
model.eval()
torch.onnx.dynamo_export(model, test_data)
if __name__ == "__main__":
test_onnx_export()
```
This is the stack trace I get when the tests fail under pytest:
```
I0829 08:05:29.460000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics] For detailed logging of graph modifications by this pass, either set `DiagnosticOptions.verbosity_level` to `logging.DEBUG` or use the environment variable `TORCH_LOGS='onnx_diagnostics'`.
I0829 08:05:29.464000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics] Renamed 'self._param_constant0' to 'self.conv1/weight', normalized from original parameter name 'conv1.weight'.
I0829 08:05:29.465000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics] Renamed 'self._param_constant1' to 'self.conv1/bias', normalized from original parameter name 'conv1.bias'.
I0829 08:05:29.465000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics] Renamed 'self._param_constant2' to 'self.bn/weight', normalized from original parameter name 'bn.weight'.
I0829 08:05:29.465000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics] Renamed 'self._param_constant3' to 'self.bn/bias', normalized from original parameter name 'bn.bias'.
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics] ## Exception log
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics] ```
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics] Traceback (most recent call last):
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics]
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics] File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/diagnostics/infra/decorator.py", line 135, in wrapper
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics] return_values = fn(*args, **kwargs)
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics]
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics] File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/_pass.py", line 275, in run
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics] module = self._run(*args, **kwargs)
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics]
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics] File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/passes/readability.py", line 129, in _run
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics] self._rename_param_and_buffer(diagnostic, nodes, new_name)
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics]
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics] File "/usr/local/lib/python3.10/dist-packages/torch/onnx/_internal/fx/passes/readability.py", line 53, in _rename_param_and_buffer
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics] new_node = self.module.graph.get_attr(normalized_name)
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics]
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics] File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph.py", line 1102, in get_attr
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics] warnings.warn("Attempted to insert a get_attr Node with no "
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics]
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics] UserWarning: Attempted to insert a get_attr Node with no underlying reference in the owning GraphModule! Call GraphModule.add_submodule to add the necessary submodule, GraphModule.add_parameter to add the necessary Parameter, or nn.Module.register_buffer to add the necessary buffer
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics]
E0829 08:05:29.466000 137854302897984 torch/onnx/_internal/fx/diagnostics.py:219] [__onnx_diagnostics] ```
```
### Versions
```
Collecting environment information...
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA T500
Nvidia driver version: 535.183.01
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz
CPU family: 6
Model: 140
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 1
CPU max MHz: 4700.0000
CPU min MHz: 400.0000
BogoMIPS: 5606.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 192 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 5 MiB (4 instances)
L3 cache: 12 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] ament-flake8==0.12.11
[pip3] flake8==4.0.1
[pip3] mypy==1.9.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] onnx==1.16.0
[pip3] onnx-graphsurgeon==0.5.2
[pip3] onnxruntime==1.17.3
[pip3] onnxruntime-gpu==1.17.1
[pip3] onnxscript==0.1.0.dev20240524
[pip3] torch==2.3.1+cu121
[pip3] torch-tb-profiler==0.4.3
[pip3] torchinfo==1.8.0
[pip3] torchtyping==0.1.4
[pip3] torchvision==0.18.1+cu121
[pip3] triton==2.3.1
[conda] Could not collect
``` | module: onnx,triaged,onnx-needs-info | low | Critical |
2,493,757,306 | yt-dlp | Unsupported Tiktok URL including photo with sound | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
I tried to download a link from TikTok which is a photo with music rather than a normal video, and I got an error message.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
c:\>yt-dlp.exe -vU https://www.tiktok.com/@croatian_memes/photo/7406655666472979745?_d=secCgYIASAHKAESPgo8MVFrKEFm1YROLgGqS3uoOc8zwKGbUfXIRJhN05eIKiY6ag6I57DMdCc2w2vmsPhsRrjRujTn6sZCjBxMGgA%3D
[debug] Command-line config: ['-vU', 'https://www.tiktok.com/@croatian_memes/photo/7406655666472979745?_d=secCgYIASAHKAESPgo8MVFrKEFm1YROLgGqS3uoOc8zwKGbUfXIRJhN05eIKiY6ag6I57DMdCc2w2vmsPhsRrjRujTn6sZCjBxMGgA%3D']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: none
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1830 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.08.06 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.08.06 from yt-dlp/yt-dlp)
[generic] Extracting URL: https://www.tiktok.com/@croatian_memes/photo/7406655666472979745?_d=secCgYIASAHKAESPgo8MVFrKEFm1YROLgGqS3uoOc8zwKGbUfXIRJhN05eIKiY6ag6I57DMdCc2w2vmsPhsRrjRujTn6sZCjBxMGgA%3D
[generic] 7406655666472979745?_d=secCgYIASAHKAESPgo8MVFrKEFm1YROLgGqS3uoOc8zwKGbUfXIRJhN05eIKiY6ag6I57DMdCc2w2vmsPhsRrjRujTn6sZCjBxMGgA=: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] 7406655666472979745?_d=secCgYIASAHKAESPgo8MVFrKEFm1YROLgGqS3uoOc8zwKGbUfXIRJhN05eIKiY6ag6I57DMdCc2w2vmsPhsRrjRujTn6sZCjBxMGgA=: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://www.tiktok.com/@croatian_memes/photo/7406655666472979745?_d=secCgYIASAHKAESPgo8MVFrKEFm1YROLgGqS3uoOc8zwKGbUfXIRJhN05eIKiY6ag6I57DMdCc2w2vmsPhsRrjRujTn6sZCjBxMGgA%3D
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1626, in wrapper
File "yt_dlp\YoutubeDL.py", line 1761, in __extract_info
File "yt_dlp\extractor\common.py", line 740, in extract
File "yt_dlp\extractor\generic.py", line 2526, in _real_extract
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.tiktok.com/@croatian_memes/photo/7406655666472979745?_d=secCgYIASAHKAESPgo8MVFrKEFm1YROLgGqS3uoOc8zwKGbUfXIRJhN05eIKiY6ag6I57DMdCc2w2vmsPhsRrjRujTn6sZCjBxMGgA%3D
```
| site-enhancement,triage | low | Critical |
2,493,762,508 | pytorch | Need handle `march` config when AoT mode. | https://github.com/pytorch/pytorch/blob/578b8d75e5220a8bad3b4c94e3385f9bf721c1dc/torch/_inductor/cpp_builder.py#L475-L478
As discussed with @jgong5, we need to handle the `march` config in AoT mode.
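For illustration, a minimal sketch of the distinction being asked for: a JIT build can safely use `-march=native` because the compiled kernel runs on the build machine, while an AoT artifact may be deployed elsewhere and should use an explicit (or no) target architecture. The `march_flag` helper and `target_arch` parameter below are hypothetical names, not the actual `cpp_builder.py` API:

```python
from typing import Optional


def march_flag(aot_mode: bool, target_arch: Optional[str] = None) -> str:
    """Pick a -march compiler flag for the generated C++ build.

    JIT mode: the artifact runs on the build machine, so -march=native
    is safe. AoT mode: the artifact may run on a different machine, so
    use an explicit (or no) target ISA instead of the host's.
    """
    if aot_mode:
        return f"-march={target_arch}" if target_arch else ""
    return "-march=native"


print(march_flag(False))            # -march=native
print(march_flag(True))             # empty string: conservative AoT default
print(march_flag(True, "skylake"))  # -march=skylake
```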
cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 | module: cpu,triaged,oncall: pt2,module: inductor,oncall: export,module: aotinductor | low | Minor |
2,493,814,128 | vscode | Allow remote ssh machine to be included/captured in VSCode's "Export profile" feature. | I'd like to share my profile with colleagues, but it doesn't capture settings.json, extensions, and maybe a few other things from my remote development VM; only local VSCode profile data. Maybe the VSCode export/import profile feature could be enhanced to capture a remote ssh machines settings also. Or maybe remote ssh extension could manage profiles separately, since a local VSCode session could have multiple remotes, and another person's remote machines might have different names/addresses, etc. | feature-request,remote,user-profiles | low | Minor |
2,493,824,765 | flutter | [iOS] The `DART_OBFUSCATION` in Flutter Module is always set to `false`. | ### Steps to reproduce
I have an existing iOS project integrated with Flutter code, created using a Flutter Module. I'm not sure if the project created with the Flutter Module supports setting `DART_OBFUSCATION` because every time I run `flutter pub get`, a new `flutter_export_environment.sh` file is generated, and the environment variables in it are fixed. I have no way to change them.
Many solutions for setting `DART_OBFUSCATION` suggest using `flutter build ios --obfuscate`, but my project is a Flutter Module that is integrated as a Pod dependency, and it does not directly depend on the artifacts generated by `flutter build`.
Here are my steps:
1. `flutter pub get`
2. `pod install`
3. Add a Build Phase script in Xcode: `export DART_OBFUSCATION=true`
4. Run in Xcode
### Expected results
`flutter --verbose assemble -dDartObfuscation=true`
### Actual results
The following is the result I see after running Xcode:
> /Users/xxx/flutter/bin/flutter --verbose assemble --no-version-check --output=/Users/xxx/Library/Developer/Xcode/DerivedData/elecq-bucgsyiksxemgddergcycvsvkqby/Build/Products/Debug-iphoneos/ -dTargetPlatform=ios -dTargetFile=lib/main.dart -dBuildMode=debug -dIosArchs=arm64 -dSdkRoot=/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOSxx.2.sdk -dSplitDebugInfo= -dTreeShakeIcons=false -dTrackWidgetCreation=true -dDartObfuscation=false -dAction=build -dFrontendServerStarterPath= --ExtraGenSnapshotOptions= --DartDefines= --ExtraFrontEndOptions= -dCodesignIdentity=F2FF21Bxxxxx9C79E90632DD92BF8473B28 debug_ios_bundle_flutter_assets
As you can see, the result for `-dDartObfuscation` remains `false` and there are no changes.
Upon my observation, I found that every time I run `pod install`, two scripts are generated in the Build Phase. The content of these scripts is already defined in `podhelper.rb`, and I do not have the opportunity to add `DART_OBFUSCATION=true`.
Here is the script code in `podhelper.rb` that sets up the scripts:
```ruby
def install_flutter_application_pod(flutter_application_path)
flutter_application_path ||= File.join('..', '..')
export_script_directory = File.join(flutter_application_path, '.ios', 'Flutter')
# Keep script phase paths relative so they can be checked into source control.
relative = flutter_relative_path_from_podfile(export_script_directory)
flutter_export_environment_path = File.join('${SRCROOT}', relative, 'flutter_export_environment.sh')
# Compile App.framework and move it and Flutter.framework to "BUILT_PRODUCTS_DIR"
script_phase name: 'Run Flutter Build elecq_flutter Script',
script: "set -e\nset -u\nsource \"#{flutter_export_environment_path}\"\nexport VERBOSE_SCRIPT_LOGGING=1 && \"$FLUTTER_ROOT\"/packages/flutter_tools/bin/xcode_backend.sh build",
execution_position: :before_compile
# Embed App.framework AND Flutter.framework.
script_phase name: 'Embed Flutter Build elecq_flutter Script',
script: "set -e\nset -u\nsource \"#{flutter_export_environment_path}\"\nexport VERBOSE_SCRIPT_LOGGING=1 && \"$FLUTTER_ROOT\"/packages/flutter_tools/bin/xcode_backend.sh embed_and_thin",
execution_position: :after_compile
end
```
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.22.1, on macOS 14.2.1 23C71 darwin-arm64, locale zh-Hans-CN)
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 15.2)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2022.3)
[✓] VS Code (version 1.90.2)
[✓] Proxy Configuration
[✓] Connected device (4 available)
[✓] Network resources
• No issues found!
```
</details>
| platform-ios,tool,a: existing-apps,P2,team-ios,triaged-ios | low | Critical |
2,493,829,111 | react-native | NullPointerException android.widget.ScrollView in onTouchEvent | ### Description
Attempt to invoke virtual method 'void android.view.VelocityTracker.clear()' on a null object reference
### Steps to reproduce
1. Install the application with `yarn android`
2. Start scrolling up and down
3. Notice the crash
### React Native Version
0.75.2
### Affected Platforms
Runtime - Android
### Output of `npx react-native info`
```text
info Fetching system and libraries information...
System:
OS: macOS 14.6.1
CPU: (10) arm64 Apple M1 Pro
Memory: 93.86 MB / 16.00 GB
Shell:
version: "5.9"
path: /bin/zsh
Binaries:
Node:
version: 20.10.0
path: ~/.nvm/versions/node/v20.10.0/bin/node
Yarn:
version: 3.6.4
path: ~/.nvm/versions/node/v20.10.0/bin/yarn
npm:
version: 10.2.3
path: ~/.nvm/versions/node/v20.10.0/bin/npm
Watchman:
version: 2024.08.19.00
path: /opt/homebrew/bin/watchman
Managers:
CocoaPods:
version: 1.15.2
path: /usr/local/bin/pod
SDKs:
iOS SDK:
Platforms:
- DriverKit 23.5
- iOS 17.5
- macOS 14.5
- tvOS 17.5
- visionOS 1.2
- watchOS 10.5
Android SDK:
API Levels:
- "31"
- "33"
- "34"
Build Tools:
- 30.0.3
- 31.0.0
- 32.0.0
- 33.0.0
- 33.0.1
- 33.0.2
- 34.0.0
System Images:
- android-33 | Google APIs ARM 64 v8a
- android-34 | Google APIs ARM 64 v8a
Android NDK: Not Found
IDEs:
Android Studio: 2024.1 AI-241.18034.62.2411.12169540
Xcode:
version: 15.4/15F31d
path: /usr/bin/xcodebuild
Languages:
Java:
version: 17.0.12
path: /opt/homebrew/opt/openjdk@17/bin/javac
Ruby:
version: 2.6.10
path: /usr/bin/ruby
npmPackages:
"@react-native-community/cli": Not Found
react:
installed: 18.3.1
wanted: 18.3.1
react-native:
installed: 0.75.2
wanted: 0.75.2
react-native-macos: Not Found
npmGlobalPackages:
"*react-native*": Not Found
Android:
hermesEnabled: true
newArchEnabled: true
iOS:
hermesEnabled: true
newArchEnabled: false
```
### Stacktrace or Logs
```text
android.widget.ScrollView in onTouchEvent at line 831
com.facebook.react.views.scroll.c in onTouchEvent at line 70
android.view.View in dispatchTouchEvent at line 15050
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3115
android.view.ViewGroup in dispatchTouchEvent at line 2788
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
com.microsoft.clarity.c6.i in dispatchTouchEvent at line 22
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
com.microsoft.clarity.c6.i in dispatchTouchEvent at line 22
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
com.microsoft.clarity.c6.i in dispatchTouchEvent at line 22
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
com.microsoft.clarity.c6.i in dispatchTouchEvent at line 22
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
android.view.ViewGroup in dispatchTransformedTouchEvent at line 3121
android.view.ViewGroup in dispatchTouchEvent at line 2802
com.android.internal.policy.DecorView in superDispatchTouchEvent at line 500
com.android.internal.policy.PhoneWindow in superDispatchTouchEvent at line 1912
android.app.Activity in dispatchTouchEvent at line 4299
com.microsoft.clarity.m.i in dispatchTouchEvent at line 2
com.microsoft.clarity.C5.i$b in dispatchTouchEvent at line 31
com.android.internal.policy.DecorView in dispatchTouchEvent at line 458
android.view.View in dispatchPointerEvent at line 15309
android.view.ViewRootImpl$ViewPostImeInputStage in processPointerEvent at line 6778
android.view.ViewRootImpl$ViewPostImeInputStage in onProcess at line 6578
android.view.ViewRootImpl$InputStage in deliver at line 6034
android.view.ViewRootImpl$InputStage in onDeliverToNext at line 6091
android.view.ViewRootImpl$InputStage in forward at line 6057
android.view.ViewRootImpl$AsyncInputStage in forward at line 6222
android.view.ViewRootImpl$InputStage in apply at line 6065
android.view.ViewRootImpl$AsyncInputStage in apply at line 6279
android.view.ViewRootImpl$InputStage in deliver at line 6038
android.view.ViewRootImpl$InputStage in onDeliverToNext at line 6091
android.view.ViewRootImpl$InputStage in forward at line 6057
android.view.ViewRootImpl$InputStage in apply at line 6065
android.view.ViewRootImpl$InputStage in deliver at line 6038
android.view.ViewRootImpl in deliverInputEvent at line 9206
android.view.ViewRootImpl in doProcessInputEvents at line 9157
android.view.ViewRootImpl in enqueueInputEvent at line 9126
android.view.ViewRootImpl$WindowInputEventReceiver in onInputEvent at line 9329
android.view.InputEventReceiver in dispatchInputEvent at line 267
```
### Reproducer
test
### Screenshots and Videos
_No response_ | Platform: Android,Component: ScrollView,Needs: Repro,Needs: Attention | low | Critical |
2,493,833,965 | ant-design | [Select] With showSearch enabled, the Select component's dropdown options behave inconsistently depending on whether mode is multiple | ### What problem does this feature solve?
When mode = multiple:
1: Trigger a suggestion ("sug") request
2: Blur the field (lose focus)
3: Focus the field again; the suggestion API is not requested
When mode is single-select, step 3 requests the suggestion API again.
### What does the proposed API look like?
The expectation is that step 3 behaves consistently in both modes, with no further request to the suggestion API.
| 🐛 Bug,Inactive | low | Major |
2,493,914,607 | next.js | Draft mode cookie not being set on edge runtime | ### Link to the code that reproduces this issue
https://github.com/otoxiep95/next-draft-edge-issue-reproduction
### To Reproduce
1. Clone the repo
2. Run `npm install`
3. Run `npm run dev`
4. Open the browser at `http://localhost:3000/api`
5. Open and edit `app/api/route.ts` and remove `export const runtime = "edge";`
6. Retest the issue
### Current vs. Expected behavior
The Cookie header is set, but its value is wrong, which results in draft mode not being enabled.
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:55:06 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T6020
Available memory (MB): 32768
Available CPU cores: 10
Binaries:
Node: 20.17.0
npm: 10.8.2
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 15.0.0-canary.134 // Latest available version is detected (15.0.0-canary.134).
eslint-config-next: N/A
react: 19.0.0-rc-7771d3a7-20240827
react-dom: 19.0.0-rc-7771d3a7-20240827
typescript: 5.3.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Runtime
### Which stage(s) are affected? (Select all that apply)
next dev (local), Vercel (Deployed)
### Additional context



| bug,Runtime | low | Minor |
2,493,985,249 | transformers | MultiTask Classification and label_names on Trainer | ### System Info
Transformers: 4.40.2
### Who can help?
@muellerzr @SunMarc @ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I’m working on a multi task classification with DistilBert with 4 labels.
I started training the model; it finished the first epoch, then started evaluation and threw the error below at the end of the evaluation. If I take `load_best_model_at_end` out of the trainer args, it runs the eval, but I get no eval loss. I also ran predict and found that I got `label_ids=None`. It seems that `label_names` is not working correctly and is not passed to predict.
I ran:
```python
for batch in trainer.get_eval_dataloader(data['test']):
    print(batch)
    break
```
And got the following:
```
{'input_ids': tensor([[ 101, 67618, 10671, ..., 0, 0, 0],
[ 101, 67618, 10671, ..., 169, 12211, 102],
[ 101, 27746, 13386, ..., 0, 0, 0],
[ 101, 73219, 14002, ..., 0, 0, 0]], device='cuda:0'), 'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0]], device='cuda:0'), 'line_labels': tensor([3, 1, 1, 1], device='cuda:0'), 'cat_labels': tensor([ 9, 16, 16, 16], device='cuda:0'), 'sub_cat_labels': tensor([77, 48, 48, 48], device='cuda:0'), 'motive_labels': tensor([ 2, 34, 34, 34], device='cuda:0')}
```
I really need help figuring out what is going on here; I'm out of options and can't understand it. If you could shed some light, I would appreciate it. I'm at the point where I feel I need to write a custom Trainer, because I have no more solutions to try.
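As a quick sanity check, the batch keys printed above can be compared against the configured `label_names`. The helper below is hypothetical (it is not part of `transformers`); note that the batch contains a `sub_cat_labels` key, while the `label_names` configured in the code that follows list `sub_cal_label`:

```python
def missing_label_columns(label_names, batch_keys):
    """Return configured label names that are absent from a batch."""
    return [name for name in label_names if name not in batch_keys]


# Keys taken from the batch printed above
batch_keys = {
    'input_ids', 'attention_mask',
    'line_labels', 'cat_labels', 'sub_cat_labels', 'motive_labels',
}
# label_names exactly as configured in the TrainingArguments
label_names = ['line_labels', 'cat_labels', 'sub_cal_label', 'motive_labels']

print(missing_label_columns(label_names, batch_keys))  # ['sub_cal_label']
```

If a configured label name is missing from the batch, the Trainer cannot gather that column during evaluation, which could explain `label_ids=None` and the missing `eval_loss`.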
Code:
```
# Defining the metrics
LINE_METRIC = evaluate.load("f1")
CAT_METRIC = evaluate.load("f1")
SUB_CAT_METRIC = evaluate.load("f1")
MOTIVE_METRIC = evaluate.load("f1")
def compute_metrics(eval_pred):
print(eval_pred)
all_logits, all_labels = eval_pred
logits_line, logits_cat, logits_sub_cat, logits_motive = all_logits
line_labels, cat_labels, sub_cat_labels, motive_labels = all_labels
line_predictions = np.argmax(logits_line, axis=-1)
cat_predictions = np.argmax(logits_cat, axis=-1)
sub_cat_predictions = np.argmax(logits_sub_cat, axis=-1)
motive_predictions = np.argmax(logits_motive, axis=-1)
print("PRED")
print(line_predictions, cat_predictions, sub_cat_predictions, motive_predictions)
line_computed_metrics = LINE_METRIC.compute(predictions=line_predictions, references=line_labels, average='weighted')
cat_computed_metrics = CAT_METRIC.compute(predictions=cat_predictions, references=cat_labels, average='weighted')
sub_cat_computed_metrics = SUB_CAT_METRIC.compute(predictions=sub_cat_predictions, references=sub_cat_labels, average='weighted')
motive_computed_metrics = MOTIVE_METRIC.compute(predictions=motive_predictions, references=motive_labels, average='weighted')
print("SCORE")
print(line_computed_metrics, cat_computed_metrics, sub_cat_computed_metrics, motive_computed_metrics)
return {
'f1_line': line_computed_metrics['f1'],
'f1_cat': cat_computed_metrics['f1'],
'f1_sub_cat': sub_cat_computed_metrics['f1'],
'f1_motive': motive_computed_metrics['f1'],
}
output_directory = RESULTS_DIRECTORY
evaluation_strategy = 'epoch'
per_device_train_batch_size = 4
per_device_eval_batch_size = 4
gradint_accumulation_steps = 2
learning_rate = 2e-5
weight_decay = 0.01
max_grad_norm = 1
num_train_epochs = NUM_TRAIN_EPOCHS
lr_scheduler_type = 'linear'
warmup_ratio = 0.05
logging_dir = LOGGING_DIRECTORY
logging_strategy = 'epoch'
save_strategy = 'epoch'
save_total_limit = 1
label_names = ['line_labels', 'cat_labels', 'sub_cal_label','motive_labels']
load_best_model_at_end = True
metric_for_best_model = 'eval_f1_cat'
greater_is_better = True
label_smoothing_factor = 0
#report_to = 'tensorboard'
gradient_checkpointing = False
# Setup training arguments
training_args = TrainingArguments(
output_dir=output_directory,
evaluation_strategy=evaluation_strategy,
learning_rate=learning_rate,
per_device_train_batch_size=per_device_train_batch_size,
per_device_eval_batch_size=per_device_eval_batch_size,
num_train_epochs=num_train_epochs,
weight_decay=weight_decay,
logging_dir=logging_dir,
label_names=label_names,
max_grad_norm=max_grad_norm,
lr_scheduler_type=lr_scheduler_type,
warmup_ratio=warmup_ratio,
logging_strategy=logging_strategy,
save_strategy=save_strategy,
save_total_limit=save_total_limit,
load_best_model_at_end=load_best_model_at_end,
#metric_for_best_model=metric_for_best_model,
#greater_is_better=greater_is_better,
label_smoothing_factor=label_smoothing_factor,
#report_to=report_to,
gradient_checkpointing=gradient_checkpointing
)
#early_stop_callback = EarlyStoppingCallback(3)
# Initialize the Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=data['train'],
eval_dataset=data['test'],
#tokenizer=tokenizer,
compute_metrics=compute_metrics,
data_collator=data_collator,
#callbacks=[early_stop_callback],
)
```
### Expected behavior
Error:
```
KeyError Traceback (most recent call last)
Cell In[36], line 1
----> 1 trainer.train()
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1859, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1857 hf_hub_utils.enable_progress_bars()
1858 else:
-> 1859 return inner_training_loop(
1860 args=args,
1861 resume_from_checkpoint=resume_from_checkpoint,
1862 trial=trial,
1863 ignore_keys_for_eval=ignore_keys_for_eval,
1864 )
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:2298, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
2295 self.control.should_training_stop = True
2297 self.control = self.callback_handler.on_epoch_end(args, self.state, self.control)
-> 2298 self._maybe_log_save_evaluate(tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval)
2300 if DebugOption.TPU_METRICS_DEBUG in self.args.debug:
2301 if is_torch_xla_available():
2302 # tpu-comment: Logging debug metrics for PyTorch/XLA (compile, execute times, ops, etc.)
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:2673, in Trainer._maybe_log_save_evaluate(self, tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval)
2670 self.lr_scheduler.step(metrics[metric_to_check])
2672 if self.control.should_save:
-> 2673 self._save_checkpoint(model, trial, metrics=metrics)
2674 self.control = self.callback_handler.on_save(self.args, self.state, self.control)
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:2765, in Trainer._save_checkpoint(self, model, trial, metrics)
2763 if not metric_to_check.startswith("eval_"):
2764 metric_to_check = f"eval_{metric_to_check}"
-> 2765 metric_value = metrics[metric_to_check]
2767 operator = np.greater if self.args.greater_is_better else np.less
2768 if (
2769 self.state.best_metric is None
2770 or self.state.best_model_checkpoint is None
2771 or operator(metric_value, self.state.best_metric)
2772 ):
KeyError: 'eval_loss'
``` | Good First Issue,bug | low | Critical |
2,493,992,060 | godot | Light2Ds do not affect CanvasItems that are inside their same CanvasLayer | ### Tested versions
- Reproducible in 4.3
- Reproducible in 4.2.2
### System information
Windows 10
### Issue description
There's a possibility this is a misunderstanding, an intentional limitation, but I'm doubtful. There's no mention of this in the documentation.
This is reproducible on **all** rendering methods.
In short, 2D lights do not work **at all** on **CanvasItem**s that are children of a **CanvasLayer** node, even though the lights are inside the same **CanvasLayer**:
<img src=https://github.com/user-attachments/assets/92669c3c-4246-420b-8eb8-428e6c3c6f73 width=50%>
-------
It also means that CanvasItemMaterial's `light_mode` is entirely useless in these cases:
<img src=https://github.com/user-attachments/assets/27e5679d-3004-4243-999e-1cc69e82587e width=50%>
-------
Very importantly, note that **Light2D**s inside a **CanvasLayer** **DO** work:
<img src=https://github.com/user-attachments/assets/31f1c7d7-faf2-48f7-b0a3-1f92741dbfba width=50%>
### Steps to reproduce
Multiple ways, but the simplest is to create a **CanvasLayer** and insert a **Sprite2D** and a **PointLight2D** as children. Make sure to include textures for both.
### Minimal reproduction project (MRP)
[mrp-canvaslayer-light2d-do-not-work.zip](https://github.com/user-attachments/files/16796973/mrp-canvaslayer-light2d-do-not-work.zip)
| bug,topic:rendering,topic:2d | low | Minor |
2,493,995,762 | tensorflow | With the same input and parameter settings, there is a large difference in the output of Dense layer on GPU and CPU. | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
tf 2.12.0
### Custom code
Yes
### OS platform and distribution
Ubuntu 20.04
### Mobile device
_No response_
### Python version
3.10
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
We found that, when training the model, there is a large difference between the outputs of the Dense layer on CPU and GPU when using the same input tensor and parameter settings.
```
CPU output: [[3.4838054e+34]]
GPU output: [[3.4838057e+34]]
2.4758800785707605e+27
```
We tried some similar inputs, but no such problem occurred with them.
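For context, the reported difference 2.4758800785707605e+27 equals 2**91, which is exactly one float32 ULP (unit in the last place) at this magnitude; in other words, the CPU and GPU results are adjacent float32 values. This can be checked with a small pure-Python sketch (no external libraries; the `f32_ulp` helper is ours, not a TensorFlow API):

```python
import struct


def f32_ulp(x: float) -> float:
    """Spacing between x (rounded to float32) and the next float32 up."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    cur = struct.unpack('<f', struct.pack('<I', bits))[0]
    nxt = struct.unpack('<f', struct.pack('<I', bits + 1))[0]
    return nxt - cur


cpu_out = 3.4838054e+34  # CPU result from the report
print(f32_ulp(cpu_out))  # 2.4758800785707605e+27, i.e. 2**91
```

A one-ULP discrepancy at this scale is consistent with the two backends fusing or ordering the floating-point operations of exp(Wx + b) differently, rather than with a logic error in either kernel.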
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
import numpy as np
import pickle
import h5py
tf.random.set_seed(42)
tf.config.experimental.enable_op_determinism()
def chebyshev_distance(A: np.ndarray, B: np.ndarray):
if A is None or B is None:
return 0.0
if A.shape != B.shape:
return 9999999
else:
return float(np.max(np.abs(A - B)))
tf.random.set_seed(42)
x_input = np.array([[37.63115]])
# x_input = np.array([[38.63115]])
# x_input = np.array([[3.763115]])
print(x_input)
dense_layer = tf.keras.layers.Dense(units=1, activation='exponential', use_bias=True, activity_regularizer=None, bias_constraint=None, bias_initializer='random_normal', kernel_initializer='he_normal', bias_regularizer=None, kernel_regularizer=None, kernel_constraint=None)
weights = [np.array([[2.112561]], dtype=np.float32), np.array([0.03791478], dtype=np.float32)]
dense_layer.build((1,))
dense_layer.set_weights(weights)
with tf.device('/CPU:0'):
x_cpu = tf.constant(x_input, dtype=tf.float32)
output_cpu = dense_layer(x_cpu)
print("CPU output:", output_cpu.numpy())
if tf.config.list_physical_devices('GPU'):
with tf.device('/GPU:0'):
x_gpu = tf.constant(x_input, dtype=tf.float32)
output_gpu = dense_layer(x_gpu)
print("GPU output:", output_gpu.numpy())
else:
print("GPU not available.")
output_diff = chebyshev_distance(output_cpu.numpy(), output_gpu.numpy())
print(output_diff)
```
### Relevant log output
_No response_ | stat:awaiting tensorflower,type:bug,comp:ops,TF 2.12 | medium | Critical |
2,494,015,727 | vscode | Zoom in and Zoom out with Ctrl and mouse |
It should be possible to zoom in and zoom out with the Ctrl key and the mouse wheel. | feature-request,zoom | low | Minor |
2,494,022,615 | godot | Black screen frozen and console with infinite errors when launching edit project | ### Tested versions
GoDot Engine 4.3
x86_64 · 64 bit · 15 August 2024
### System information
Windows 10
### Issue description
This is the first time I am using Godot; I have never installed it before.
I'm on a laptop Intel(R) Core(TM) i7-2670QM CPU @ 2.20GHz 2.20 GHz, GTX 560m, 8Go RAM, Win 10 pro.
I installed the latest version on August 29th.
I copied the extracted *.exe into a folder and launched it. The project creation window appears.
When I click Edit on this new project, I get a black window; the Godot editor never opens.
When I launch the console *.exe, I see repeated errors.
Examples of the errors are in the attached file; some are shown here:

[godot_errors.txt](https://github.com/user-attachments/files/16797355/godot_errors.txt)
```console
ERROR: No render pipeline was set before attempting to draw.
at: (servers/rendering/rendering_device.cpp:4050)
ERROR: Attempted to use an unused shader variant (shader is null),
at: (./servers/rendering/renderer_rd/pipeline_cache_rd.h:74)
ERROR: Parameter "pipeline" is null.
at: draw_list_bind_render_pipeline (servers/rendering/rendering_device.cpp:3840)
ERROR: This render pipeline requires (0) bytes of push constant data, supplied: (128)
at: (servers/rendering/rendering_device.cpp:4031)
ERROR: No render pipeline was set before attempting to draw.
at: (servers/rendering/rendering_device.cpp:4050)
ERROR: Attempted to use an unused shader variant (shader is null),
at: (./servers/rendering/renderer_rd/pipeline_cache_rd.h:74)
ERROR: Parameter "pipeline" is null.
at: draw_list_bind_render_pipeline (servers/rendering/rendering_device.cpp:3840)
ERROR: Parameter "shader" is null.
at: uniform_set_create (servers/rendering/rendering_device.cpp:2799)
ERROR: Parameter "uniform_set" is null.
at: draw_list_bind_uniform_set (servers/rendering/rendering_device.cpp:3928)
ERROR: This render pipeline requires (0) bytes of push constant data, supplied: (128)
at: (servers/rendering/rendering_device.cpp:4031)
ERROR: No render pipeline was set before attempting to draw.
at: (servers/rendering/rendering_device.cpp:4050)
ERROR: Attempted to use an unused shader variant (shader is null),
at: (./servers/rendering/renderer_rd/pipeline_cache_rd.h:74)
ERROR: Parameter "pipeline" is null.
at: draw_list_bind_render_pipeline (servers/rendering/rendering_device.cpp:3840)
ERROR: Parameter "shader" is null.
at: uniform_set_create (servers/rendering/rendering_device.cpp:2799)
ERROR: Parameter "uniform_set" is null.
at: draw_list_bind_uniform_set (servers/rendering/rendering_device.cpp:3928)
ERROR: This render pipeline requires (0) bytes of push constant data, supplied: (128)
at: (servers/rendering/rendering_device.cpp:4031)
```
### Steps to reproduce
click on the exe
create a new project
click OK
### Minimal reproduction project (MRP)
none | needs testing | low | Critical |
2,494,029,350 | opencv | blobRectsToImageRects: this function has an error | ### System Information
// example for c++ user
OpenCV version: 4.10.0
Operating System / Platform: windows 11
Compiler & compiler version: Microsoft Visual Studio Enterprise 2019
### Detailed description
blobRectsToImageRects: this function has an error.
### Steps to reproduce
// covert Rect2d to Rect
//![draw_boxes]
for (auto box : keep_boxes)
{
boxes.push_back(Rect(cvFloor(box.x), cvFloor(box.y), cvFloor(box.width - box.x), cvFloor(box.height - box.y)));
}
paramNet.blobRectsToImageRects(boxes, boxes, img.size());
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [ ] I updated to the latest OpenCV version and the issue is still there
- [ ] There is reproducer code and related data files (videos, images, onnx, etc) | bug,incomplete,needs reproducer | low | Critical |
2,494,146,437 | create-react-app | vscode inspection , please look at here | `npx create-react-app ` install JavaScriptreact
- *but vscode can't parse module correctly*

| needs triage,issue: bug report | low | Minor |
2,494,148,817 | yt-dlp | NBC extractor doesn't recognize some URLs | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
United States
### Provide a description that is worded well enough to be understood
The nbc.py extractor doesn't recognize some video URLs. I'm not sure how to handle this one:
https://www.nbc.com/paris2024/video/ali-truwit-found-purpose-pool-after-her-life-changed/para24_sww_alitruwittodayshow_240823
Others can be fixed by changing line 35 of nbc.py from
_VALID_URL = r'https?(?P<permalink>://(?:www\.)?nbc\.com/(?:classic-tv/)?[^/]+/video/[^/]+/(?P<id>(?:NBCE|n)?\d+))'
to
_VALID_URL = r'https?(?P<permalink>://(?:www\.)?nbc\.com/(?:classic-tv/)?[^/]+/video/[^/]+/(?P<id>(?:NBCE|BRVN|OXYN|SYFY|USAN|n)?\d+))'
I'm surprised nobody's noticed this.
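The proposed pattern can be checked directly; the sketch below compiles the reporter's suggested `_VALID_URL` and confirms that a prefixed numeric ID matches while the Paris 2024 slug-style URL still does not (the `SYFY123456` URL is a hypothetical example, not a real page):

```python
import re

# The reporter's proposed replacement for _VALID_URL in nbc.py.
pattern = re.compile(
    r'https?(?P<permalink>://(?:www\.)?nbc\.com/(?:classic-tv/)?'
    r'[^/]+/video/[^/]+/(?P<id>(?:NBCE|BRVN|OXYN|SYFY|USAN|n)?\d+))'
)

# A hypothetical URL with a SYFY-prefixed numeric ID now matches...
assert pattern.match('https://www.nbc.com/some-show/video/some-episode/SYFY123456')
# ...but the Paris 2024 slug-style ID still does not, as noted above.
assert not pattern.match(
    'https://www.nbc.com/paris2024/video/ali-truwit-found-purpose-pool-'
    'after-her-life-changed/para24_sww_alitruwittodayshow_240823'
)
```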
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.nbc.com/paris2024/video/ali-truwit-found-purpose-pool-after-her-life-changed/para24_sww_alitruwittodayshow_240823']
[debug] User config "PRIVATE": ['--write-info-json', '--console-title', '-c', '-i', '-w', '--write-sub', '--write-auto-sub', '--all-subs', '--write-description', '--write-all-thumbnails', '--add-metadata', '--write-annotations', '--no-post-overwrites', '-o', '%(title)s [%(id)s, %(format_id)s, %(width)sx%(height)s].%(ext)s', '--abort-on-unavailable-fragment', '--retries', '20', '--fragment-retries', '20', '--hls-prefer-native', '--no-cache-dir', '-S', 'tbr,vbr,abr', '--ap-mso', 'PRIVATE', '--ap-username', 'PRIVATE', '--ap-password', 'PRIVATE']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out cp1252 (No VT), error cp1252 (No VT), screen cp1252 (No VT)
[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] (pip)
[debug] Python 3.12.2 (CPython AMD64 64bit) - Windows-PRIVATE (OpenSSL 3.0.13 30 Jan 2024)
[debug] exe versions: ffmpeg 6.1.1 (setts), ffprobe 6.1.1, phantomjs 2.1.1, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.02.02, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.43.1, urllib3-2.2.0, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1830 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.08.06 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.08.06 from yt-dlp/yt-dlp)
[generic] Extracting URL: https://www.nbc.com/paris2024/video/ali-truwit-found-purpose-pool-after-her-life-changed/para24_sww_alitruwittodayshow_240823
[generic] para24_sww_alitruwittodayshow_240823: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] para24_sww_alitruwittodayshow_240823: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://www.nbc.com/paris2024/video/ali-truwit-found-purpose-pool-after-her-life-changed/para24_sww_alitruwittodayshow_240823
Traceback (most recent call last):
File "PRIVATE\YoutubeDL.py", line 1626, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "PRIVATE\YoutubeDL.py", line 1761, in __extract_info
ie_result = ie.extract(url)
^^^^^^^^^^^^^^^
File "PRIVATE\yt_dlp\extractor\common.py", line 740, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "PRIVATE\yt_dlp\extractor\generic.py", line 2526, in _real_extract
raise UnsupportedError(url)
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.nbc.com/paris2024/video/ali-truwit-found-purpose-pool-after-her-life-changed/para24_sww_alitruwittodayshow_240823
```
| site-bug,triage | low | Critical |
2,494,161,464 | terminal | Typed input is sometimes wrongly ordered | ### Windows Terminal version
1.20.11781.0
### Windows build number
10.0.19045.3930
### Other Software
Bash from Git for Windows
### Steps to reproduce
I cannot tell exactly how to reproduce.
But it happens regularly.
The prompt is not ready yet because it still executes the last command or evaluates the `PROMPT_COMMAND` of Bash.
Then I already type the next command like `git b -a` (`b` here is a Git alias for `branch -vv`).
Now it regularly happens that, once the prompt is ready, the inserted text is misordered, like `agit b -`, with characters typed later appearing before characters typed earlier.
I first thought I had simply mistyped, but I'm absolutely sure now that I did not: the characters were not inserted in the order I typed them.
I guess there is some race condition somewhere that causes this.
### Expected Behavior
Input appears in the order I typed it.
### Actual Behavior
Input is mixed up. | Area-Output,Issue-Bug,Product-Terminal | low | Major |
2,494,202,601 | ollama | Ollama run codestral gives Error: llama runner process has terminated | ### What is the issue?
Trying to run codestral:22b on a 6800 XT, but I get this error every time:
Error: llama runner process has terminated: signal: segmentation fault (core dumped)
I have 16 GB RAM and 16 GB VRAM. What is the issue here? I was able to successfully run other models like starcoder2:3b.
### OS
Linux
### GPU
AMD
### CPU
Intel
### Ollama version
0.3.8 | bug,amd | low | Critical |
2,494,291,485 | tensorflow | tf.raw_ops.Round outputs zeros for any integer tensor | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.17.0
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Expects the same tensor as the input according to the specification https://www.tensorflow.org/api_docs/python/tf/raw_ops/Round
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
x = tf.constant([-2, -1, 1, 2, 3])
tf.raw_ops.Round(x=x)
```
### Relevant log output
```shell
>>> import tensorflow as tf
>>> x = tf.constant([-2, -1, 1, 2, 3], dtype=tf.int32)
>>> tf.raw_ops.Round(x=x)
<tf.Tensor: shape=(5,), dtype=int32, numpy=array([0, 0, 0, 0, 0])>
```
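For comparison, NumPy's `np.round` is effectively an identity on integer dtypes, which is the behavior the `tf.raw_ops.Round` documentation leads one to expect; a minimal sketch:

```python
import numpy as np

x = np.array([-2, -1, 1, 2, 3], dtype=np.int32)
# Rounding an integer array should return the values unchanged,
# not zeros -- this is what np.round does.
assert np.array_equal(np.round(x), x)
```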
| stat:awaiting tensorflower,type:bug,comp:ops,regression issue,2.17 | medium | Critical |
2,494,311,645 | flutter | Change the default look of CupertinoListSection to the new default | From the API docs, `CupertinoListSection` is modeled after the iOS settings screen. However, there seem to be a bunch of fidelity issues when comparing the settings screen and the first example in the API docs (`flutter create --sample=cupertino.CupertinoListSection.1 mysample`):
1. Padding is missing around the list section itself
2. Rounded corners are missing on the list section itself
3. Rounded corners are missing on the leading color box
| platform-ios,framework,c: API break,a: fidelity,f: cupertino,has reproducible steps,P3,team-design,triaged-design,found in release: 3.24,found in release: 3.25 | low | Major |
2,494,333,906 | vscode | ERR Illegal value for lineNumber: Error: Illegal value for lineNumber | Sorry, not repro steps but a non-minified stacktrace ⏬
```
ERR Illegal value for lineNumber: Error: Illegal value for lineNumber
at TextModel.getLineMaxColumn (vscode-file://vscode-app/Users/jrieken/Code/vscode/out/vs/editor/common/model/textModel.js:638:23)
at CodeEditorWidget.getBottomForLineNumber (vscode-file://vscode-app/Users/jrieken/Code/vscode/out/vs/editor/browser/widget/codeEditor/codeEditorWidget.js:412:50)
at StickyScrollController.findScrollWidgetState (vscode-file://vscode-app/Users/jrieken/Code/vscode/out/vs/editor/contrib/stickyScroll/browser/stickyScrollController.js:513:62)
at StickyScrollController._updateState (vscode-file://vscode-app/Users/jrieken/Code/vscode/out/vs/editor/contrib/stickyScroll/browser/stickyScrollController.js:481:38)
at async StickyScrollController._renderStickyScroll (vscode-file://vscode-app/Users/jrieken/Code/vscode/out/vs/editor/contrib/stickyScroll/browser/stickyScrollController.js:440:21)
``` | bug,editor-sticky-scroll | low | Critical |
2,494,337,317 | PowerToys | PowerToys Awake - define actions for laptop lid state changes | ### Microsoft PowerToys version
0.83.0
### Installation method
Chocolatey
### Running as admin
No
### Area(s) with issue?
Awake
### Steps to reproduce
Turn on "Keep Awake Indefinitely".
Close laptop lid.
Put laptop in a bag.
### ✔️ Expected Behavior
System suspends.
### ❌ Actual Behavior
System stays awake and can't cool down, gets extremely hot and runs fans at max, draining the battery in under an hour and potentially posing a burn hazard, or corrupting firmware.
The Awake function's main use is to keep a computer open at a workstation from suspending when the user steps away from their desk for a few minutes, not usually to keep a computer active when its lid is closed. While that behavior might be desired in rare circumstances, there should be a dedicated setting to indicate that it is intentional. Users who leave this function on all the time will forget to turn it off when putting their laptop in a bag.
### Other Software
_No response_ | Idea-Enhancement,Product-Awake | low | Minor |
2,494,355,845 | PowerToys | Preview tool: always on top by default | ### Description of the new feature / enhancement
Add an option to make the preview tool always on top automatically.
### Scenario when this would be used?
- When we want to manually select another file to preview without losing the preview window
- When we have to compare a file with another image in a different window
### Supporting information
You can check what the QuickLook app does. | Needs-Triage | low | Minor |
2,494,356,732 | ui | [bug]: Incorrect Usage of <DrawerClose> in Documentation Causes HTML Button Nesting Error | ### Describe the bug
I noticed an issue in the documentation for the <Drawer> component that could cause HTML hydration errors. The current example has a <button> element nested inside another <button> element, which is invalid in HTML and leads to the following warning:
Warning: In HTML `<button>` cannot be a descendant of `<button>`. This will cause a hydration error.
The current documentation demonstrates the usage of the Drawer component as follows:
```jsx
<Drawer>
<DrawerTrigger>Open</DrawerTrigger>
<DrawerContent>
<DrawerHeader>
<DrawerTitle>Are you absolutely sure?</DrawerTitle>
<DrawerDescription>This action cannot be undone.</DrawerDescription>
</DrawerHeader>
<DrawerFooter>
<Button>Submit</Button>
<DrawerClose> // <---
<Button variant="outline">Cancel</Button>
</DrawerClose>
</DrawerFooter>
</DrawerContent>
</Drawer>
```
The recommended fix is to update the documentation to include the asChild property on <DrawerClose>, as shown below:
```jsx
<Drawer>
<DrawerTrigger>Open</DrawerTrigger>
<DrawerContent>
<DrawerHeader>
<DrawerTitle>Are you absolutely sure?</DrawerTitle>
<DrawerDescription>This action cannot be undone.</DrawerDescription>
</DrawerHeader>
<DrawerFooter>
<Button>Submit</Button>
<DrawerClose asChild> // <---
<Button variant="outline">Cancel</Button>
</DrawerClose>
</DrawerFooter>
</DrawerContent>
</Drawer>
```
### Affected component/components
DrawerClose
### How to reproduce
- Implement the code as per the current documentation.
- Observe the hydration error in the browser console.
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Operating System: Windows 11
Browser: Microsoft Edge
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,494,378,107 | PowerToys | Regression in Environment Variable Profile Path handling | ### Microsoft PowerToys version
0.83.0
### Installation method
WinGet
### Running as admin
Yes
### Area(s) with issue?
Environment Variables
### Steps to reproduce
- Create a Profile, in this profile set a variable named Path with value "%Path_PowerToys_ProfileName%"
- Activate the profile, it will automatically create Path_PowerToys_ProfileName holding your normal Path variable
### ✔️ Expected Behavior
Old behavior: Path_PowerToys_ProfileName would be resolved in the overloaded Path variable, so printing $env:Path would show the original Path content.
### ❌ Actual Behavior
New behavior: Path_PowerToys_ProfileName is not resolved in the overloaded Path variable, so printing $env:Path shows only "%Path_PowerToys_ProfileName%".
This is a problem, as this was the only way to add more content to Path depending on the profile. The resolving of these references happens on environment load, so adding a variable %profile_Path% and putting a reference to %profile_Path% in the normal Path (which would be more intuitive) does not work. So now there is no way to add content to Path per profile, which was one of the only reasons I used this tool...
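A minimal sketch of the old (expected) resolution step, assuming PowerToys expands `%VAR%` references against the current environment when a profile is applied; the `resolve` helper and the example Path value here are hypothetical:

```python
import re

def resolve(value, env):
    # Expand %VAR% references against a mapping; unknown names are left as-is.
    return re.sub(r'%([^%]+)%', lambda m: env.get(m.group(1), m.group(0)), value)

env = {'Path_PowerToys_ProfileName': r'C:\Windows;C:\Tools'}
# Old (expected) behavior: the reference resolves when the profile is applied.
assert resolve('%Path_PowerToys_ProfileName%', env) == r'C:\Windows;C:\Tools'
# The reported (new) behavior is as if this expansion step no longer runs,
# leaving the literal '%Path_PowerToys_ProfileName%' in $env:Path.
```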
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,494,417,250 | PowerToys | File Explorer: Preview Pane - Source code file preview shrinks on a monitor with a smaller scale factor | ### Microsoft PowerToys version
0.83.0
### Installation method
GitHub
### Running as admin
None
### Area(s) with issue?
File Explorer: Preview Pane
### Steps to reproduce
1. Connect at least 2 monitors.
2. Set the scaling of the main monitor to 225% (or just something bigger than 100).
3. Set the external monitor scaling to 100%
4. Open a preview of a "source code" file
5. Move the explorer window with an open file preview between the monitors
### ✔️ Expected Behavior
The same size (visually) of the file preview.
### ❌ Actual Behavior
File preview shrinks on the screen with a lower scaling (probably proportionally to the scaling factor difference).
## Details
In my case it's:
<img src="https://github.com/user-attachments/assets/fc158579-b760-4e8a-98c3-717fae69c5fe" width="500">
1: 3840 x 2400 (scale 225%) - native Dell XPS 17 9720 screen
2: 2560 x 1440 (scale 100%) - Dell U2722D
3: 2560 x 1440 (scale 100%) - Dell U2722D
### Screen 1 (native one) preview:
<img src="https://github.com/user-attachments/assets/5e99506d-3dcf-489f-a510-ab1d91896887" width="500">
### Screen 2 and 3 (Dell) preview:
<img src="https://github.com/user-attachments/assets/47cd0f0e-3bc0-4380-9351-afe6ec8f2fb6" width="500">
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,494,433,383 | vscode | pg up/down inside Ctrl+F |
Type: <b>Bug</b>
Place the cursor inside the file search box (Ctrl+F)
and enter a one-line arbitrary search pattern.
Then press Page Up / Page Down: the page in the background will scroll (half a line for me)
every time I alternate between Page Up and Page Down. It does not scroll if I press the same key repeatedly, say Page Down, and it always scrolls down, even when pressing Page Up, when jumping from the end of the entered value.
VS Code version: Code 1.92.2 (fee1edb8d6d72a0ddff41e5f71a671c23ed924b9, 2024-08-14T17:29:30.058Z)
OS version: Windows_NT x64 10.0.22631
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i7-11850H @ 2.50GHz (16 x 2496)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.73GB (7.08GB free)|
|Process Argv|C:\\Users\\morgann\\source\\repos\\s0006e-1-math-library-mogggen\\textures\\cube.obj --crash-reporter-id 3220d67f-342d-4da4-84f2-e30250abe943|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (76)</summary>
Extension|Author (truncated)|Version
---|---|---
rust-bundle|1Yi|1.0.0
vscode-sql-formatter|adp|1.4.4
blazor-snippet-pack|adr|2.6.1
azurite|Azu|3.32.0
htmlplay|bia|0.0.10
markdown-mermaid|bie|1.23.1
bash-snippets|cas|1.0.0
gltf-vscode|ces|2.5.0
fzf-vscode|def|0.0.1
vscode-axe-linter|deq|4.9.2
vscode-wavefront|dmn|1.0.1
vscode-rust-syntax|dun|0.0.32
rust-syntax|dus|0.6.1
vscode-html-css|ecm|2.0.10
EditorConfig|Edi|0.16.4
syntax-highlighter|evg|0.5.0
file-icons|fil|1.1.0
dependi|fil|0.7.8
code-runner|for|0.12.2
vscode-drawio|hed|1.6.6
prettier-sql-vscode|inf|1.6.0
prettier-rust|jin|0.1.9
svg|joc|1.5.4
open-in-vim|jon|0.7.0
cmake-language-support-vscode|jos|0.0.9
rust-doc-viewer|JSc|4.2.0
aspnetcorerazor-html-css-class-completion|kev|1.0.3
vsc-python-indent|Kev|1.18.0
asm-code-lens|maz|2.6.1
start-git-bash|McC|1.2.1
vscode-docker|ms-|1.29.2
csdevkit|ms-|1.9.55
csharp|ms-|2.39.29
vscode-dotnet-runtime|ms-|2.1.5
vscodeintellicode-csharp|ms-|2.1.11
vscode-edge-devtools|ms-|2.1.5
data-workspace-vscode|ms-|0.5.0
mssql|ms-|1.23.0
sql-bindings-vscode|ms-|0.4.0
sql-database-projects-vscode|ms-|1.4.3
debugpy|ms-|2024.10.0
python|ms-|2024.12.3
vscode-pylance|ms-|2024.8.2
jupyter|ms-|2024.7.0
jupyter-keymap|ms-|1.1.2
vscode-jupyter-cell-tags|ms-|0.1.9
vscode-jupyter-slideshow|ms-|0.1.6
remote-containers|ms-|0.380.0
remote-wsl|ms-|0.88.2
anycode-rust|ms-|0.0.6
cmake-tools|ms-|1.19.49
cpptools|ms-|1.21.6
cpptools-extension-pack|ms-|1.3.0
powershell|ms-|2024.2.2
batch-runner|Nil|1.3.2
material-icon-theme|PKi|5.10.0
inline-sql-syntax|quf|2.16.0
vscode-xml|red|0.27.1
bash-debug|rog|0.3.9
rust-analyzer|rus|0.3.2089
partial-diff|ryu|1.4.3
preview-vscode|sea|2.3.7
rust-grammar|sib|0.1.0
shader|sle|1.1.5
sonarlint-vscode|Son|4.9.1
rust-pack|Swe|0.3.38
even-better-toml|tam|0.19.2
pdf|tom|1.2.2
cmake|twx|0.0.17
vscode-lldb|vad|1.10.0
learn-vim|vin|0.0.28
intellicode-api-usage-examples|Vis|0.2.8
vscodeintellicode|Vis|1.3.1
clang-format|xav|1.9.0
plsql-language|xyz|1.8.2
rust-mod-generator|Zha|1.0.10
(4 theme extensions excluded)
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805:30301674
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
vscaat:30438848
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythongtdpath:30769146
welcomedialog:30910333
pythonnoceb:30805159
asynctok:30898717
pythonregdiag2:30936856
pythonmypyd1:30879173
2e7ec940:31000449
pythontbext0:30879054
accentitlementst:30995554
dsvsc016:30899300
dsvsc017:30899301
dsvsc018:30899302
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
g316j359:31013175
pythoncenvpt:31062603
a69g1124:31058053
dvdeprecation:31068756
dwnewjupytercf:31046870
impr_priority:31102340
nativerepl1:31104043
refactort:31108082
pythonrstrctxt:31112756
flighttreat:31119336
wkspc-onlycs-t:31111718
wkspc-ranged-c:31125598
fje88620:31121564
aajjf12562:31125793
```
</details>
<!-- generated by issue reporter --> | editor-find,under-discussion | low | Critical |