| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,637,665,946 | flutter | Mac_x64 build_tests_2_4 is 2.13% flaky | <!-- meta-tags: To be used by the automation script only, DO NOT MODIFY.
{
"name": "Mac_x64 build_tests_2_4"
}
-->
The post-submit test builder `Mac_x64 build_tests_2_4` had a flaky ratio 2.13% for the past (up to) 100 commits, which is above our 2.00% threshold.
One recent flaky example for the same commit: https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_x64%20build_tests_2_4/4260
Commit: https://github.com/flutter/flutter/commit/22a7afd99aeb34eb59abb01eb01bdaf30eb7d0b1
Flaky builds:
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_x64%20build_tests_2_4/4260
https://ci.chromium.org/ui/p/flutter/builders/prod/Mac_x64%20build_tests_2_4/4188
Recent test runs:
https://flutter-dashboard.appspot.com/#/build?taskFilter=Mac_x64%20build_tests_2_4
Please follow https://github.com/flutter/flutter/blob/master/docs/infra/Reducing-Test-Flakiness.md#fixing-flaky-tests to fix the flakiness and re-enable the test after validating the fix (internal dashboard to validate: go/flutter_test_flakiness).
| P2,c: flake,team-tool,triaged-tool | low | Major |
2,637,675,945 | pytorch | Allow using context managers in torch.fx | ### 🚀 The feature, motivation and pitch
I'm working on automatically adding profiling to a module graph, and as `torch.autograd.profiler.profile` returns a context manager, it is not possible to do that by wrapping the modules into `torch.fx.GraphModule`s that call the module.
It is possible to invoke most ops in https://github.com/pytorch/pytorch/blob/d622b490d62e5439295bab2b89986eefae8ee5fc/aten/src/ATen/core/interned_strings.h with one workaround or another, e.g.
```python
import builtins
import operator
# prim::GetAttr (foo.bar)
graph.call_function(builtins.getattr, (foo, "bar"))
# prim::__get_item__ (foo["bar"])
graph.call_function(operator.getitem, (foo, "bar"))
```
But there doesn't seem to be a way to call `prim::Enter` or `prim::Exit`.
### Alternatives
Alternatively, this particular use case could be solved by the profiler API exposing a built-in way of wrapping a module in a wrapper module that calls it with a profiler context. To be useful, though, the wrappers would have to support arbitrary signatures for the modules; a `Tensor -> Tensor` requirement like in `torch.nn.Sequential` is quite limiting.
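The alternative described above can be sketched without any PyTorch dependency: a signature-preserving wrapper that enters a context manager around the wrapped callable. `profile` here is a stand-in for `torch.autograd.profiler.profile`, not the real API.

```python
from contextlib import contextmanager

@contextmanager
def profile():
    # Stand-in for torch.autograd.profiler.profile (assumption for this sketch).
    yield

class ProfiledWrapper:
    """Calls the wrapped module inside a context manager, forwarding any signature."""

    def __init__(self, module, ctx_factory=profile):
        self.module = module
        self.ctx_factory = ctx_factory

    def __call__(self, *args, **kwargs):
        with self.ctx_factory():
            return self.module(*args, **kwargs)

# Arbitrary signatures are forwarded unchanged:
wrapped = ProfiledWrapper(lambda x, scale=1: x * scale)
```

The point of the sketch is the `*args, **kwargs` forwarding — it avoids the `Tensor -> Tensor` restriction mentioned above.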
### Additional context
_No response_
cc @ezyang @SherlockNoMad @EikanWang @jgong5 @wenzhe-nrv @albanD | triaged,module: fx,module: python frontend | low | Minor |
2,637,682,940 | kubernetes | Container using in-memory emptyDir was evicted on DiskPressure | ### What happened?
This is my Pod/Container definition:
```
apiVersion: v1
kind: Pod
metadata:
annotations:
prometheus.io/path: /metrics
prometheus.io/port: "1024"
prometheus.io/scrape: "true"
labels:
app: ingress-haproxy-internal
app.kubernetes.io/instance: ingress-haproxy-internal
app.kubernetes.io/name: kubernetes-ingress
controller-revision-hash: 7b74744854
pod-template-generation: "5"
name: ingress-haproxy-internal-kubernetes-ingress-fbdc9
namespace: ingress-haproxy
spec:
containers:
- args:
- --default-ssl-certificate=ingress-haproxy/ingress-haproxy-internal-kubernetes-ingress-default-cert
- --configmap=ingress-haproxy/ingress-haproxy-internal-kubernetes-ingress
- --http-bind-port=8080
- --https-bind-port=8443
- --ingress.class=haproxy-int
- --publish-service=ingress-haproxy/ingress-haproxy-internal-kubernetes-ingress
- --log=info
- --prometheus
env:
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: POD_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
image: hub.willhaben.at:8448/haproxytech/kubernetes-ingress:3.0.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 1042
scheme: HTTP
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 1
name: kubernetes-ingress-controller
ports:
- containerPort: 8080
name: http
protocol: TCP
- containerPort: 8443
name: https
protocol: TCP
- containerPort: 1024
name: stat
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 1042
scheme: HTTP
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 1
resources:
requests:
cpu: 100m
memory: 512Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
runAsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
startupProbe:
failureThreshold: 20
httpGet:
path: /healthz
port: 1042
scheme: HTTP
periodSeconds: 1
successThreshold: 1
timeoutSeconds: 1
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /tmp
name: tmp
subPath: tmp
- mountPath: /run
name: tmp
subPath: run
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-fffsq
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: ip-10-11-24-31.eu-central-1.compute.internal
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
runAsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
serviceAccount: ingress-haproxy-internal-kubernetes-ingress
serviceAccountName: ingress-haproxy-internal-kubernetes-ingress
terminationGracePeriodSeconds: 60
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/disk-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/memory-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/pid-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/unschedulable
operator: Exists
volumes:
- emptyDir:
medium: Memory
sizeLimit: 64Mi
name: tmp
- name: kube-api-access-fffsq
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
```
and among all other pods it got evicted as first on DiskPressure with the following message:
```
The node was low on resource: ephemeral-storage. Threshold quantity: 2139512454, available: 1732380Ki. Container kubernetes-ingress-controller was using 27984Ki, request is 0, has larger consumption of ephemeral-storage.
```
### What did you expect to happen?
The `27984Ki` should count as memory usage and not ephemeral-storage usage, so my pod should not be evicted for using more ephemeral storage than it requested.
### How can we reproduce it (as minimally and precisely as possible)?
1. Run a Pod with an in-memory emptyDir volume
2. Write something to the volume
3. Check the node's /stats/summary
4. The container should not report any ephemeral-storage usage
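A stripped-down repro of the steps above (the pod name, image, and write size here are illustrative, not from the original report):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-emptydir-repro
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "dd if=/dev/zero of=/tmp/blob bs=1M count=16 && sleep 3600"]
    volumeMounts:
    - mountPath: /tmp
      name: tmp
  volumes:
  - name: tmp
    emptyDir:
      medium: Memory
      sizeLimit: 64Mi
```

After the write, `kubectl get --raw /api/v1/nodes/<node>/proxy/stats/summary` should attribute the 16Mi to memory, not to the container's ephemeral storage.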
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
Client Version: v1.31.2
Kustomize Version: v5.4.2
Server Version: v1.31.0-eks-a737599
```
</details>
### Cloud provider
<details>
EKS
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
NAME="Amazon Linux"
VERSION="2023"
ID="amzn"
ID_LIKE="fedora"
VERSION_ID="2023"
PLATFORM_ID="platform:al2023"
PRETTY_NAME="Amazon Linux 2023.5.20240916"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2023"
HOME_URL="https://aws.amazon.com/linux/amazon-linux-2023/"
DOCUMENTATION_URL="https://docs.aws.amazon.com/linux/"
SUPPORT_URL="https://aws.amazon.com/premiumsupport/"
BUG_REPORT_URL="https://github.com/amazonlinux/amazon-linux-2023"
VENDOR_NAME="AWS"
VENDOR_URL="https://aws.amazon.com/"
SUPPORT_END="2028-03-15"
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| kind/bug,sig/storage,needs-triage | low | Critical |
2,637,789,446 | tensorflow | Unable to register CUDA plug-ins running the docker image latest-gpu-jupyter | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
2.18.0
### Custom code
No
### OS platform and distribution
docker desktop 4.35.1 , ubuntu 24.04.1 LTS, WSL 2.3.24.0
### Mobile device
None
### Python version
Python 3.11.0rc1 (provided by tensorflow docker image)
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
12.7
### GPU model and memory
NVIDIA GeForce RTX 4060 Ti 16G
### Current behavior?
I strictly followed the instructions provided in:
https://www.tensorflow.org/install/docker
https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html
Got correct results running a sample workload (as suggested in the NVIDIA Container Toolkit installation manual)

Downloaded the tensorflow/tensorflow latest-gpu-jupyter image and ran the container.
Opened a new Jupyter notebook (http://127.0.0.1:8888/tree?token=...)
After importing tensorflow I wanted to check the GPU support.
Got error messages and an empty list of available GPUs.
`2024-11-06 10:31:50.143673: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1730889110.283427 23 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1730889110.322596 23 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-11-06 10:31:50.712357: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.`
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
tf.__version__
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
```
### Relevant log output
```shell
2024-11-06 10:31:50.143673: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1730889110.283427 23 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1730889110.322596 23 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-11-06 10:31:50.712357: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
'2.18.0'
Num GPUs Available: 0
2024-11-06 10:31:54.748888: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: UNKNOWN ERROR (34)
```
| stat:awaiting tensorflower,type:bug,comp:gpu,TF 2.18 | low | Critical |
2,637,808,354 | react | [DevTools Bug] Cannot remove node "2546" because no matching node was found in the Store. | ### Website or app
e-commerce
### Repro steps
login
### How often does this bug happen?
Every time
### DevTools package (automated)
react-devtools-extensions
### DevTools version (automated)
6.0.1-c7c68ef842
### Error message (automated)
Cannot remove node "2546" because no matching node was found in the Store.
### Error call stack (automated)
```text
at chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1173889
at v.emit (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1140783)
at chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1142390
at bridgeListener (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1552529)
```
### Error component stack (automated)
_No response_
### GitHub query string (automated)
```text
https://api.github.com/search/issues?q=Cannot remove node because no matching node was found in the Store. in:title is:issue is:open is:public label:"Component: Developer Tools" repo:facebook/react
```
| Type: Bug,Status: Unconfirmed,Component: Developer Tools | low | Critical |
2,637,818,429 | rust | Rustc fails to compile a program with ThinLTO and split-debuginfo = "packed" | Hi!
A person on Reddit [posted](https://www.reddit.com/r/rust/comments/1gj57up/comment/lvd5rvj/) a strange build error when ThinLTO is enabled. I performed additional build tests in different configurations and can confirm the error on current Rustc compilers (see the versions below). My environment is Fedora 41 + AMD Ryzen 5900X (x86-64). The test project is https://github.com/anza-xyz/agave on the `master` branch at commit `144925eda5eba98ef28a47a659be68b93211cdb2`. The test command is `cargo +stable test --profile release-with-debug`/`cargo +nightly test --profile release-with-debug`.
The original report is when ThinLTO is enabled, some binaries fail to be built with the `duplicate split compilation unit` error. I performed the build in multiple configurations and here are my results:
Build ok:
```
[profile.release-with-debug]
inherits = "release"
debug = true
#split-debuginfo = "packed"
lto = "thin"
```
Build ok:
```
[profile.release-with-debug]
inherits = "release"
debug = true
split-debuginfo = "packed"
#lto = "thin"
```
Build ok:
```
[profile.release-with-debug]
inherits = "release"
debug = true
split-debuginfo = "packed"
lto = "fat"
```
Build ok:
```
[profile.release-with-debug]
inherits = "release"
debug = true
split-debuginfo = "unpacked"
lto = "thin"
```
Build fails:
```
[profile.release-with-debug]
inherits = "release"
debug = true
split-debuginfo = "packed"
lto = "thin"
```
Build fails:
```
[profile.release-with-debug]
inherits = "release"
debug = true
split-debuginfo = "packed"
lto = "thin"
codegen-units = 1
```
According to the tests, only the combination of ThinLTO and `split-debuginfo = "packed"` is buggy. Disabling either of these options, or enabling fat LTO instead of ThinLTO, resolves the issue.
I expected to see this happen: the build with `lto = "thin"` + `split-debuginfo = "packed"` is successful
Instead, this happened: the build fails with the `duplicate split compilation unit` error
### Meta
The issue is reproduced on both Rustc versions: stable and current nightly
`rustc --version --verbose`:
```
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: x86_64-unknown-linux-gnu
release: 1.82.0
LLVM version: 19.1.1
```
`rustc +nightly --version --verbose`
```
rustc 1.84.0-nightly (fbab78289 2024-11-04)
binary: rustc
commit-hash: fbab78289dd8c6e8860034e0048cfb538f217700
commit-date: 2024-11-04
host: x86_64-unknown-linux-gnu
release: 1.84.0-nightly
LLVM version: 19.1.3
```
Providing `RUST_BACKTRACE=1` doesn't give more information so no backtraces here.
| A-debuginfo,A-codegen,T-compiler,C-bug,A-LTO | low | Critical |
2,637,862,543 | PowerToys | Batch convert from HEIC to JPEG or PNG | ### Description of the new feature / enhancement
It would be great to have a tool for batch photo conversion from HEIC to JPEG or PNG file formats.
### Scenario when this would be used?
Many existing applications don't yet support HEIC.
Converting files today takes several manual steps: right-click an HEIC file, edit in the Photos app, save as JPG.
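Until such a tool exists, the conversion can be scripted. The sketch below maps `*.heic` files to `.jpg` targets and shells out to ImageMagick's `magick` command — assuming an ImageMagick build with HEIC support is on the PATH; the command name and its availability are assumptions, not part of the original request.

```python
import shutil
import subprocess
from pathlib import Path

def jpeg_target(src: Path) -> Path:
    """Map foo.heic -> foo.jpg next to the source file."""
    return src.with_suffix(".jpg")

def batch_convert(folder: Path) -> None:
    # Requires an ImageMagick build with HEIC support (assumption).
    magick = shutil.which("magick")
    if magick is None:
        raise RuntimeError("ImageMagick not found on PATH")
    for src in folder.glob("*.heic"):
        subprocess.run([magick, str(src), str(jpeg_target(src))], check=True)

if __name__ == "__main__":
    batch_convert(Path("."))
```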
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,637,872,767 | tensorflow | tf.autodiff.ForwardAccumulator._watch(primal, tangent) erroneously refers to dtype.is_floating which does not exist for a Keras layer. | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
No
### Source
binary
### TensorFlow version
2.17.0
### Custom code
No
### OS platform and distribution
Linux ubuntu 24.04
### Mobile device
_No response_
### Python version
3.12.3
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
The bug is that `tf.autodiff.ForwardAccumulator._watch(primal, tangent)` refers to the attribute `primal.dtype.is_floating`, but this causes a crash: `primal.dtype` is now a string variable, so it does not have the attribute `is_floating`.
Here is the error message I see, from the standalone code below.
File "/home/me/.local/lib/python3.12/site-packages/tensorflow/python/eager/forwardprop.py", line 411, in _watch
if not primal.dtype.is_floating:
^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'is_floating'
### Standalone code to reproduce the issue
```shell
import tensorflow as tf
x = tf.constant([[2.0, 3.0], [1.0, 4.0]])
targets = tf.constant([[1.], [-1.]])
dense = tf.keras.layers.Dense(1)
dense.build([None, 2])
with tf.autodiff.ForwardAccumulator(
primals=dense.kernel,
tangents=tf.constant([[1.], [0.]])) as acc:
loss = tf.reduce_sum((dense(x) - targets) ** 2.)
print(acc.jvp(loss))
```
### Relevant log output
_No response_ | stat:awaiting response,type:bug,stale,comp:apis,2.17 | low | Critical |
2,637,872,830 | next.js | FFMpeg Wasm lib not loading with turbopack dev enabled. | ### Link to the code that reproduces this issue
https://codesandbox.io/p/devbox/7mqmx8
### To Reproduce
1. Run the application using next dev with --turbopack flag enabled.
2. Check the browser preview. If the library loaded, it will print "ffmpeg Loaded"; otherwise it will print "ffmpeg not loaded" (for debug purposes only).
### Current vs. Expected behavior
**Current Behaviour**
- The FFmpeg lib does not load with the --turbopack flag enabled
**Expected Behaviour**
- It should work with that flag as well. The same happens with other wasm libs, e.g. the ImageMagick wasm.
### Provide environment information
```bash
Operating System:
Platform: linux
Arch: x64
Version: #1 SMP PREEMPT_DYNAMIC Sun Aug 6 20:05:33 UTC 2023
Available memory (MB): 4102
Available CPU cores: 2
Binaries:
Node: 20.9.0
npm: 9.8.1
Yarn: 1.22.19
pnpm: 8.10.2
Relevant Packages:
next: 15.0.2 // Latest available version is detected (15.0.2).
eslint-config-next: 14.2.1
react: 18.3.1
react-dom: 18.3.1
typescript: 5.4.5
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Turbopack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
I've tested this with the Next.js 15.0.2 release with the --turbopack flag enabled and it's not working. Without that flag the library loads perfectly fine, even with earlier Next.js versions. | bug,Turbopack,linear: turbopack | low | Critical |
2,637,948,046 | flutter | [camera_windows] Camera with given device id already exists. Existing camera must be disposed before creating it again | ### Steps to reproduce
camera_error: Camera with given device id already exists. Existing camera must be disposed before creating it again
### Expected results
camera should work
### Actual results
camera_error: Camera with given device id already exists. Existing camera must be disposed before creating it again
### Code sample
```
import 'dart:async';
import 'dart:developer';
import 'dart:io';
import 'package:camera_platform_interface/camera_platform_interface.dart';
import 'package:flutter/cupertino.dart';
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
import 'package:flutter_bloc/flutter_bloc.dart';
import 'package:responsive_sizer/responsive_sizer.dart';
import '../InspectionPhotos/Bloc/InspectionPhotosBloc.dart';
import '../InspectionPhotos/Bloc/InspectionPhotosEvent.dart';
class CameraCapture extends StatefulWidget {
final inspectionId, inspectionData;
const CameraCapture(
{super.key, required this.inspectionData, required this.inspectionId});
@override
State<CameraCapture> createState() => _MyAppState();
}
class _MyAppState extends State<CameraCapture> {
List<CameraDescription> _cameras = <CameraDescription>[];
int _cameraIndex = 0;
int _cameraId = -1;
bool _initialized = false;
Size? _previewSize;
String? _capturedImagePath;
final MediaSettings _mediaSettings = const MediaSettings(
resolutionPreset: ResolutionPreset.medium,
fps: 15,
videoBitrate: 200000,
audioBitrate: 32000,
enableAudio: true,
);
StreamSubscription<CameraErrorEvent>? _errorStreamSubscription;
StreamSubscription<CameraClosingEvent>? _cameraClosingStreamSubscription;
@override
void initState() {
super.initState();
WidgetsFlutterBinding.ensureInitialized();
fetch();
}
fetch() async {
await _fetchCameras();
await _initializeCamera();
}
@override
void dispose() {
_disposeCurrentCamera();
_errorStreamSubscription?.cancel();
_cameraClosingStreamSubscription?.cancel();
super.dispose();
}
Future<void> _fetchCameras() async {
try {
List<CameraDescription> cameras =
await CameraPlatform.instance.availableCameras();
if (cameras.isNotEmpty) {
setState(() {
_cameras = cameras;
_cameraIndex %= cameras.length;
});
}
} on PlatformException catch (e) {
log(e.toString());
}
}
Future<void> _initializeCamera() async {
if (_initialized) return;
if (_cameras.isEmpty) return;
try {
final CameraDescription camera = _cameras[_cameraIndex];
if (_cameraId >= 0) {
await _disposeCurrentCamera();
}
_cameraId = await CameraPlatform.instance.createCameraWithSettings(
camera,
_mediaSettings,
);
_errorStreamSubscription?.cancel();
_errorStreamSubscription = CameraPlatform.instance
.onCameraError(_cameraId)
.listen(_onCameraError);
_cameraClosingStreamSubscription?.cancel();
_cameraClosingStreamSubscription = CameraPlatform.instance
.onCameraClosing(_cameraId)
.listen(_onCameraClosing);
await CameraPlatform.instance.initializeCamera(_cameraId);
final event =
await CameraPlatform.instance.onCameraInitialized(_cameraId).first;
_previewSize = Size(event.previewWidth, event.previewHeight);
setState(() {
_initialized = true;
});
} on CameraException catch (e) {
debugPrint('Failed to initialize camera: ${e.code}: ${e.description}');
await _disposeCurrentCamera();
}
}
Future<void> _takePicture() async {
try {
final XFile file = await CameraPlatform.instance.takePicture(_cameraId);
setState(() {
_capturedImagePath = file.path;
});
} catch (e) {
debugPrint('Error capturing image: $e');
}
}
void _onCancelPreview() {
setState(() {
_capturedImagePath = null;
});
}
void _onSaveImage() {
if (_capturedImagePath != null) {
_onCancelPreview();
}
}
void _onCameraClosing(CameraClosingEvent event) {
if (mounted) {
_disposeCurrentCamera();
}
}
Future<void> _disposeCurrentCamera() async {
if (_cameraId >= 0 && _initialized) {
try {
await CameraPlatform.instance.dispose(_cameraId);
setState(() {
_initialized = false;
_cameraId = -1;
_previewSize = null;
});
} on CameraException catch (e) {
log(e.toString());
}
}
}
Future<void> _switchCamera() async {
if (_cameras.isNotEmpty) {
_cameraIndex = (_cameraIndex + 1) % _cameras.length;
await _disposeCurrentCamera(); // Ensure current camera is fully disposed
await _initializeCamera(); // Reinitialize with new camera index
}
}
void _onCameraError(CameraErrorEvent event) {
if (mounted) {
_disposeCurrentCamera();
_fetchCameras();
}
}
Widget _buildPreview() {
return CameraPlatform.instance.buildPreview(_cameraId);
}
Widget _buildCapturedImagePreview() {
return Stack(
children: [
Positioned.fill(
child: Image.file(
File(_capturedImagePath!),
fit: BoxFit.cover,
),
),
Positioned(
bottom: 20,
left: 20,
right: 20,
child: Row(
mainAxisAlignment: MainAxisAlignment.spaceBetween,
children: [
ElevatedButton(
onPressed: _onCancelPreview,
child: const Text("Cancel"),
),
ElevatedButton(
onPressed: _onSaveImage,
child: const Text("Save"),
),
],
),
),
],
);
}
@override
Widget build(BuildContext context) {
return Scaffold(
backgroundColor: Theme.of(context).primaryColorDark,
appBar: AppBar(
title: Text(
"Inspection Photos",
style: Theme.of(context).appBarTheme.titleTextStyle,
),
centerTitle: true,
iconTheme: Theme.of(context).appBarTheme.iconTheme,
automaticallyImplyLeading: true,
),
body: Stack(
children: <Widget>[
if (_initialized &&
_cameraId > 0 &&
_previewSize != null &&
_capturedImagePath == null)
SizedBox(
height: 100.h,
width: 100.w,
child: _buildPreview(),
),
if (_capturedImagePath != null) _buildCapturedImagePreview(),
if (_cameras.isNotEmpty && _capturedImagePath == null)
Align(
alignment: Alignment.centerRight,
child: Padding(
padding: EdgeInsets.symmetric(horizontal: 20.sp),
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
if (_cameras.length > 1)
IconButton(
onPressed: _switchCamera,
icon: const Icon(
CupertinoIcons.switch_camera_solid,
color: Colors.white,
),
),
Padding(
padding: EdgeInsets.only(
bottom: 20.sp, top: 10.sp, right: 15.sp),
child: IconButton(
onPressed: _initialized ? _takePicture : null,
icon: const Icon(
CupertinoIcons.camera_circle,
size: 60,
color: Colors.white,
),
),
),
],
),
),
),
],
),
);
}
}
```
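One plausible cause in the sample above: `_disposeCurrentCamera` only disposes when `_initialized` is true, so a camera whose initialization failed or never completed is never released, and the next `createCameraWithSettings` call for the same device fails with the "already exists" error. A more defensive dispose — guarding on the camera id alone — might look like this (a sketch based only on the code shown, untested):

```dart
Future<void> _disposeCurrentCamera() async {
  // Dispose whenever a native camera was created, even if initialization
  // never finished; otherwise the device stays registered and recreating
  // it reports "Camera with given device id already exists".
  if (_cameraId >= 0) {
    try {
      await CameraPlatform.instance.dispose(_cameraId);
    } on CameraException catch (e) {
      log(e.toString());
    }
    setState(() {
      _initialized = false;
      _cameraId = -1;
      _previewSize = null;
    });
  }
}
```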
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
[Upload media here]
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
[√] Flutter (Channel stable, 3.24.0, on Microsoft Windows [Version 10.0.22631.4317], locale en-IN)
• Flutter version 3.24.0 on channel stable at D:\Mobile Devlopment\flutter_windows_3.13.2-stable\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 80c2e84975 (3 months ago), 2024-07-30 23:06:49 +0700
• Engine revision b8800d88be
• Dart version 3.5.0
• DevTools version 2.37.2
[√] Windows Version (Installed version of Windows is version 10 or higher)
[!] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at C:\Users\devus\AppData\Local\Android\sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: C:\Program Files\Android\Android Studio\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.6+0-b2043.56-10027231)
! Some Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[√] Chrome - develop for the web
• Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio Enterprise 2022 17.9.5)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Enterprise
• Visual Studio Enterprise 2022 version 17.9.34723.18
• Windows 10 SDK version 10.0.22621.0
[√] Android Studio (version 2022.3)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.6+0-b2043.56-10027231)
[√] VS Code (version 1.94.2)
• VS Code at C:\Users\devus\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.98.0
[√] Connected device (3 available)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4317]
• Chrome (web) • chrome • web-javascript • Google Chrome 127.0.6533.120
• Edge (web) • edge • web-javascript • Microsoft Edge 130.0.2849.68
[√] Network resources
• All expected network resources are available.
! Doctor found issues in 1 category. | platform-windows,p: camera,package,a: desktop,has reproducible steps,P3,team-windows,triaged-windows,found in release: 3.24,found in release: 3.27 | low | Critical |
2,637,970,726 | godot | GLTFDocument.append_from_file blocks the UI thread | ### Tested versions
Reproducible in Godot v4.0.4.stable
### System information
Android & iOS - Godot v4.0.4.stable - Mobile
### Issue description
Invoking GLTFDocument.append_from_file from a thread to load a complicated GLTF file blocks the main UI thread.
### Steps to reproduce
run Main.tscn to check result
### Minimal reproduction project (MRP)
[EmptyView2.zip](https://github.com/user-attachments/files/17647121/EmptyView2.zip)
| discussion,topic:import,performance | low | Minor |
2,638,074,043 | flutter | [go_router][go_router_builder] The redirect is executed twice when entering the URL directly | ### Steps to reproduce
1. Run it in a browser (Chrome),
e.g. the url is `http://localhost:53306/`
2. Put the url into the address bar,
e.g. `http://localhost:53306/login`
3. The redirect will run twice, then show the root page,
e.g. `http://localhost:53306/`
### Expected results
Still in Login page
### Actual results
After waiting a while or clicking another window, it will go back to the home page.
### Code sample
<details open><summary>main.dart</summary>
```dart
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
import 'package:web_app/router.dart';
import 'package:flutter_web_plugins/url_strategy.dart';
void main() {
WidgetsFlutterBinding.ensureInitialized();
usePathUrlStrategy();
runApp(const App());
}
class App extends StatelessWidget {
const App({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp.router(
debugShowCheckedModeBanner: false,
routerConfig: GoRouter(
navigatorKey: rootKey,
debugLogDiagnostics: true,
routes: $appRoutes,
initialLocation: "/",
redirect: (context, state) {
print("_>___ app: get router.base path: ${Uri.base.path}/base: ${Uri.base}");
print("__>__ app: get router.location: ${state.matchedLocation}");
final uri = Uri.parse(state.matchedLocation);
final paths = [ContentRouteData().location];
if (paths.contains(uri.path)) {
return LoginRouteData().location;
}
return null;
},
errorBuilder: (context, state) {
// return ErrorRoute().build(context, state);
return Container();
},
// refreshListenable: _appProvider,
),
);
}
}
class IndexPage extends StatelessWidget {
final Widget child;
const IndexPage({super.key, required this.child});
@override
Widget build(BuildContext context) {
return child;
}
}
class HomePage extends StatelessWidget {
const HomePage({super.key});
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: Column(
children: [
const Text("Home Page"),
ElevatedButton(
onPressed: () {
context.go(ContentRouteData().location);
},
child: const Text("go content"),
),
],
),
),
);
}
}
class MemberPage extends StatelessWidget {
const MemberPage({super.key});
@override
Widget build(BuildContext context) {
return const Scaffold(
body: Center(
child: Text("Member Page"),
),
);
}
}
class ContentPage extends StatelessWidget {
const ContentPage({super.key});
@override
Widget build(BuildContext context) {
return const Scaffold(
body: Center(
child: Text("Content Page"),
),
);
}
}
class LoginPage extends StatelessWidget {
const LoginPage({super.key});
@override
Widget build(BuildContext context) {
return const Scaffold(
body: Center(
child: Text("Login Page"),
),
);
}
}
```
</details>
<details open><summary>router.dart</summary>
```dart
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
import 'package:web_app/main.dart';
part 'router.g.dart';
final rootKey = GlobalKey<NavigatorState>();
final shellKey = GlobalKey<NavigatorState>(debugLabel: "index");
@TypedShellRoute<IndexRouteData>(routes: [
TypedGoRoute<HomeRouteData>(path: "/", routes: [
// TypedGoRoute<LoginRouteData>(path: "login"),
]),
TypedGoRoute<ContentRouteData>(path: "/content"),
])
class IndexRouteData extends ShellRouteData {
static final GlobalKey<NavigatorState> $parentNavigatorKey = rootKey;
static final GlobalKey<NavigatorState> $navigatorKey = shellKey;
@override
Page<Function> pageBuilder(BuildContext context, GoRouterState state, Widget navigator) {
return NoTransitionPage(child: IndexPage(child: navigator));
}
}
class HomeRouteData extends GoRouteData {
@override
Page<void> buildPage(BuildContext context, GoRouterState state) {
return const NoTransitionPage(
child: HomePage(),
);
}
}
class ContentRouteData extends GoRouteData {
@override
Page<void> buildPage(BuildContext context, GoRouterState state) {
return const NoTransitionPage(
child: ContentPage(),
);
}
}
@TypedGoRoute<LoginRouteData>(path: "/login")
class LoginRouteData extends GoRouteData {
// static final GlobalKey<NavigatorState> $parentNavigatorKey = rootKey;
@override
Page<void> buildPage(BuildContext context, GoRouterState state) {
return const NoTransitionPage(
child: LoginPage(),
);
}
}
```
</details>
<details open><summary>pubspec.yaml</summary>
```yaml
name: web_app
description: "A new Flutter project."
publish_to: 'none' # Remove this line if you wish to publish to pub.dev
version: 1.0.0+1
environment:
sdk: ^3.5.3
dependencies:
flutter:
sdk: flutter
# The following adds the Cupertino Icons font to your application.
# Use with the CupertinoIcons class for iOS style icons.
cupertino_icons: ^1.0.8
go_router: ^14.4.1
dev_dependencies:
flutter_test:
sdk: flutter
flutter_lints: ^4.0.0
build_runner: ^2.4.11
go_router_builder: ^2.7.0
flutter:
uses-material-design: true
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/bf04c873-a751-4005-8f10-573a7243bc41
</details>
### Logs
<details open><summary>Logs</summary>
```console
_>___ app: get router.base path: /login/base: http://localhost:53306/login
__>__ app: get router.location: /login
[GoRouter] Full paths for routes:
├─ (ShellRoute)
│ ├─/ (Widget)
│ └─/content (Widget)
└─/login (Widget)
[GoRouter] setting initial location /
_>___ app: get router.base path: /login/base: http://localhost:53306/login
__>__ app: get router.location: /
[GoRouter] Full paths for routes:
├─ (ShellRoute)
│ ├─/ (Widget)
│ └─/content (Widget)
└─/login (Widget)
[GoRouter] setting initial location /
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
flutter -v doctor
[✓] Flutter (Channel stable, 3.24.3, on macOS 14.6.1 23G93 darwin-arm64, locale zh-Hant-TW)
• Flutter version 3.24.3 on channel stable at /Users/mosil/development/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (8 weeks ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Connected device (5 available)
• Chrome (web) • chrome • web-javascript • Google Chrome 130.0.6723.92
```
</details>
| platform-web,package,has reproducible steps,P2,p: go_router,p: go_router_builder,team-go_router,triaged-go_router,found in release: 3.24,found in release: 3.27 | low | Critical |
2,638,094,047 | ui | [feat]: Calendar - Support showWeekNumber | ### Feature description
It would be nice if the Calendar component supported "showWeekNumber" from the underlying React DayPicker.
The prop does work and the numbers are showing, but the styling is distorted.

### Affected component/components
Calendar
### Additional Context
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues and PRs | area: request | low | Minor |
2,638,136,485 | godot | "dead_key" button is not correctly converted to label and triggers twice on second press (QWERTZ keyboard) | ### Tested versions
- Reproduced in 4.3 stable and 4.2 stable
### System information
Windows 10 - Godot 4.3 - Compatibility - QWERTZ keyboard
### Issue description
I am working on an input remapping system and ran into an issue with the conversion of a dead-key press into a displayable character. The "dead_keys" I am talking about are the keys marked red in the top right and top left of the QWERTZ layout:
<img width="712" alt="383262997-9e4175ce-ab36-48cb-a535-de75e55a9848" src="https://github.com/user-attachments/assets/760760f1-f66f-4538-8d4b-837623276c13">
These buttons behave uniquely when writing: the first press outputs nothing, and what is printed then depends on the key pressed next. Examples for the QWERTZ layout include:
| first_input | text_output | second_input | text_output | note |
|:---------------:|:-------------:|:--------------:|:-------------:|:----------------------------:|
| ^ | NONE | ^ | ^^ | pressing dead character again outputs it twice |
| ^ | NONE | e | ê | pressing compatible character second |
| ^ | NONE | SPACE_BAR | ^ | there is no space after the ^ |
| ^ | NONE | w | ^w | non compatible characters |
| ´ | NONE | ´ | ´´ | pressing dead character again outputs it twice |
| ´ | NONE | e | é | pressing compatible character second |
| ´ | NONE | SPACE_BAR | ´ | there is no space after the ´ |
| ´ | NONE | w | ´w | non compatible characters |
# the issue
When trying to output the label of the key with `OS.get_keycode_string(event.get_key_label_with_modifiers())`, the output for dead_keys changes on the second input to a different symbol that is not printed on the key.
### Steps to reproduce
Create a new scene with a Control node and use the `_input()` function:
```gdscript
func _input(event: InputEvent) -> void:
if event is InputEventKey and event.is_pressed() and not event.is_echo():
print(OS.get_keycode_string(event.get_key_label_with_modifiers()))
```
## output for the `^` key
When pressing the ^ key repeatedly, it outputs the following (the first input outputs AsciiCircum, and the second input triggers the input function twice with BackSlash as output):
```
AsciiCircum
BackSlash
BackSlash
AsciiCircum
BackSlash
BackSlash
AsciiCircum
```
I find this strange for two reasons:
1. why is the input function triggered twice for the second input?
2. why does it output BackSlash on the second input when there is no backslash on the ^ key
Meanwhile, holding down Ctrl and/or Shift and pressing the ^ key repeatedly produces the following results (not triggering the input function twice, and always outputting `mods`+BackSlash):
```
Ctrl+BackSlash
Ctrl+BackSlash
Ctrl+BackSlash
Shift+BackSlash
Shift+BackSlash
Shift+BackSlash
Shift+Ctrl+BackSlash
Shift+Ctrl+BackSlash
```
Also, holding down Alt and pressing the ^ key repeatedly will not trigger the input function twice (but now alternates between Alt+AsciiCircum and Alt+BackSlash):
```
Alt+AsciiCircum
Alt+BackSlash
Alt+AsciiCircum
Alt+BackSlash
Alt+AsciiCircum
```
Holding Ctrl and Alt and pressing the ^ key repeatedly always produces BackSlash:
```
Alt+Ctrl+BackSlash
Alt+Ctrl+BackSlash
Alt+Ctrl+BackSlash
Alt+Ctrl+BackSlash
```
Small nitpick: why is the output Alt+Ctrl and not Ctrl+Alt (Ctrl first) for the `OS.get_keycode_string(event.get_key_label_with_modifiers())` function, as that's the way we say it in speech? The same goes for Shift+Ctrl and Shift+Alt+Ctrl.
## output for the `´` key
For the ´ key something similar occurs when it is pressed repeatedly, with alternations between ´ and BracketRight (the second input triggers twice):
```
´
BracketRight
BracketRight
´
BracketRight
BracketRight
´
```
Note: the output is just `´`, not `QuoteRight`, because only `QuoteLeft` is valid on a US keyboard layout.
When holding Ctrl and pressing the ´ key, it always outputs BracketRight:
```
Ctrl+BracketRight
Ctrl+BracketRight
```
Otherwise `´` seems to behave similarly to `^`: the output differs (BackSlash becomes BracketRight, which is likewise not found on the `´` key), but the triggering behavior is identical.
### Minimal reproduction project (MRP)
MRP: [dead_key_bug.zip](https://github.com/user-attachments/files/17647818/dead_key_bug.zip)
| bug,platform:windows,topic:input | low | Critical |
2,638,148,041 | flutter | [video_player] Video persisting memory after dispose | ### Steps to reproduce
1. Press button to show video
2. Wait for video to initialize
3. Press the button to hide video and dispose
4. Repeat
### Expected results
Expected memory to be released after disposing the video.
### Actual results
Memory keeps increasing when displaying videos. It does not happen every iteration.
I have tried on iOS with and without Impeller. An increase is definitely more frequent when the video plays after initialising.
It appears to have something to do with the memory allocated in `FVPVideoPlayer copyPixelBuffer`.
Device : iPhone 8
Software version: 14.7.1
Regarding flutter versions I have tried 3.22.1 ( as in flutter doctor ) and 3.24.4 ( on stable channel )
1st screenshot - VM memory that was created and persisted. You can see persisted memory increases over time.
2nd screenshot - IOSurface, which seems to be the problematic entry; you can also see the persisted memory increasing over time, as well as the code trace (all entries are the same)
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:video_player/video_player.dart';
void main() {
runApp(
MaterialApp(
home: _App(),
),
);
}
class _App extends StatelessWidget {
@override
Widget build(BuildContext context) {
return Scaffold(
key: const ValueKey<String>('home_page'),
appBar: AppBar(title: const Text('Video player example')),
body: OnOffVideo(),
);
}
}
class OnOffVideo extends StatefulWidget {
const OnOffVideo({Key? key}) : super(key: key);
@override
State<OnOffVideo> createState() => _OnOffVideoState();
}
class _OnOffVideoState extends State<OnOffVideo> {
int counter = 0;
bool isVisible = false;
@override
Widget build(BuildContext context) {
return Column(crossAxisAlignment: CrossAxisAlignment.center, children: [
Center(
child: GestureDetector(
onTap: () {
setState(() {
isVisible = !isVisible;
});
},
child: Container(
color: isVisible ? Colors.red : Colors.green,
height: 50,
width: 50,
child: Text(isVisible ? 'Hide' : 'Show'),
),
),
),
if (isVisible) _BumbleBeeRemoteVideo(key: ValueKey<int>(counter++)),
]);
}
}
class _BumbleBeeRemoteVideo extends StatefulWidget {
const _BumbleBeeRemoteVideo({Key? key}) : super(key: key);
@override
_BumbleBeeRemoteVideoState createState() => _BumbleBeeRemoteVideoState();
}
class _BumbleBeeRemoteVideoState extends State<_BumbleBeeRemoteVideo> {
late VideoPlayerController _controller;
@override
void initState() {
super.initState();
_controller = VideoPlayerController.networkUrl(
Uri.parse('https://flutter.github.io/assets-for-api-docs/assets/videos/bee.mp4'),
videoPlayerOptions: VideoPlayerOptions(mixWithOthers: true),
);
_controller.addListener(() {
setState(() {});
});
_controller.setLooping(true);
_controller.initialize().then((value) => _controller.play());
}
@override
void dispose() {
_controller.dispose();
super.dispose();
}
@override
Widget build(BuildContext context) {
return SingleChildScrollView(
child: Column(
children: <Widget>[
Container(padding: const EdgeInsets.only(top: 20.0)),
const Text('With remote mp4'),
Container(
padding: const EdgeInsets.all(20),
child: AspectRatio(
aspectRatio: _controller.value.aspectRatio,
child: Stack(
alignment: Alignment.bottomCenter,
children: <Widget>[
VideoPlayer(_controller),
],
),
),
),
],
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
<img width="1728" alt="Screenshot 2024-11-06 at 13 41 44" src="https://github.com/user-attachments/assets/226ea290-75a5-442c-b379-91f4ee59202f">
<img width="1727" alt="Screenshot 2024-11-06 at 13 40 49" src="https://github.com/user-attachments/assets/b1d1a4c0-c6fa-4fa2-9a5f-62fef23f9066">
</details>
### Logs
<details open><summary>Logs</summary>
```console
[Paste your logs here]
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[!] Flutter (Channel [user-branch], 3.22.1, on macOS 14.6.1 23G93 darwin-arm64, locale en-PT)
! Flutter version 3.22.1 on channel [user-branch] at /Users/user/flutter
Currently on an unknown channel. Run `flutter channel` to switch to an official channel.
If that doesn't fix the issue, reinstall Flutter by following instructions at https://flutter.dev/docs/get-started/install.
! Upstream repository unknown source is not a standard remote.
Set environment variable "FLUTTER_GIT_URL" to unknown source to dismiss this error.
• Framework revision a14f74ff3a (6 months ago), 2024-05-22 11:08:21 -0500
• Engine revision 55eae6864b
• Dart version 3.4.1
• DevTools version 2.34.3
• If those were intentional, you can disregard the above warnings; however it is recommended to use "git" directly to perform update checks and upgrades.
[✓] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at /Users/user/Library/Android/sdk
• Platform android-35, build-tools 35.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2023.3)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.10+0-17.0.10b1087.21-11572160)
[✓] VS Code (version 1.92.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension can be installed from:
🔨 https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[✓] Connected device (3 available)
• macOS (desktop) • macos • darwin-arm64 • macOS 14.6.1 23G93 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.6.1 23G93 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 130.0.6723.93
[✓] Network resources
• All expected network resources are available.
```
</details>
| platform-ios,p: video_player,package,perf: memory,has reproducible steps,P2,team-ios,triaged-ios,found in release: 3.24,found in release: 3.27 | low | Critical |
2,638,165,875 | pytorch | Persistent memory leak from failed pinned memory allocation | ### 🐛 Describe the bug
I'm not sure whether this comes from PyTorch, the Nvidia kernel driver, or something else.
```python
import torch
torch.empty((1024,1024,1024), dtype=torch.float32, device='cpu', pin_memory=True)
```
Output:
```
Traceback (most recent call last):
File "[...]/leakmem.py", line 2, in <module>
torch.empty((1024,1024,1024), dtype=torch.float32, pin_memory=True)
RuntimeError: CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
The failure happens when the allocated size is sufficiently large. That's alright, I understand it has to do with limitations of pinned memory. What is not fine is that 4 gigs of memory apparently does get consumed and stays that way after the process exits, even after the user has no running processes left. I don't see anything relevant in `/dev/shm/`. I don't know how to free it other than by rebooting.
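For reference, the request in the snippet above asks to pin a single contiguous 4 GiB buffer, which is presumably where the size limitation kicks in; the arithmetic:

```python
# Size of the failing request: a (1024, 1024, 1024) float32 tensor.
elems = 1024 * 1024 * 1024        # 2**30 elements
nbytes = elems * 4                # float32 is 4 bytes per element
gib = nbytes / 1024**3
print(f"{nbytes} bytes = {gib:.1f} GiB")  # 4294967296 bytes = 4.0 GiB
```

Since smaller allocations succeed, requesting `pin_memory=False` for allocations above whatever threshold triggers the error could be a workaround, though it does not explain the leak.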
### Versions
#### Affected system
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: openSUSE Tumbleweed (x86_64)
GCC version: (SUSE Linux) 14.2.1 20241007 [revision 4af44f2cf7d281f3e4f3957efce10e8b2ccb2ad3]
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.40
Python version: 3.11.10 (main, Sep 09 2024, 17:03:08) [GCC] (64-bit runtime)
Python platform: Linux-6.11.5-2-default-x86_64-with-glibc2.40
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070
Nvidia driver version: 550.127.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: AuthenticAMD
Model name: AMD Ryzen 7 5800X3D 8-Core Processor
CPU family: 25
Model: 33
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU(s) scaling MHz: 70%
CPU max MHz: 4550.0000
CPU min MHz: 550.0000
BogoMIPS: 6803.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap
L1d cache: 256 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 96 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Not affected; BHI: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] torch-tb-profiler==0.4.3
[pip3] torchvision==0.20.1
[pip3] triton==3.1.0
[conda] Could not collect
```
#### NOT affected system
On this box I can't trigger the same error, even when pinning 32 gigs, which is most of the memory. Trying to pin more than the physical memory instead simply results in "RuntimeError: CUDA error: out of memory", which is the desired behavior. No memory leak.
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.1 LTS (x86_64)
GCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.39
Python version: 3.12.3 (main, Sep 11 2024, 14:17:37) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-48-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Tesla V100S-PCIE-32GB
Nvidia driver version: 550.120
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 15
On-line CPU(s) list: 0-14
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6226R CPU @ 2.90GHz
CPU family: 6
Model: 85
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 15
Stepping: 7
BogoMIPS: 5786.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat vnmi umip pku ospke avx512_vnni md_clear arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 480 KiB (15 instances)
L1i cache: 480 KiB (15 instances)
L2 cache: 60 MiB (15 instances)
L3 cache: 240 MiB (15 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-14
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] Could not collect
```
cc @ptrblck @msaroufim @jamesr66a | needs reproduction,module: cuda,module: memory usage,triaged,module: memory format | low | Critical |
2,638,174,017 | TypeScript | "Organize imports" eats comments | ### 🔎 Search Terms
source.organizeImports, VSCode, "source action", semicolon, remove, change
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries about organize imports
### ⏯ Playground Link
https://www.typescriptlang.org/play/?#code/JYWwDg9gTgLgBAcgM4RAUxgC2AOwOYIBQA9MXAMaro4yEDcQA
### 💻 Code
```ts
import 'something'
// comment
;
```
### 🙁 Actual behavior
When organizing inputs in the given file, the output is this:
```ts
import 'something';
```
### 🙂 Expected behavior
I expected the comment to be preserved.
### Additional information about the issue
The semicolon is important here. Without it the comment is preserved.
#48126 is perhaps tangentially related. | Bug,Help Wanted | low | Minor |
2,638,181,346 | yt-dlp | [ZenYandexChannel] unable to extract channel data | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Germany, Worldwide
### Provide a description that is worded well enough to be understood
ERROR: [ZenYandexChannel] gorkyfilm: Unable to extract channel data; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp -vU https://dzen.ru/gorkyfilm?tab=longs
[debug] Command-line config: ['-vU', '--ignore-config', 'https://dzen.ru/gorkyfilm?tab=longs']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.11.04.232933 from yt-dlp/yt-dlp-nightly-builds [282e19db8] (pip)
[debug] Python 3.11.2 (CPython x86_64 64bit) - Linux-6.1.0-26-amd64-x86_64-with-glibc2.36 (OpenSSL 3.0.14 4 Jun 2024, glibc 2.36)
[debug] exe versions: ffmpeg 6.1.2 (fdk,setts), ffprobe 6.1.2, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.21.0, brotlicffi-1.1.0.0, certifi-2024.08.30, curl_cffi-0.7.1, mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.11.04.232933 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.11.04.232933 from yt-dlp/yt-dlp-nightly-builds)
[ZenYandexChannel] Extracting URL: https://dzen.ru/gorkyfilm?tab=longs
[ZenYandexChannel] gorkyfilm: Downloading webpage
[ZenYandexChannel] gorkyfilm: Redirecting
ERROR: [ZenYandexChannel] gorkyfilm: Unable to extract channel data; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "/home/user/myenv/lib/python3.11/site-packages/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/myenv/lib/python3.11/site-packages/yt_dlp/extractor/yandexvideo.py", line 382, in _real_extract
data = self._search_json(
^^^^^^^^^^^^^^^^^^
File "/home/user/myenv/lib/python3.11/site-packages/yt_dlp/extractor/common.py", line 1360, in _search_json
json_string = self._search_regex(
^^^^^^^^^^^^^^^^^^^
File "/home/user/myenv/lib/python3.11/site-packages/yt_dlp/extractor/common.py", line 1346, in _search_regex
raise RegexNotFoundError(f'Unable to extract {_name}')
```
| site-bug | low | Critical |
2,638,219,972 | flutter | Text selection toolbar is affected by TextField transformations | ### Steps to reproduce
1. Create a `TextField` widget in a Flutter app.
2. Wrap it with a `Transform` widget (e.g. to scale or rotate the widget).
The use case here is a _drawing app_ that allows users to transform (scale & rotate) boxes with text and provides WYSIWYG input on the TextFields, regardless of where they are placed and how they are transformed.
### Expected results
* The text selection toolbar should always be sized in screen coordinates, i.e. always have the same size, regardless of the transformation of the TextField.
* The text selection toolbar should always be drawn parallel to the screen's x-axis (i.e. perfectly horizontal), regardless of the transformation of the TextField.
Besides,
* Text selection should still work correctly.
### Actual results
* The text selection toolbar scales and rotates with the TextField, i.e. it will potentially be drawn too big / small or rotated.
* Only parts of the TextField respond to tap events on a rotated TextField.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'dart:math';
import 'package:flutter/material.dart';
void main() {
runApp(const MainApp());
}
class MainApp extends StatelessWidget {
const MainApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
body: Center(
child: ConstrainedBox(
constraints: const BoxConstraints(maxWidth: 300),
child: Transform.rotate(
angle: pi / 8.0,
child: const TextField(),
),
),
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>


</details>
### Logs
_No response_
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.4, on macOS 15.1 24B83 darwin-arm64, locale en-US)
• Flutter version 3.24.4 on channel stable at /Users/redge/Shared/SDKs/flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 603104015d (13 days ago), 2024-10-24 08:01:25 -0700
• Engine revision db49896cf2
• Dart version 3.5.4
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/redge/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 16.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 16B40
• CocoaPods version 1.16.1
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] IntelliJ IDEA Ultimate Edition (version 2022.3.1)
• IntelliJ at /Applications/IntelliJ IDEA.app
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
[✓] VS Code (version 1.95.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.101.20241031
[✓] Connected device (6 available)
• Daniels iPad Air (mobile) • 00008112-000108C13E81A01E • ios • iOS 18.1 22B83
• iPhone Mini Daniel (mobile) • 00008110-001C410C2112401E • ios • iOS 18.1 22B83
• iPhone SE (3rd generation) (mobile) • 60B82120-7A81-4EFD-84E1-594FA2FA296D • ios • com.apple.CoreSimulator.SimRuntime.iOS-18-1 (simulator)
• macOS (desktop) • macos • darwin-arm64 • macOS 15.1 24B83 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 15.1 24B83 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 130.0.6723.93
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| a: text input,framework,f: material design,has reproducible steps,P2,workaround available,team-text-input,triaged-text-input,found in release: 3.24,found in release: 3.27 | low | Major |
2,638,239,334 | langchain | DOC: It is not clear if AzureAISearchRetriever is using hybrid search | ### URL
https://python.langchain.com/docs/integrations/retrievers/azure_ai_search/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
It is not clear whether AzureAISearchRetriever uses hybrid search, textual search, or vector search.
### Idea or request for content:
_No response_ | 🤖:docs | low | Minor |
2,638,262,910 | opencv | [4.10] opencl kernel build error | ### System Information
OpenCV: 4.10
Operating System / Platform: Windows 10 x64
Compiler & version: MSVC 17.11.5
### Detailed description
I am trying to update the vcpkg recipe to OpenCV 4.10.
This is the configuration log:
```
[1/1] "E:/code/downloads/tools/cmake-3.30.1-windows/cmake-3.30.1-windows-i386/bin/cmake.exe" -E chdir ".." "E:/code/downloads/tools/cmake-3.30.1-windows/cmake-3.30.1-windows-i386/bin/cmake.exe" "E:/code/buildtrees/opencv4/src/4.10.0-c4b3b284f0.clean" "-G" "Ninja" "-DCMAKE_BUILD_TYPE=Release" "-DCMAKE_INSTALL_PREFIX=E:/code/packages/opencv4_x64-windows-release" "-DFETCHCONTENT_FULLY_DISCONNECTED=ON" "-DENABLE_CONFIG_VERIFICATION=ON" "-DOPENCV_SKIP_SYSTEM_PROCESSOR_DETECTION=TRUE" "-DAARCH64=" "-DX86_64=1" "-DX86=" "-DARM=" "-DCMAKE_CXX_STANDARD=17" "-DINSTALL_TO_MANGLED_PATHS=OFF" "-DOpenCV_INSTALL_BINARIES_PREFIX=" "-DOPENCV_BIN_INSTALL_PATH=bin" "-DOPENCV_INCLUDE_INSTALL_PATH=include/opencv4" "-DOPENCV_LIB_INSTALL_PATH=lib" "-DOPENCV_3P_LIB_INSTALL_PATH=lib/manual-link/opencv4_thirdparty" "-DOPENCV_CONFIG_INSTALL_PATH=share/opencv4" "-DOPENCV_FFMPEG_USE_FIND_PACKAGE=FFMPEG" "-DOPENCV_FFMPEG_SKIP_BUILD_CHECK=TRUE" "-DCMAKE_DEBUG_POSTFIX=d" "-DOPENCV_DLLVERSION=4" "-DOPENCV_DEBUG_POSTFIX=d" "-DOPENCV_GENERATE_SETUPVARS=OFF" "-DOPENCV_GENERATE_PKGCONFIG=ON" "-DBUILD_DOCS=OFF" "-DBUILD_EXAMPLES=OFF" "-DBUILD_PERF_TESTS=OFF" "-DBUILD_TESTS=OFF" "-Dade_DIR=E:/code/installed/x64-windows-release/share/ade" "-DBUILD_IPP_IW=OFF" "-DBUILD_ITT=OFF" "-DBUILD_JASPER=OFF" "-DBUILD_JPEG=OFF" "-DBUILD_OPENEXR=OFF" "-DBUILD_OPENJPEG=OFF" "-DBUILD_PNG=OFF" "-DBUILD_PROTOBUF=OFF" "-DBUILD_TBB=OFF" "-DBUILD_TIFF=OFF" "-DBUILD_WEBP=OFF" "-DBUILD_ZLIB=OFF" "-DBUILD_opencv_apps=OFF" "-DBUILD_opencv_java=OFF" "-DBUILD_opencv_js=OFF" "-DBUILD_JAVA=OFF" "-DBUILD_ANDROID_PROJECT=OFF" "-DBUILD_ANDROID_EXAMPLES=OFF" "-DBUILD_PACKAGE=OFF" "-DBUILD_WITH_DEBUG_INFO=ON" "-DBUILD_WITH_STATIC_CRT=0" "-DCURRENT_INSTALLED_DIR=E:/code/installed/x64-windows-release" "-DENABLE_PYLINT=OFF" "-DENABLE_FLAKE8=OFF" "-DCMAKE_DISABLE_FIND_PACKAGE_Git=ON" "-DCMAKE_DISABLE_FIND_PACKAGE_JNI=ON" "-DENABLE_CXX11=ON" "-DOPENCV_DOWNLOAD_PATH=E:/code/downloads/opencv-cache" 
"-DOPENCV_EXTRA_MODULES_PATH=E:/code/buildtrees/opencv4/src/4.10.0-64012148a7.clean/modules" "-DOPENCV_OTHER_INSTALL_PATH=share/opencv4" "-DWITH_ADE=ON" "-DBUILD_opencv_calib3d=ON" "-DWITH_CONTRIB=ON" "-DWITH_CUBLAS=ON" "-DWITH_CUDA=ON" "-DENABLE_CUDA_FIRST_CLASS_LANGUAGE=ON" "-DWITH_CUDNN=ON" "-DWITH_1394=OFF" "-DBUILD_opencv_dnn=ON" "-DPROTOBUF_UPDATE_FILES=ON" "-DUPDATE_PROTO_FILES=ON" "-DWITH_PROTOBUF=ON" "-DOPENCV_DNN_CUDA=ON" "-DWITH_DSHOW=ON" "-DWITH_EIGEN=ON" "-DWITH_FFMPEG=ON" "-DWITH_FREETYPE=ON" "-DBUILD_opencv_gapi=ON" "-DWITH_GDCM=ON" "-DWITH_GSTREAMER=ON" "-DWITH_GTK=OFF" "-DWITH_HALIDE=OFF" "-DWITH_IPP=ON" "-DBUILD_IPP_IW=ON" "-DBUILD_opencv_highgui=ON" "-DCV_ENABLE_INTRINSICS=ON" "-DWITH_JASPER=ON" "-DWITH_OPENJPEG=ON" "-DWITH_OPENMP=ON" "-DWITH_JPEG=ON" "-DWITH_LAPACK=OFF" "-DDOPENCV_LAPACK_FIND_PACKAGE_ONLY=OFF" "-DWITH_MSMF=OFF" "-DOPENCV_ENABLE_NONFREE=ON" "-DOPENCV_ENABLE_FILESYSTEM_SUPPORT=ON" "-DOPENCV_ENABLE_THREAD_SUPPORT=ON" "-DWITH_OPENCL=ON" "-DWITH_OPENVINO=OFF" "-DWITH_OPENEXR=OFF" "-DWITH_OPENGL=ON" "-DCMAKE_REQUIRE_FIND_PACKAGE_OGRE=ON" "-DBUILD_opencv_ovis=ON" "-DWITH_PNG=ON" "-DBUILD_opencv_python3=ON" "-DWITH_PYTHON=ON" "-DBUILD_opencv_quality=OFF" "-DWITH_QUIRC=ON" "-DBUILD_opencv_rgbd=OFF" "-DBUILD_opencv_sfm=ON" "-DWITH_TBB=ON" "-DWITH_TIFF=ON" "-DWITH_VTK=ON" "-DWITH_VULKAN=ON" "-DWITH_WEBP=ON" "-DWITH_WIN32UI=ON" "-DBUILD_opencv_world=OFF" "-DWITH_QT=6" "-DWITH_MATLAB=OFF" "-DWITH_OPENJPEG=OFF" "-DWITH_CPUFEATURES=OFF" "-DWITH_SPNG=OFF" "-DWITH_OPENCLAMDFFT=OFF" "-DWITH_OPENCLAMDBLAS=OFF" "-DWITH_OPENCL_D3D11_NV=OFF" "-DWITH_ITT=OFF" "-DWITH_NVCUVID=OFF" "-DWITH_NVCUVENC=OFF" "-DWITH_AVIF=OFF" "-DWITH_VA=OFF" "-DWITH_VA_INTEL=OFF" "-DWITH_OBSENSOR=OFF" "-DBUILD_opencv_quality=OFF" "-DBUILD_opencv_rgbd=OFF" "-DOPENCV_LAPACK_SHARED_LIBS=ON" "-DOPENCV_DISABLE_FILESYSTEM_SUPPORT=" "-DCV_ENABLE_INTRINSICS=ON" "-DCMAKE_AUTOMOC=ON" "-DCMAKE_MAKE_PROGRAM=E:/code/downloads/tools/ninja/1.10.2-windows/ninja.exe" "-DBUILD_SHARED_LIBS=ON" 
"-DVCPKG_CHAINLOAD_TOOLCHAIN_FILE=E:/code/scripts/toolchains/windows.cmake" "-DVCPKG_TARGET_TRIPLET=x64-windows-release" "-DVCPKG_SET_CHARSET_FLAG=ON" "-DVCPKG_PLATFORM_TOOLSET=v143" "-DCMAKE_EXPORT_NO_PACKAGE_REGISTRY=ON" "-DCMAKE_FIND_PACKAGE_NO_PACKAGE_REGISTRY=ON" "-DCMAKE_FIND_PACKAGE_NO_SYSTEM_PACKAGE_REGISTRY=ON" "-DCMAKE_INSTALL_SYSTEM_RUNTIME_LIBS_SKIP=TRUE" "-DCMAKE_VERBOSE_MAKEFILE=ON" "-DVCPKG_APPLOCAL_DEPS=OFF" "-DCMAKE_TOOLCHAIN_FILE=E:/code/scripts/buildsystems/vcpkg.cmake" "-DCMAKE_ERROR_ON_ABSOLUTE_INSTALL_DESTINATION=ON" "-DVCPKG_CXX_FLAGS=" "-DVCPKG_CXX_FLAGS_RELEASE=" "-DVCPKG_CXX_FLAGS_DEBUG=" "-DVCPKG_C_FLAGS=" "-DVCPKG_C_FLAGS_RELEASE=" "-DVCPKG_C_FLAGS_DEBUG=" "-DVCPKG_CRT_LINKAGE=dynamic" "-DVCPKG_LINKER_FLAGS=" "-DVCPKG_LINKER_FLAGS_RELEASE=" "-DVCPKG_LINKER_FLAGS_DEBUG=" "-DVCPKG_TARGET_ARCHITECTURE=x64" "-DCMAKE_INSTALL_LIBDIR:STRING=lib" "-DCMAKE_INSTALL_BINDIR:STRING=bin" "-D_VCPKG_ROOT_DIR=E:/code/vcpkg_cenit" "-D_VCPKG_INSTALLED_DIR=E:/code/installed" "-DVCPKG_MANIFEST_INSTALL=OFF" "-D__INSTALL_PATH_PYTHON3=E:/code/packages/opencv4_x64-windows-release/tools/python3/Lib/site-packages/cv2" "-DOPENCV_PYTHON_INSTALL_PATH=E:/code/packages/opencv4_x64-windows-release/tools/python3/Lib/site-packages"
CMake Warning (dev) at CMakeLists.txt:127 (enable_language):
project() should be called prior to this enable_language() call.
This warning is for project developers. Use -Wno-dev to suppress it.
-- The CXX compiler identification is MSVC 19.41.34123.0
-- The C compiler identification is MSVC 19.41.34123.0
CMake Warning (dev) at E:/code/downloads/tools/cmake-3.30.1-windows/cmake-3.30.1-windows-i386/share/cmake-3.30/Modules/Platform/Windows-MSVC.cmake:539 (enable_language):
project() should be called prior to this enable_language() call.
Call Stack (most recent call first):
E:/code/downloads/tools/cmake-3.30.1-windows/cmake-3.30.1-windows-i386/share/cmake-3.30/Modules/Platform/Windows-MSVC.cmake:509 (__windows_compiler_msvc_enable_rc)
E:/code/downloads/tools/cmake-3.30.1-windows/cmake-3.30.1-windows-i386/share/cmake-3.30/Modules/Platform/Windows-MSVC-CXX.cmake:6 (__windows_compiler_msvc)
E:/code/downloads/tools/cmake-3.30.1-windows/cmake-3.30.1-windows-i386/share/cmake-3.30/Modules/CMakeCXXInformation.cmake:48 (include)
CMakeLists.txt:127 (enable_language)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: E:/VisualStudio/2022_Professional/VC/Tools/MSVC/14.41.34120/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Warning (dev) at E:/code/downloads/tools/cmake-3.30.1-windows/cmake-3.30.1-windows-i386/share/cmake-3.30/Modules/Platform/Windows-MSVC.cmake:539 (enable_language):
project() should be called prior to this enable_language() call.
Call Stack (most recent call first):
E:/code/downloads/tools/cmake-3.30.1-windows/cmake-3.30.1-windows-i386/share/cmake-3.30/Modules/Platform/Windows-MSVC.cmake:509 (__windows_compiler_msvc_enable_rc)
E:/code/downloads/tools/cmake-3.30.1-windows/cmake-3.30.1-windows-i386/share/cmake-3.30/Modules/Platform/Windows-MSVC-C.cmake:5 (__windows_compiler_msvc)
E:/code/downloads/tools/cmake-3.30.1-windows/cmake-3.30.1-windows-i386/share/cmake-3.30/Modules/CMakeCInformation.cmake:48 (include)
CMakeLists.txt:127 (enable_language)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: E:/VisualStudio/2022_Professional/VC/Tools/MSVC/14.41.34120/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- ocv_init_download: OpenCV source tree is not fetched as git repository. 3rdparty resources will be downloaded from github.com by default.
-- Detected processor: AMD64
-- Found PythonInterp: E:/code/buildtrees/opencv4/x64-windows-release-venv/Scripts/python.exe (found suitable version "3.12.7", minimum required is "3.2")
-- Could NOT find PythonLibs (missing: PYTHON_LIBRARIES PYTHON_INCLUDE_DIRS) (Required is exact version "3.12.7")
-- Performing Test HAVE_CXX_FP:PRECISE
-- Performing Test HAVE_CXX_FP:PRECISE - Success
-- Performing Test HAVE_C_FP:PRECISE
-- Performing Test HAVE_C_FP:PRECISE - Success
-- Performing Test HAVE_CPU_SSE3_SUPPORT (check file: cmake/checks/cpu_sse3.cpp)
-- Performing Test HAVE_CPU_SSE3_SUPPORT - Success
-- Performing Test HAVE_CPU_SSSE3_SUPPORT (check file: cmake/checks/cpu_ssse3.cpp)
-- Performing Test HAVE_CPU_SSSE3_SUPPORT - Success
-- Performing Test HAVE_CPU_SSE4_1_SUPPORT (check file: cmake/checks/cpu_sse41.cpp)
-- Performing Test HAVE_CPU_SSE4_1_SUPPORT - Success
-- Performing Test HAVE_CPU_POPCNT_SUPPORT (check file: cmake/checks/cpu_popcnt.cpp)
-- Performing Test HAVE_CPU_POPCNT_SUPPORT - Success
-- Performing Test HAVE_CPU_SSE4_2_SUPPORT (check file: cmake/checks/cpu_sse42.cpp)
-- Performing Test HAVE_CPU_SSE4_2_SUPPORT - Success
-- Performing Test HAVE_CXX_ARCH:AVX (check file: cmake/checks/cpu_fp16.cpp)
-- Performing Test HAVE_CXX_ARCH:AVX - Success
-- Performing Test HAVE_CXX_ARCH:AVX2 (check file: cmake/checks/cpu_avx2.cpp)
-- Performing Test HAVE_CXX_ARCH:AVX2 - Success
-- Performing Test HAVE_CXX_ARCH:AVX512 (check file: cmake/checks/cpu_avx512.cpp)
-- Performing Test HAVE_CXX_ARCH:AVX512 - Success
-- Performing Test HAVE_CPU_BASELINE_FLAGS
-- Performing Test HAVE_CPU_BASELINE_FLAGS - Success
-- Performing Test HAVE_CPU_DISPATCH_FLAGS_SSE4_1
-- Performing Test HAVE_CPU_DISPATCH_FLAGS_SSE4_1 - Success
-- Performing Test HAVE_CPU_DISPATCH_FLAGS_SSE4_2
-- Performing Test HAVE_CPU_DISPATCH_FLAGS_SSE4_2 - Success
-- Performing Test HAVE_CPU_DISPATCH_FLAGS_FP16
-- Performing Test HAVE_CPU_DISPATCH_FLAGS_FP16 - Success
-- Performing Test HAVE_CPU_DISPATCH_FLAGS_AVX
-- Performing Test HAVE_CPU_DISPATCH_FLAGS_AVX - Success
-- Performing Test HAVE_CPU_DISPATCH_FLAGS_AVX2
-- Performing Test HAVE_CPU_DISPATCH_FLAGS_AVX2 - Success
-- Performing Test HAVE_CPU_DISPATCH_FLAGS_AVX512_SKX
-- Performing Test HAVE_CPU_DISPATCH_FLAGS_AVX512_SKX - Success
-- Performing Test HAVE_CXX_W15240
-- Performing Test HAVE_CXX_W15240 - Success
-- Performing Test HAVE_C_W15240
-- Performing Test HAVE_C_W15240 - Success
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/bin/nvcc.exe
-- The CUDA compiler identification is NVIDIA 12.6.20
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/bin/nvcc.exe - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Looking for malloc.h
-- Looking for malloc.h - found
-- Looking for _aligned_malloc
-- Looking for _aligned_malloc - found
-- Found OpenMP_C: -openmp (found version "2.0")
-- Found OpenMP_CXX: -openmp (found version "2.0")
-- Found OpenMP: TRUE (found version "2.0")
-- Found ZLIB: E:/code/installed/x64-windows-release/lib/zlib.lib (found suitable version "1.3.1", minimum required is "1.2.3")
-- Found JPEG: E:/code/installed/x64-windows-release/lib/jpeg.lib (found version "62")
-- Found TIFF: E:/code/installed/x64-windows-release/lib/tiff.lib (found version "4.7.0")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - not found
-- Found Threads: TRUE
-- Found Jasper: E:/code/installed/x64-windows-release/lib/jasper.lib (found version "4.2.4")
-- Found ZLIB: E:/code/installed/x64-windows-release/lib/zlib.lib (found version "1.3.1")
-- Found PNG: E:/code/installed/x64-windows-release/lib/libpng16.lib (found version "1.6.43")
-- Performing Test HAVE_STDATOMIC
-- Performing Test HAVE_STDATOMIC - Success
-- Found WrapAtomic: TRUE
-- Found TBB (cmake): E:/code/installed/x64-windows-release/bin/tbb12.dll
-- IPPICV: Downloading ippicv_2021.11.0_win_intel64_20240201_general.zip from https://raw.githubusercontent.com/opencv/opencv_3rdparty/fd27188235d85e552de31425e7ea0f53ba73ba53/ippicv/ippicv_2021.11.0_win_intel64_20240201_general.zip
-- found Intel IPP (ICV version): 2021.11.0 [2021.11.0]
-- at: E:/code/buildtrees/opencv4/x64-windows-release-rel/3rdparty/ippicv/ippicv_win/icv
-- found Intel IPP Integration Wrappers sources: 2021.11.0
-- at: E:/code/buildtrees/opencv4/x64-windows-release-rel/3rdparty/ippicv/ippicv_win/iw
-- Found CUDAToolkit: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/include (found version "12.6.20")
-- Found CUDNN: C:/Program Files/NVIDIA/CUDNN/v9.3/include/12.6 (Required is at least version "7.5")
-- CUDA: NVCC target flags -D_FORCE_INLINES
-- Found Protobuf: E:/code/installed/x64-windows-release/tools/protobuf/protoc.exe (found version "25.1.0")
-- Searching for PEGTL
-- Searching for PEGTL - found target taocpp::pegtl
CMake Warning (dev) at E:/code/downloads/tools/cmake-3.30.1-windows/cmake-3.30.1-windows-i386/share/cmake-3.30/Modules/FindPackageHandleStandardArgs.cmake:441 (message):
The package name passed to `find_package_handle_standard_args` (NetCDF)
does not match the name of the calling package (netCDF). This can lead to
problems in calling code that expects `find_package` result variables
(e.g., `_FOUND`) to follow a certain pattern.
Call Stack (most recent call first):
E:/code/installed/x64-windows-release/share/vtk/FindNetCDF.cmake:33 (find_package_handle_standard_args)
E:/code/scripts/buildsystems/vcpkg.cmake:859 (_find_package)
E:/code/downloads/tools/cmake-3.30.1-windows/cmake-3.30.1-windows-i386/share/cmake-3.30/Modules/CMakeFindDependencyMacro.cmake:76 (find_package)
E:/code/installed/x64-windows-release/share/external_packages/TPL-Seacas-Netcdf/TPL-Seacas-NetcdfConfig.cmake:4 (find_dependency)
E:/code/installed/x64-windows-release/share/cmake/SEACASExodus/SEACASExodusConfig.cmake:163 (include)
E:/code/installed/x64-windows-release/share/cmake/SEACASIoss/SEACASIossConfig.cmake:174 (include)
E:/code/scripts/buildsystems/vcpkg.cmake:859 (_find_package)
E:/code/installed/x64-windows-release/share/vtk/VTK-vtk-module-find-packages.cmake:685 (find_package)
E:/code/installed/x64-windows-release/share/vtk/vtk-config.cmake:162 (include)
E:/code/scripts/buildsystems/vcpkg.cmake:859 (_find_package)
cmake/OpenCVDetectVTK.cmake:2 (find_package)
CMakeLists.txt:930 (include)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found NetCDF: E:/code/installed/x64-windows-release/include (found version "4.8.1")
-- Found LibLZMA: E:/code/installed/x64-windows-release/lib/lzma.lib (found version "5.6.3")
-- Found VTK 9.3.20231030
-- Found FFMPEG: E:/code/installed/x64-windows-release/lib/avdevice.lib;E:/code/installed/x64-windows-release/lib/avfilter.lib;E:/code/installed/x64-windows-release/lib/avformat.lib;E:/code/installed/x64-windows-release/lib/avcodec.lib;E:/code/installed/x64-windows-release/lib/swresample.lib;E:/code/installed/x64-windows-release/lib/swscale.lib;E:/code/installed/x64-windows-release/lib/avutil.lib
-- freetype2: YES (ver )
-- harfbuzz: YES (ver )
-- Found HDF5: hdf5::hdf5-shared (found version "1.14.4")
-- Julia not found. Not compiling Julia Bindings.
-- Found PkgConfig: E:/code/installed/x64-windows-release/tools/pkgconf/pkgconf.exe (found version "2.3.0")
-- Checking for module 'libraw'
-- Found libraw, version 0.21.3
-- Checking for module 'libraw_r'
-- Found libraw_r, version 0.21.3
-- Checking SFM glog/gflags deps... TRUE
-- Found ZLIB: E:/code/installed/x64-windows-release/lib/zlib.lib (found suitable version "1.3.1", minimum required is "1")
-- Found LibArchive: E:/code/installed/x64-windows-release/lib/archive.lib (found version "3.7.7")
-- Tesseract: YES (ver 5.4.1)
-- Allocator metrics storage type: 'long long'
-- Excluding from source files list: modules/imgproc/src/imgwarp.lasx.cpp
-- Excluding from source files list: modules/imgproc/src/resize.lasx.cpp
-- Registering hook 'INIT_MODULE_SOURCES_opencv_dnn': E:/code/buildtrees/opencv4/src/4.10.0-c4b3b284f0.clean/modules/dnn/cmake/hooks/INIT_MODULE_SOURCES_opencv_dnn.cmake
-- Excluding from source files list: modules/dnn/src/layers/cpu_kernels/conv_winograd_f63.neon.cpp
-- Excluding from source files list: <BUILD>/modules/dnn/layers/layers_common.rvv.cpp
-- Excluding from source files list: <BUILD>/modules/dnn/layers/layers_common.lasx.cpp
-- Excluding from source files list: <BUILD>/modules/dnn/int8layers/layers_common.rvv.cpp
-- Excluding from source files list: <BUILD>/modules/dnn/int8layers/layers_common.lasx.cpp
-- Excluding from source files list: <BUILD>/modules/dnn/layers/cpu_kernels/conv_block.neon.cpp
-- Excluding from source files list: <BUILD>/modules/dnn/layers/cpu_kernels/conv_block.neon_fp16.cpp
-- Excluding from source files list: <BUILD>/modules/dnn/layers/cpu_kernels/conv_depthwise.rvv.cpp
-- Excluding from source files list: <BUILD>/modules/dnn/layers/cpu_kernels/conv_depthwise.lasx.cpp
-- Excluding from source files list: <BUILD>/modules/dnn/layers/cpu_kernels/conv_winograd_f63.neon_fp16.cpp
-- Excluding from source files list: <BUILD>/modules/dnn/layers/cpu_kernels/fast_gemm_kernels.neon.cpp
-- Excluding from source files list: <BUILD>/modules/dnn/layers/cpu_kernels/fast_gemm_kernels.lasx.cpp
-- imgcodecs: Jasper codec is disabled in runtime. Details: https://github.com/opencv/opencv/issues/14058
-- highgui: using builtin backend: QT6
-- wechat_qrcode: Downloading detect.caffemodel from https://raw.githubusercontent.com/WeChatCV/opencv_3rdparty/a8b69ccc738421293254aec5ddb38bd523503252/detect.caffemodel
-- wechat_qrcode: Downloading detect.prototxt from https://raw.githubusercontent.com/WeChatCV/opencv_3rdparty/a8b69ccc738421293254aec5ddb38bd523503252/detect.prototxt
-- wechat_qrcode: Downloading sr.caffemodel from https://raw.githubusercontent.com/WeChatCV/opencv_3rdparty/a8b69ccc738421293254aec5ddb38bd523503252/sr.caffemodel
-- wechat_qrcode: Downloading sr.prototxt from https://raw.githubusercontent.com/WeChatCV/opencv_3rdparty/a8b69ccc738421293254aec5ddb38bd523503252/sr.prototxt
-- xfeatures2d/boostdesc: Downloading boostdesc_bgm.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_bgm.i
-- xfeatures2d/boostdesc: Downloading boostdesc_bgm_bi.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_bgm_bi.i
-- xfeatures2d/boostdesc: Downloading boostdesc_bgm_hd.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_bgm_hd.i
-- xfeatures2d/boostdesc: Downloading boostdesc_binboost_064.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_binboost_064.i
-- xfeatures2d/boostdesc: Downloading boostdesc_binboost_128.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_binboost_128.i
-- xfeatures2d/boostdesc: Downloading boostdesc_binboost_256.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_binboost_256.i
-- xfeatures2d/boostdesc: Downloading boostdesc_lbgm.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/34e4206aef44d50e6bbcd0ab06354b52e7466d26/boostdesc_lbgm.i
-- xfeatures2d/vgg: Downloading vgg_generated_48.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/fccf7cd6a4b12079f73bbfb21745f9babcd4eb1d/vgg_generated_48.i
-- xfeatures2d/vgg: Downloading vgg_generated_64.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/fccf7cd6a4b12079f73bbfb21745f9babcd4eb1d/vgg_generated_64.i
-- xfeatures2d/vgg: Downloading vgg_generated_80.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/fccf7cd6a4b12079f73bbfb21745f9babcd4eb1d/vgg_generated_80.i
-- xfeatures2d/vgg: Downloading vgg_generated_120.i from https://raw.githubusercontent.com/opencv/opencv_3rdparty/fccf7cd6a4b12079f73bbfb21745f9babcd4eb1d/vgg_generated_120.i
-- data: Downloading face_landmark_model.dat from https://raw.githubusercontent.com/opencv/opencv_3rdparty/8afa57abc8229d611c4937165d20e2a2d9fc5a12/face_landmark_model.dat
-- NVIDIA_OPTICAL_FLOW: Downloading edb50da3cf849840d680249aa6dbef248ebce2ca.zip from https://github.com/NVIDIA/NVIDIAOpticalFlowSDK/archive/edb50da3cf849840d680249aa6dbef248ebce2ca.zip
-- Building with NVIDIA Optical Flow API 2.0
-- Found 'misc' Python modules from E:/code/buildtrees/opencv4/src/4.10.0-c4b3b284f0.clean/modules/python/package/extra_modules
-- Found 'mat_wrapper;utils' Python modules from E:/code/buildtrees/opencv4/src/4.10.0-c4b3b284f0.clean/modules/core/misc/python/package
-- Found 'gapi' Python modules from E:/code/buildtrees/opencv4/src/4.10.0-c4b3b284f0.clean/modules/gapi/misc/python/package
--
-- General configuration for OpenCV 4.10.0 =====================================
-- Version control: unknown
--
-- Extra modules:
-- Location (extra): E:/code/buildtrees/opencv4/src/4.10.0-64012148a7.clean/modules
-- Version control (extra): unknown
--
-- Platform:
-- Timestamp: 2024-11-06T13:50:26Z
-- Host: Windows 10.0.19045 AMD64
-- CMake: 3.30.1
-- CMake generator: Ninja
-- CMake build tool: E:/code/downloads/tools/ninja/1.10.2-windows/ninja.exe
-- MSVC: 1941
-- Configuration: Release
--
-- CPU/HW features:
-- Baseline: SSE SSE2 SSE3
-- requested: SSE3
-- Dispatched code generation: SSE4_1 SSE4_2 FP16 AVX AVX2 AVX512_SKX
-- requested: SSE4_1 SSE4_2 AVX FP16 AVX2 AVX512_SKX
-- SSE4_1 (16 files): + SSSE3 SSE4_1
-- SSE4_2 (1 files): + SSSE3 SSE4_1 POPCNT SSE4_2
-- FP16 (0 files): + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 AVX
-- AVX (8 files): + SSSE3 SSE4_1 POPCNT SSE4_2 AVX
-- AVX2 (36 files): + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 FMA3 AVX AVX2
-- AVX512_SKX (5 files): + SSSE3 SSE4_1 POPCNT SSE4_2 FP16 FMA3 AVX AVX2 AVX_512F AVX512_COMMON AVX512_SKX
--
-- C/C++:
-- Built as dynamic libs?: YES
-- C++ standard: 17
-- C++ Compiler: E:/VisualStudio/2022_Professional/VC/Tools/MSVC/14.41.34120/bin/Hostx64/x64/cl.exe (ver 19.41.34123.0)
-- C++ flags (Release): /nologo /DWIN32 /D_WINDOWS /utf-8 /GR /MP /D _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D _SCL_SECURE_NO_WARNINGS /Gy /bigobj /Oi /fp:precise /FS /EHa /wd4127 /wd4251 /wd4324 /wd4275 /wd4512 /wd4589 /wd4819 -openmp /MD /O2 /Oi /Gy /DNDEBUG /Z7
-- C++ flags (Debug): /nologo /DWIN32 /D_WINDOWS /utf-8 /GR /MP /D _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D _SCL_SECURE_NO_WARNINGS /Gy /bigobj /Oi /fp:precise /FS /EHa /wd4127 /wd4251 /wd4324 /wd4275 /wd4512 /wd4589 /wd4819 -openmp /D_DEBUG /MDd /Z7 /Ob0 /Od /RTC1
-- C Compiler: E:/VisualStudio/2022_Professional/VC/Tools/MSVC/14.41.34120/bin/Hostx64/x64/cl.exe
-- C flags (Release): /nologo /DWIN32 /D_WINDOWS /utf-8 /MP /D _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D _SCL_SECURE_NO_WARNINGS /Gy /bigobj /Oi /fp:precise /FS -openmp /MD /O2 /Oi /Gy /DNDEBUG /Z7
-- C flags (Debug): /nologo /DWIN32 /D_WINDOWS /utf-8 /MP /D _CRT_SECURE_NO_DEPRECATE /D _CRT_NONSTDC_NO_DEPRECATE /D _SCL_SECURE_NO_WARNINGS /Gy /bigobj /Oi /fp:precise /FS -openmp /D_DEBUG /MDd /Z7 /Ob0 /Od /RTC1
-- Linker flags (Release): /machine:x64 /nologo /DEBUG /INCREMENTAL:NO /OPT:REF /OPT:ICF /debug
-- Linker flags (Debug): /machine:x64 /nologo /debug /INCREMENTAL
-- ccache: NO
-- Precompiled headers: NO
-- Extra dependencies: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/cudart_static.lib C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/nppial.lib C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/nppc.lib C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/nppitc.lib C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/nppig.lib C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/nppist.lib C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/nppidei.lib C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/cublas.lib C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/cublasLt.lib C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/cufft.lib C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/nppicc.lib C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/nppif.lib C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.6/lib/x64/nppim.lib
-- 3rdparty dependencies:
--
-- OpenCV modules:
-- To be built: alphamat aruco bgsegm bioinspired calib3d ccalib core cudaarithm cudabgsegm cudacodec cudafeatures2d cudafilters cudaimgproc cudalegacy cudaobjdetect cudaoptflow cudastereo cudawarping cudev cvv datasets dnn dnn_objdetect dnn_superres dpm face features2d flann freetype fuzzy gapi hdf hfs highgui img_hash imgcodecs imgproc intensity_transform line_descriptor mcc ml objdetect optflow ovis phase_unwrapping photo plot rapid reg saliency sfm shape signal stereo stitching structured_light superres surface_matching text tracking video videoio videostab viz wechat_qrcode xfeatures2d ximgproc xobjdetect xphoto
-- Disabled: quality rgbd world
-- Disabled by dependency: -
-- Unavailable: cannops java julia matlab python2 python3 ts
-- Applications: -
-- Documentation: NO
-- Non-free algorithms: YES
--
-- Windows RT support: NO
--
-- GUI: QT6
-- QT: YES (ver 6.7.3 )
-- QT OpenGL support: YES (Qt6::OpenGL )
-- Win32 UI: YES
-- OpenGL support: YES (opengl32 glu32)
-- VTK support: YES (ver 9.3.20231030)
--
-- Media I/O:
-- ZLib: E:/code/installed/x64-windows-release/lib/zlib.lib (ver 1.3.1)
-- JPEG: E:/code/installed/x64-windows-release/lib/jpeg.lib (ver 62)
-- WEBP: (ver 1.4.0)
-- PNG: E:/code/installed/x64-windows-release/lib/libpng16.lib (ver 1.6.43)
-- TIFF: E:/code/installed/x64-windows-release/lib/tiff.lib (ver 42 / 4.7.0)
-- JPEG 2000: E:/code/installed/x64-windows-release/lib/jasper.lib (ver 4.2.4)
-- GDCM: YES (3.0.24)
-- HDR: YES
-- SUNRASTER: YES
-- PXM: YES
-- PFM: YES
--
-- Video I/O:
-- FFMPEG: YES (find_package)
-- avcodec: YES (61.19.100)
-- avformat: YES (61.7.100)
-- avutil: YES (59.39.100)
-- swscale: YES (8.3.100)
-- avresample: NO
-- GStreamer: YES (1.24.7)
-- DirectShow: YES
--
-- Parallel framework: TBB (ver 2022.0 interface 12140)
--
-- Trace: YES (built-in)
--
-- Other third-party libraries:
-- Intel IPP: 2021.11.0 [2021.11.0]
-- at: E:/code/buildtrees/opencv4/x64-windows-release-rel/3rdparty/ippicv/ippicv_win/icv
-- Intel IPP IW: sources (2021.11.0)
-- at: E:/code/buildtrees/opencv4/x64-windows-release-rel/3rdparty/ippicv/ippicv_win/iw
-- Eigen: YES (ver 3.4.0)
-- Custom HAL: NO
-- Protobuf: optimized E:/code/installed/x64-windows-release/bin/libprotobuf.dll debug __location_debug-NOTFOUND version (25.1.0)
-- Flatbuffers: 23.5.26
--
-- NVIDIA CUDA: YES (ver 12.6.20, CUFFT CUBLAS)
-- NVIDIA GPU arch: 50 52 60 61 70 75 80 86 89 90
-- NVIDIA PTX archs: 90
--
-- cuDNN: YES (ver )
--
-- Vulkan: YES
-- Include path: E:/code/buildtrees/opencv4/src/4.10.0-c4b3b284f0.clean/3rdparty/include
-- Link libraries: Dynamic load
--
-- OpenCL: YES (no extra features)
-- Include path: E:/code/buildtrees/opencv4/src/4.10.0-c4b3b284f0.clean/3rdparty/include/opencl/1.2
-- Link libraries: Dynamic load
--
-- Python 3:
-- Interpreter: E:/code/buildtrees/opencv4/x64-windows-release-venv/Scripts/python.exe (ver 3.12.7)
-- Libraries: NO
-- Limited API: NO
-- numpy: E:/code/buildtrees/opencv4/x64-windows-release-venv/Lib/site-packages/numpy/_core/include (ver 2.1.3)
-- install path: -
--
-- Python (for build): E:/code/buildtrees/opencv4/x64-windows-release-venv/Scripts/python.exe
--
-- Install to: E:/code/packages/opencv4_x64-windows-release
-- -----------------------------------------------------------------
--
-- Verifying WITH_1394=OFF => 'HAVE_DC1394_2'=FALSE
-- Verifying WITH_AVFOUNDATION= => 'HAVE_AVFOUNDATION'=FALSE
-- Verifying WITH_AVIF=OFF => 'HAVE_AVIF'=FALSE
-- Verifying WITH_CAP_IOS= => 'HAVE_CAP_IOS'=FALSE
-- Verifying WITH_CPUFEATURES=OFF => 'HAVE_CPUFEATURES'=FALSE
-- Verifying WITH_VTK=ON => 'HAVE_VTK'=TRUE
-- Verifying WITH_CUDA=ON => 'HAVE_CUDA'=TRUE
-- Verifying WITH_CUFFT=ON => 'HAVE_CUFFT'=TRUE
-- Verifying WITH_CUBLAS=ON => 'HAVE_CUBLAS'=TRUE
-- Verifying WITH_CUDNN=ON => 'HAVE_CUDNN'=TRUE
-- Verifying WITH_NVCUVID=OFF => 'HAVE_NVCUVID'=FALSE
-- Verifying WITH_NVCUVENC=OFF => 'HAVE_NVCUVENC'=FALSE
-- Verifying WITH_EIGEN=ON => 'HAVE_EIGEN'=TRUE
-- Verifying WITH_FFMPEG=ON => 'HAVE_FFMPEG'=TRUE
-- Verifying WITH_GSTREAMER=ON => 'HAVE_GSTREAMER;AND;GSTREAMER_VERSION;VERSION_GREATER;0.99'=TRUE
-- Verifying WITH_GTK=OFF => 'HAVE_GTK'=FALSE
-- Verifying WITH_GTK_2_X= => 'HAVE_GTK;AND;NOT;HAVE_GTK3'=FALSE
-- Verifying WITH_WAYLAND= => 'HAVE_WAYLAND'=FALSE
-- Verifying WITH_IPP=ON => 'HAVE_IPP'=TRUE
-- Verifying WITH_HALIDE=OFF => 'HAVE_HALIDE'=FALSE
-- Verifying WITH_VULKAN=ON => 'HAVE_VULKAN'=TRUE
-- Verifying WITH_OPENVINO=OFF => 'TARGET;ocv.3rdparty.openvino'=FALSE
-- Verifying WITH_WEBNN=OFF => 'HAVE_WEBNN'=FALSE
-- Verifying WITH_JASPER=ON => 'HAVE_JASPER'=TRUE
-- Verifying WITH_OPENJPEG=OFF => 'HAVE_OPENJPEG'=FALSE
-- Verifying WITH_JPEG=ON => 'HAVE_JPEG'=TRUE
-- Verifying WITH_WEBP=ON => 'HAVE_WEBP'=TRUE
-- Verifying WITH_OPENEXR=OFF => 'HAVE_OPENEXR'=FALSE
-- Verifying WITH_OPENGL=ON => 'HAVE_OPENGL'=TRUE
-- Verifying WITH_OPENVX=OFF => 'HAVE_OPENVX'=FALSE
-- Verifying WITH_OPENNI=OFF => 'HAVE_OPENNI'=FALSE
-- Verifying WITH_OPENNI2=OFF => 'HAVE_OPENNI2'=FALSE
-- Verifying WITH_PNG=ON => 'HAVE_PNG'=TRUE
-- Verifying WITH_SPNG=OFF => 'HAVE_SPNG'=FALSE
-- Verifying WITH_GDCM=ON => 'HAVE_GDCM'=TRUE
-- Verifying WITH_PVAPI=OFF => 'HAVE_PVAPI'=FALSE
-- Verifying WITH_ARAVIS= => 'HAVE_ARAVIS_API'=FALSE
-- Verifying WITH_QT=6 => 'HAVE_QT'=TRUE
-- Verifying WITH_WIN32UI=ON => 'HAVE_WIN32UI'=TRUE
-- Verifying WITH_TBB=ON => 'HAVE_TBB'=TRUE
-- Verifying WITH_HPX=OFF => 'HAVE_HPX'=FALSE
-- Verifying WITH_OPENMP=ON => 'HAVE_OPENMP'=TRUE
-- Verifying WITH_PTHREADS_PF= => 'HAVE_PTHREADS_PF'=FALSE
-- Verifying WITH_TIFF=ON => 'HAVE_TIFF'=TRUE
-- Verifying WITH_V4L= => 'HAVE_CAMV4L;OR;HAVE_CAMV4L2;OR;HAVE_VIDEOIO'=FALSE
-- Verifying WITH_DSHOW=ON => 'HAVE_DSHOW'=TRUE
-- Verifying WITH_MSMF=OFF => 'HAVE_MSMF'=FALSE
-- Verifying WITH_MSMF_DXVA=OFF => 'HAVE_MSMF_DXVA'=FALSE
-- Verifying WITH_XIMEA=OFF => 'HAVE_XIMEA'=FALSE
-- Verifying WITH_UEYE=OFF => 'HAVE_UEYE'=FALSE
-- Verifying WITH_XINE= => 'HAVE_XINE'=FALSE
-- Verifying WITH_CLP=OFF => 'HAVE_CLP'=FALSE
-- Verifying WITH_OPENCL=ON => 'HAVE_OPENCL'=TRUE
-- Verifying WITH_OPENCL_SVM=OFF => 'HAVE_OPENCL_SVM'=FALSE
-- Verifying WITH_OPENCLAMDFFT=OFF => 'HAVE_CLAMDFFT'=FALSE
-- Verifying WITH_OPENCLAMDBLAS=OFF => 'HAVE_CLAMDBLAS'=FALSE
-- Verifying WITH_DIRECTX=ON => 'HAVE_DIRECTX'=TRUE
-- Verifying WITH_DIRECTML=ON => 'HAVE_DIRECTML'=TRUE
-- Verifying WITH_OPENCL_D3D11_NV=OFF => 'HAVE_OPENCL_D3D11_NV'=FALSE
-- Verifying WITH_LIBREALSENSE=OFF => 'HAVE_LIBREALSENSE'=FALSE
-- Verifying WITH_VA=OFF => 'HAVE_VA'=FALSE
-- Verifying WITH_VA_INTEL=OFF => 'HAVE_VA_INTEL'=FALSE
-- Verifying WITH_MFX=OFF => 'HAVE_MFX'=FALSE
-- Verifying WITH_GDAL=OFF => 'HAVE_GDAL'=FALSE
-- Verifying WITH_GPHOTO2= => 'HAVE_GPHOTO2'=FALSE
-- Verifying WITH_LAPACK=OFF => 'HAVE_LAPACK'=FALSE
-- Verifying WITH_ITT=OFF => 'HAVE_ITT'=FALSE
-- Verifying WITH_PROTOBUF=ON => 'HAVE_PROTOBUF'=TRUE
-- Verifying WITH_IMGCODEC_HDR=ON => 'HAVE_IMGCODEC_HDR'=TRUE
-- Verifying WITH_IMGCODEC_SUNRASTER=ON => 'HAVE_IMGCODEC_SUNRASTER'=TRUE
-- Verifying WITH_IMGCODEC_PXM=ON => 'HAVE_IMGCODEC_PXM'=TRUE
-- Verifying WITH_IMGCODEC_PFM=ON => 'HAVE_IMGCODEC_PFM'=TRUE
-- Verifying WITH_QUIRC=ON => 'HAVE_QUIRC'=TRUE
-- Verifying WITH_ANDROID_MEDIANDK= => 'HAVE_ANDROID_MEDIANDK'=FALSE
-- Verifying WITH_ANDROID_NATIVE_CAMERA= => 'HAVE_ANDROID_NATIVE_CAMERA'=FALSE
-- Verifying WITH_ONNX=OFF => 'HAVE_ONNX'=FALSE
-- Verifying WITH_TIMVX=OFF => 'HAVE_TIMVX'=FALSE
-- Verifying WITH_OBSENSOR=OFF => 'HAVE_OBSENSOR'=FALSE
-- Verifying WITH_CANN=OFF => 'HAVE_CANN'=FALSE
-- Verifying WITH_FLATBUFFERS=ON => 'HAVE_FLATBUFFERS'=TRUE
-- Verifying WITH_ZLIB_NG=OFF => 'HAVE_ZLIB_NG'=FALSE
-- Verifying ENABLE_CUDA_FIRST_CLASS_LANGUAGE=ON => 'HAVE_CUDA'=TRUE
-- Verifying WITH_TESSERACT=ON => 'HAVE_TESSERACT'=TRUE
-- Configuring done (127.7s)
CMake Warning:
Value of OPENCV_BUILD_INFO_STR contained a newline; truncating
-- Generating done (5.0s)
CMake Warning:
Value of OPENCV_BUILD_INFO_STR contained a newline; truncating
-- Build files have been written to: E:/code/buildtrees/opencv4/x64-windows-release-rel
```
but this is the build/install log, which fails immediately.
```
ninja: error: 'modules/core/opencl_kernels_core.hpp', needed by 'modules/core/CMakeFiles/opencv_core_autogen_timestamp_deps', missing and no known rule to make it
```
I am stuck. Any help in understanding what the issue might be would be appreciated. Disabling the `WITH_OPENCL` feature does not solve the problem.
### Steps to reproduce
```
git clone https://github.com/cenit/vcpkg
cd vcpkg
git checkout dev/cenit/opencv410
.\bootstrap-vcpkg.bat
.\vcpkg install vcpkg-ci-opencv --overlay-ports=.\scripts\test_ports\ --recurse --triplet=x64-windows-release --host-triplet=x64-windows-release
```
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: build/install | low | Critical |
2,638,289,608 | go | x/net/quic: Endpoint.Dial prefers IPv4 to IPv6 | ### Go version
go version go1.23.2 linux/amd64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/user/.cache/go-build'
GOENV='/home/user/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/user/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/user/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/local/go1.23.2.linux-amd64'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/local/go1.23.2.linux-amd64/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.2'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/user/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/home/user/dev/nspeed-client/go.mod'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build1098587137=/tmp/go-build -gno-record-gcc-switches'
```
### What did you do?
Create a `quic.Endpoint`, then call the `Dial` method with a name (not a literal IP) in the address parameter.
Here is a small POC:
````
package main
import (
"context"
"crypto/tls"
"fmt"
"net"
"golang.org/x/net/quic"
)
func main() {
	// an HTTP/3-enabled URL like google.com or cloudflare.com
address := "nspeed.app"
quicConf := &quic.Config{
TLSConfig: &tls.Config{
MinVersion: tls.VersionTLS13,
NextProtos: []string{"h3"},
ServerName: address,
},
}
// local endpoint
endp, err := quic.Listen("udp", ":0", nil)
if err != nil {
panic(err)
}
// connect to address on port 443
qconn, err := endp.Dial(context.Background(), "", address+":443", quicConf)
if err != nil {
panic(err)
}
	fmt.Println("connected") // currently there is no way to display the remote IP; use strace or a network capture
// unless you have https://github.com/golang/net/pull/225
// fmt.Println("connected to", qconn.RemoteAddr())
qconn.Close()
}
````
### What did you see happen?
The connection happens with IPv4
### What did you expect to see?
The connection should use IPv6 if it's available at the OS level.
This is a direct consequence of `net.ResolveUDPAddr` not preferring IPv6 (`net.ResolveTCPAddr` and `net.ResolveIPAddr` also have this issue)
The culprit is in [src/net/ipsock.go#L82](https://github.com/golang/go/blob/d6fb0ab2c7a13658fc808d431bbaf9c5f6b8da62/src/net/ipsock.go#L82) (`forResolve` function)
see also #28666
A workaround is to resolve the `address` parameter before calling `Endpoint.Dial`
`net.Dial` (tcp or udp) doesn't have this issue. | NeedsInvestigation | low | Critical |
2,638,289,843 | opencv | dnn Tflite module SQUARED_DIFFERENCE, SUM, MAXIMUM, SQRT operands support | ### Describe the feature and motivation
OpenCV 4.10.
Hello!
The team and I wrote a neural network with TensorFlow Lite, which uses the operands SQUARED_DIFFERENCE, SUM, MAXIMUM, and SQRT. We use it through the OpenCV dnn module, and we encountered the problem that these operands are not implemented.
Any help with implementing them?
### Additional context
_No response_ | feature,category: dnn | low | Minor |
2,638,320,239 | rust | Suggest swapping the equality when the message `can't compare` occurs | ### Code
```rust
// inspired by https://github.com/rust-lang/rust/issues/130495
struct T(String);
impl PartialEq<String> for T {
fn eq(&self, other: &String) -> bool {
&self.0 == other
}
}
fn main() {
String::from("123") == T(String::from("123"));
}
```
### Current output
```shell
error[E0277]: can't compare `String` with `T`
--> src/main.rs:11:25
|
11 | String::from("123") == T(String::from("123"));
| ^^ no implementation for `String == T`
|
= help: the trait `PartialEq<T>` is not implemented for `String`
= help: the following other types implement trait `PartialEq<Rhs>`:
`String` implements `PartialEq<&str>`
`String` implements `PartialEq<Cow<'_, str>>`
`String` implements `PartialEq<str>`
`String` implements `PartialEq`
For more information about this error, try `rustc --explain E0277`.
error: could not compile `playground` (bin "playground") due to 1 previous error
```
### Desired output
```shell
error[E0277]: can't compare `String` with `T`
--> src/main.rs:11:25
|
11 | String::from("123") == T(String::from("123"));
| ^^ no implementation for `String == T`
|
= help: the trait `PartialEq<T>` is not implemented for `String`
= help: the following other types implement trait `PartialEq<Rhs>`:
`String` implements `PartialEq<&str>`
`String` implements `PartialEq<Cow<'_, str>>`
`String` implements `PartialEq<str>`
`String` implements `PartialEq`
= note: `T` implements `PartialEq<String>`
help: consider swapping the equality
|
11 | T(String::from("123")) == String::from("123");
| ~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~
For more information about this error, try `rustc --explain E0277`.
error: could not compile `playground` (bin "playground") due to 1 previous error
```
### Rationale and extra context
In this case where `T` implements `PartialEq<String>`, swapping the equality is a good way to fix it.
Although #132404 has implemented this, it only covers the case where `error[E0308]: mismatched types` occurs.
### Other cases
_No response_
### Rust Version
1.84.0-nightly (2024-10-15 e7c0d2750726c1f08b1d) (playground)
### Anything else?
_No response_ | A-diagnostics,T-compiler,A-suggestion-diagnostics | low | Critical |
2,638,356,382 | PowerToys | Advanced Paste to folder | ### Description of the new feature / enhancement
The new advanced paste options are awesome! A great new feature would be to be able to paste as a folder, not just a .txt, .html. etc..
Basically, it would be awesome to copy text, then paste it as a folder whose name is the text you copied.
### Scenario when this would be used?
As an example, I could copy a ticket number and then paste it as a folder, or the name of an application, or whatever you want.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,638,356,415 | go | proposal: runtime: manage off-heap memory lifecycle using the garbage collector | ### Proposal Details
### Proposal
Hi folks, I'd like to explore the possibility for the runtime to 'adopt' externally-allocated memory by tracking pointers to the span and unmapping the underlying memory if there are no more references:
```go
// TrackPointers tells the garbage collector to consider the off-heap memory
// span described by ptr and size as Go memory. finalize is scheduled for
// execution when the span is no longer referenced. The span is never reused
// to satisfy any other allocations.
//
// TrackPointers panics if the span overlaps with any existing memory span
// known to the Go runtime, heap or otherwise.
func TrackPointers(ptr unsafe.Pointer, size int, finalize func())
```
To be used as:
```go
alloc, _ := unix.MmapPtr(mapFD, 0, nil, size, unix.PROT_READ|unix.PROT_WRITE, unix.MAP_SHARED)
runtime.TrackPointers(alloc, size, func() {
	_ = unix.MunmapPtr(alloc, size)
})
```
Or, alternatively, a variant without the finalize argument that allows setting a finalizer explicitly:
```go
alloc, _ := unix.MmapPtr(mapFD, 0, nil, size, unix.PROT_READ|unix.PROT_WRITE, unix.MAP_SHARED)
runtime.TrackPointers(alloc, size)
runtime.SetFinalizer(alloc, func(p unsafe.Pointer) {
_ = unix.MunmapPtr(p, size)
})
```
Or, if that's equally undesirable, an internal symbol we can `//go:linkname` and make `unix.Mmap` use it transparently.
---
### Background
I'm working on a new feature in [ebpf-go](https://github.com/cilium/ebpf). A while ago, the Linux kernel gained the ability to map the contents of a bpf map into process memory using mmap(), essentially treating bpf map contents as a file. Historically, all map accesses required preparing buffers for the key and value to pass to a bpf() syscall, a costly operation for busy maps. The mmapable map change made it so map accesses can be done by simply reading or writing to a memory location in a user space process, speeding things up by an order of magnitude or more.
Naturally, we'd like to enjoy the benefits of mmapable maps over in Go land as well, but this poses some unique challenges around memory management, specifically for managing the lifecycle of the underlying memory mappings. A common use case for mmapable maps is interacting with global BPF C variables. These are laid out in the typical data sections like `.bss`, `.data` and `.rodata` and are exposed to user space as plain BPF array maps. My goal is to be able to represent a global C variable like
```c
volatile __u16 global_u16;
```
as a canonical Go variable, albeit a pointer. For example:
```go
var GlobalUint16 *uint16
```
This becomes even more interesting if the global variable is only accessed atomically (using `__sync_*` primitives) on the C side, allowing the shared memory to be reinterpreted as a Go atomic type like `atomic.Uint16`, automatically giving the caller access to all operations implemented on those types.
Here's a playground link sketching the overall idea: https://go.dev/play/p/NyoPxKZbK5R. (Note: run this locally, playground runners lack CAP_ADMIN and/or CAP_BPF.)
Since the runtime doesn't track references to this mmap()ed region, we need to bind the lifecycle of the memory mapping to some Go object (in my ebpf-go proposal, this is modeled as an `ebpf.Memory` struct), but care needs to be taken not to lose the reference to this object if we allow the caller to take pointers to the underlying memory. The risk of a use-after-free is high.
---
This is somewhat the inverse of Go arenas, yet tangentially-related. I was sad to find out the arenas proposal is on hold indefinitely, since it would've opened the door for some more manual memory management in Go. Aside from accessing bpf maps, I can imagine this mechanism being useful for databases or zero-copy file parsers, as it would enable passing Go pointers to structs that reside in file-backed memory, without worrying about use-after-free.
I originally got this idea from https://pkg.go.dev/github.com/Jille/gcmmap, a package that mmaps over the Go heap using MAP_FIXED. It uses `runtime.mallocgc()` but allocating a byte slice works just as well. I experimented with this approach for a few weeks and it happens to work beautifully, but it makes several hard assumptions:
- there's no moving garbage collector (although we have `runtime.heapObjectsCanMove` nowadays)
- the heap is always mapped using `PROT_READ|PROT_WRITE` and `MAP_ANON|MAP_PRIVATE`
- there are no other protections like mseal()
Not to mention the risk of accidentally clearing a part of, or leaving a hole in the middle of the heap. Our package powers many mission-critical systems, and this feature would be enabled by default, which means we need to be careful. ebpf-go already made itself into the `go:linkname` hall of shame, so I'll try not to exacerbate that issue further. :slightly_smiling_face:
Please let me know what you think. Thank you! | Proposal | low | Major |
2,638,397,610 | vscode | Inline chat code blocks extend more to right than chat input and also overlap the scroll bar | Inline chat with response streaming in shows the code block overlapping with scroll bar:

This is also a problem in terminal chat, but we wouldn't want to fix that until the editor inline chat is perfect.

| bug,ux,inline-chat | low | Minor |
2,638,424,830 | PowerToys | Powertoys Run can't popup on remote desktop | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
While TeamViewer is connected, press the shortcut key to launch PowerToys Run. Even when keyboard shortcuts are forwarded to the TeamViewer destination, keys like the Windows key do pass through to the remote machine.
But PowerToys Run always opens on the source machine, not the remote one (it is installed on both).
I am using Shift+Backspace as the shortcut, if it matters.
### ✔️ Expected Behavior
I want PowerToys Run to open on the remote computer like other keyboard shortcuts; instead, it only opens on the host.
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage,Needs-Team-Response,Area-App Compat | low | Minor |
2,638,438,585 | terminal | Terminal crashing in infinite loop when profile set to run as admin for non-admin users | ### Windows Terminal version
1.21.2911.0
### Windows build number
10.0.22631.4317
### Other Software
_No response_
### Steps to reproduce
Add account to administrator group
set default profile to run as Administrator
remove account from administrator group
start terminal
Terminal will not start, as the default profile can't be run in admin mode. Terminal will crash and then keep going in a loop forever: open, crash, start again.
In our company, people got accidentally removed from admin groups and Terminal became unusable on the majority of computers.
### Expected Behavior
- in case of failure, Terminal should not keep trying to start in an infinite loop, as it will never succeed
- as a fallback, it should start in non-admin mode and notify the user about it, or just show a notification that the profile can't be started in admin mode and that the user should first modify the setting manually to remove admin mode
### Actual Behavior
Terminal will not start (it will terminate during startup, ironically :)) and the system will keep trying to start it in an infinite loop | Issue-Bug,Product-Terminal,Area-Windowing | low | Critical |
2,638,529,227 | ui | [bug]: Sidebar has the wrong background color in mobile dark mode (specificity issue) | ### Describe the bug
The sidebar in mobile dark mode uses the default sheet background color rather than the sidebar's background color because of a specificity issue:

This is the line causing the issue:
```tsx
if (isMobile) {
return (
<Sheet open={openMobile} onOpenChange={setOpenMobile} {...props}>
<SheetContent
data-sidebar="sidebar"
data-mobile="true"
className="w-[--sidebar-width] bg-sidebar p-0 text-sidebar-foreground [&>button]:hidden" // <--- here
style={
{
"--sidebar-width": SIDEBAR_WIDTH_MOBILE,
} as React.CSSProperties
}
side={side}
>
<div className="flex h-full w-full flex-col">{children}</div>
</SheetContent>
</Sheet>
)
}
```
and the fix is here:
```tsx
className="w-[--sidebar-width] bg-sidebar p-0 text-sidebar-foreground dark:bg-sidebar [&>button]:hidden"
```
I intend to open a PR for this issue.
Link to the PR: #5753
### Affected component/components
Sidebar
### How to reproduce
1. Look at the sidebar in desktop mode; `bg-sidebar` is applied
2. Look at the sidebar in mobile mode; `bg-sidebar` is overridden by the default background color in dark mode
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
No relevant info here
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,638,535,011 | ollama | Cannot get a model prefixed with a namespace from /v1/models/{model} endpoint | ### What is the issue?
My initial goal is to check whether a specific model is available using the Ollama API.
I use the OpenAI library `github.com/sashabaranov/go-openai` to do that.
The problem is that when I try to get models which are not present in the main catalog and have an author prefix, I'm getting a `404` and a non-JSON response, as if the routing were failing.
To eliminate the possibility of a library bug, I tried to do the same with curl.
The first thing I did was request the full list of models to be sure that the model is really in the list:
``` shell
curl http://localhost:11434/v1/models
```
which resulted in the following response:
```json
{"object":"list","data":[{"id":"T-lite-instruct-0.1.Q4_K_M.gguf:latest","object":"model","created":1730118522,"owned_by":"library"},{"id":"x/llama3.2-vision:11b-instruct-q8_0","object":"model","created":1729612391,"owned_by":"x"},{"id":"mannix/llama3.1-8b-lexi:q8_0","object":"model","created":1729272492,"owned_by":"mannix"},{"id":"llama3.2:3b-instruct-q4_K_M","object":"model","created":1727650425,"owned_by":"library"},{"id":"reader-lm:1.5b-q6_K","object":"model","created":1727092629,"owned_by":"library"},{"id":"qwen2:7b-instruct-q4_K_M","object":"model","created":1727092627,"owned_by":"library"},{"id":"phi3:14b-medium-4k-instruct-q4_K_M","object":"model","created":1727092620,"owned_by":"library"},{"id":"nuextract:3.8b-q4_K_M","object":"model","created":1727092619,"owned_by":"library"},{"id":"hermes3:8b-llama3.1-q6_K","object":"model","created":1727092618,"owned_by":"library"},{"id":"nomic-embed-text:latest","object":"model","created":1727092617,"owned_by":"library"},{"id":"nemotron-mini:4b-instruct-q4_K_M","object":"model","created":1727092615,"owned_by":"library"},{"id":"mxbai-embed-large:latest","object":"model","created":1727092614,"owned_by":"library"},{"id":"mistral-nemo:12b-instruct-2407-q4_K_M","object":"model","created":1727092613,"owned_by":"library"},{"id":"llama3.1:8b-instruct-q6_K","object":"model","created":1727092609,"owned_by":"library"},{"id":"gemma2:9b-instruct-q4_K_M","object":"model","created":1727092603,"owned_by":"library"},{"id":"gemma2:27b-instruct-q4_K_M","object":"model","created":1727092601,"owned_by":"library"},{"id":"deepseek-coder-v2:16b-lite-instruct-q4_K_M","object":"model","created":1727092600,"owned_by":"library"},{"id":"codellama:13b-instruct-q4_K_M","object":"model","created":1727092599,"owned_by":"library"},{"id":"codegemma:latest","object":"model","created":1727092598,"owned_by":"library"},{"id":"phi3:3.8b-mini-instruct-4k-q4_K_M","object":"model","created":1713898146,"owned_by":"library"}]}
```
So here's the model I wanted to request: `{"id":"mannix/llama3.1-8b-lexi:q8_0","object":"model","created":1729272492,"owned_by":"mannix"}`
```shell
curl http://localhost:11434/v1/models/mannix/llama3.1-8b-lexi:q8_0
```
No luck:
```
404 page not found%
```
The problem is fairly obvious if you know how the back-end works: it really looks like a routing problem.
The obvious solution for me was to encode the value to remove the `/` character from the URL:
```shell
curl http://localhost:11434/v1/models/mannix%2Fllama3.1-8b-lexi%3Aq8_0
```
I've got an error again:
```
404 page not found%
```
To ensure that I was using the correct endpoint, I tried with another model:
```shell
curl http://localhost:11434/v1/models/qwen2:7b-instruct-q4_K_M
```
It worked:
```json
{"id":"qwen2:7b-instruct-q4_K_M","object":"model","created":1727092627,"owned_by":"library"}
```
When I request the non-encoded model ID, the Ollama logs show this:
```
ollama | [GIN] 2024/11/06 - 15:59:11 | 404 | 13.715µs | 172.24.0.1 | GET "/v1/models/mannix/llama3.1-8b-lexi:q8_0"
```
When I request the encoded model ID, the Ollama logs show this:
```
ollama | [GIN] 2024/11/06 - 15:58:01 | 404 | 15.239µs | 172.24.0.1 | GET "/v1/models/mannix/llama3.1-8b-lexi:q8_0"
```
Which is basically the same result, so I guess the failure may be caused by the URL being decoded before the route is matched.
Most likely the same applies to all previous Ollama versions, not only `0.4.0-rc6`.
### OS
Docker
### GPU
AMD
### CPU
AMD
### Ollama version
0.4.0-rc6 | bug,api | low | Critical |
2,638,563,990 | flutter | dispose is called twice when navigating forth and back quickly between two widgets | ### Steps to reproduce
1. Press the 'Go to Second Screen' button.
2. Then tap Back to return to the HomeScreen (Android).
3. Repeat the two actions above at a fast pace.
### Expected results
initState and dispose follow the correct lifecycle when switching quickly between screens.
### Actual results
initState and dispose don't behave as expected when quickly switching between screens
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Navigation Example',
// Declare routes for Navigator.pushNamed
initialRoute: '/',
routes: {
'/': (context) => const HomeScreen(),
'/second': (context) => const SecondScreen(),
},
);
}
}
class HomeScreen extends StatelessWidget {
const HomeScreen({super.key});
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text('Home Screen'),
),
body: PopScope(
canPop: false,
child: Center(
child: ElevatedButton(
onPressed: () {
// Navigate to the second screen using pushNamed.
Navigator.pushNamed(context, '/second');
},
child: const Text('Go to Second Screen'),
),
),
),
);
}
}
class SecondScreen extends StatefulWidget {
const SecondScreen({super.key});
@override
State<SecondScreen> createState() => _SecondScreenState();
}
int _ii = 0;
class _SecondScreenState extends State<SecondScreen> {
@override
void initState() {
super.initState();
_ii = _ii + 1;
print("initState call: $_ii" );
}
@override
void dispose() {
print("dispose call: $_ii" );
super.dispose();
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text('Second Screen'),
),
body: Center(
child: ElevatedButton(
onPressed: () {
// Go back to the first screen.
Navigator.pop(context);
},
child: const Text('Back to Home Screen'),
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>

</details>
### Logs
<details open><summary>Logs</summary>
```console
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 1
W/OnBackInvokedCallback( 5848): OnBackInvokedCallback is not enabled for the application.
W/OnBackInvokedCallback( 5848): Set 'android:enableOnBackInvokedCallback="true"' in the application manifest.
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
D/ProfileInstaller( 5848): Installing profile for com.example.testt
I/flutter ( 5848): dispose call: 1
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 2
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 2
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 3
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 3
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 4
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 4
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 5
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 5
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 6
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 6
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 7
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 7
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 8
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 8
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 9
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 9
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 10
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 10
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 11
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 11
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 12
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 12
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 13
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 13
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 14
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 14
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 15
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 15
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 16
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 16
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 17
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 17
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 18
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 18
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 19
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 19
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 20
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 20
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 21
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 21
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 22
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 22
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/flutter ( 5848): initState call: 23
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 23
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 24
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 24
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/flutter ( 5848): initState call: 25
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 25
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 26
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 26
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 27
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 27
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 28
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 28
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 29
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 29
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 30
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 31
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/flutter ( 5848): dispose call: 31
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 0
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme pointer 1
I/flutter ( 5848): initState call: 32
I/flutter ( 5848): dispose call: 32
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 0
D/Activity( 5848): onKeyDown(KEYCODE_BACK)
I/ViewRootImpl@195aef6[MainActivity]( 5848): ViewPostIme key 1
D/Activity( 5848): onKeyUp(KEYCODE_BACK) isTracking()=true isCanceled()=false hasCallback=false
I/flutter ( 5848): dispose call: 32
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[√] Flutter (Channel stable, 3.24.4, on Microsoft Windows [Version 10.0.22631.4391], locale en-US)
• Flutter version 3.24.4 on channel stable at C:\flutter
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 603104015d (13 days ago), 2024-10-24 08:01:25 -0700
• Engine revision db49896cf2
• Dart version 3.5.4
• DevTools version 2.37.3
[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 35.0.0)
• Android SDK at C:\Users\HUY\AppData\Local\Android\Sdk
• Platform android-35, build-tools 35.0.0
• ANDROID_HOME = C:\Users\HUY\AppData\Local\Android\Sdk
• Java binary at: C:\Program Files\Android\Android Studi\jbr\bin\java
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
• All Android licenses accepted.
[√] Chrome - develop for the web
• Chrome at C:\Program Files (x86)\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio Community 2022 17.10.4)
• Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Community
• Visual Studio Community 2022 version 17.10.35027.167
• Windows 10 SDK version 10.0.22000.0
[√] Android Studio (version 2024.1)
• Android Studio at C:\Program Files\Android\Android Studi
• Flutter plugin can be installed from:
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
[√] VS Code (version 1.95.1)
• VS Code at C:\Users\HUY\AppData\Local\Programs\Microsoft VS Code
• Flutter extension version 3.100.0
[√] Connected device (4 available)
• SM A325F (mobile) • RF8R31HCL7K • android-arm64 • Android 13 (API 33)
• Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4391]
• Chrome (web) • chrome • web-javascript • Google Chrome 130.0.6723.92
• Edge (web) • edge • web-javascript • Microsoft Edge 130.0.2849.68
[√] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| framework,f: routes,has reproducible steps,P2,team-framework,triaged-framework,found in release: 3.24,found in release: 3.27 | low | Minor |
2,638,620,176 | storybook | [Bug]: Storybook has stopped loading TypeError: (0 , import_common.handlebars) is not a function | ### Describe the bug
Storybook has been working fine for me for weeks, until today. No new changes. Now it refuses to start.
I have tried `storybook dev -p 6006` and `npx storybook@latest init`, I have also cleared my node modules and reinstalled.
```
> storybook dev --no-open --quiet -p 6006
@storybook/core v8.4.2
WARN The following packages are incompatible with Storybook 8.4.2 as they depend on different major versions of Storybook packages:
WARN - @storybook/addon-essentials@8.2.9 (8.4.2 available!)
WARN Repo: https://github.com/storybookjs/storybook/tree/next/code/addons/essentials
WARN - @storybook/addon-interactions@8.2.9 (8.4.2 available!)
WARN Repo: https://github.com/storybookjs/storybook/tree/next/code/addons/interactions
WARN - @storybook/nextjs@8.2.9 (8.4.2 available!)
WARN Repo: https://github.com/storybookjs/storybook/tree/next/code/frameworks/nextjs
WARN - @storybook/react@8.2.9 (8.4.2 available!)
WARN Repo: https://github.com/storybookjs/storybook/tree/next/code/renderers/react
WARN - @storybook/test@8.2.9 (8.4.2 available!)
WARN Repo: https://github.com/storybookjs/storybook/tree/next/code/lib/test
WARN Please consider updating your packages or contacting the maintainers for compatibility details.
WARN For more on Storybook 8 compatibility, see the linked GitHub issue:
WARN https://github.com/storybookjs/storybook/issues/26031
info Found existing addon "@storybook/addon-controls", skipping.
info Found existing addon "@storybook/addon-viewport", skipping.
info Found existing addon "@storybook/addon-controls", skipping.
info Found existing addon "@storybook/addon-viewport", skipping.
info => Serving static files from ././public at /
=> Failed to build the preview
TypeError: (0 , import_common.handlebars) is not a function
at getVirtualModules (./node_modules/@storybook/builder-webpack5/dist/presets/preview-preset.js:1:3007)
at async iframe_webpack_config_default (./node_modules/@storybook/builder-webpack5/dist/presets/preview-preset.js:6:222)
at async starter (./node_modules/@storybook/builder-webpack5/dist/index.js:1:6115)
at async Module.start (./node_modules/@storybook/builder-webpack5/dist/index.js:1:9902)
at async storybookDevServer (./node_modules/@storybook/core/dist/core-server/index.cjs:36000:11)
at async buildOrThrow (./node_modules/@storybook/core/dist/core-server/index.cjs:35017:12)
at async buildDevStandalone (./node_modules/@storybook/core/dist/core-server/index.cjs:37190:78)
at async withTelemetry (./node_modules/@storybook/core/dist/core-server/index.cjs:35757:12)
at async dev (./node_modules/@storybook/core/dist/cli/bin/index.cjs:2591:3)
at async s.<anonymous> (./node_modules/@storybook/core/dist/cli/bin/index.cjs:2643:74)
WARN Broken build, fix the error above.
WARN You may need to refresh the browser.
```
### Reproduction link
None
### Reproduction steps
_No response_
### System
System:
OS: macOS 14.7.1
CPU: (10) arm64 Apple M1 Max
Shell: 5.9 - /bin/zsh
Binaries:
Node: 22.9.0 - ~/.nvm/versions/node/v22.9.0/bin/node
Yarn: 4.4.0 - /opt/homebrew/bin/yarn <----- active
npm: 10.8.3 - ~/.nvm/versions/node/v22.9.0/bin/npm
Browsers:
Chrome: 130.0.6723.92
Safari: 17.6
npmPackages:
@storybook/addon-essentials: ^8.2.9 => 8.2.9
@storybook/addon-interactions: ^8.2.9 => 8.2.9
@storybook/addon-links: ^8.2.9 => 8.2.9
@storybook/addon-onboarding: ^8.2.9 => 8.2.9
@storybook/blocks: ^8.2.9 => 8.2.9
@storybook/nextjs: ^8.2.9 => 8.2.9
@storybook/react: ^8.2.9 => 8.2.9
@storybook/test: ^8.2.9 => 8.2.9
@storybook/types: ^8.2.9 => 8.2.9
eslint-plugin-storybook: ^0.8.0 => 0.8.0
msw-storybook-addon: ^2.0.4 => 2.0.4
storybook: ^8.2.9 => 8.4.2
### Additional context
_No response_ | bug,dependencies,has workaround | low | Critical |
2,638,664,057 | kubernetes | openapi verify breaks sometimes when release tags are added, we should prevent this | ### Which jobs are failing?
https://prow.k8s.io/?job=pull-kubernetes-verify
### Which tests are failing?
verify: openapi-spec is broken
### Since when has it been failing?
Since at least 6:55 AM Pacific, November 6th. You can see the failed batch runs.
### Testgrid link
_No response_
### Reason for failure (if possible)
Diff in the discovery api json data.
### Anything else we need to know?
https://github.com/kubernetes/kubernetes/pull/128615
### Relevant SIG(s)
/sig api-machinery release | sig/api-machinery,kind/failing-test,sig/release,triage/accepted | low | Critical |
2,638,676,798 | vscode | In vim mode pressing the `o` keybinding to insert a new line below the active line from within a line comment inserts line comment on the new line | This issue stems from the comment https://github.com/microsoft/vscode/issues/233186#issuecomment-2460218184
> I think a comment should also not be inserted when the new line is created using the command editor.action.insertLineAfter -- eg vim has a keybinding to create a newline and currently it creates a newline with // at the beginning of the line regardless where my cursor is (and this's an important keybinding in vim)
cc @ulugbekna | polish,editor-autoindent,under-discussion | low | Minor |
2,638,708,409 | rust | Incorrect warning about pointer to non_exhaustive ffi struct not being ffi safe | Rust warns that pointers to structs tagged with `non_exhaustive` aren't FFI safe in downstream crates. We ran into this in the `sdl3-sys` crate, where I've marked some FFI types, e.g. `SDL_Surface`, as non_exhaustive because the internal definition of the type used by the SDL library is larger than the public definition, and the non-public part isn't stable. Marking it with `non_exhaustive` prevents code from manually constructing their own value that would cause immediate UB if passed to SDL. The non_exhaustive structs are only used through pointers.
### Reproduction
Crate `a`, `lib.rs`:
```rust
#[repr(C)]
#[non_exhaustive]
pub struct Struct {
pub field: u8
}
extern "C" {
pub fn create_struct() -> *mut Struct;
pub fn destroy_struct(s: *mut Struct);
}
```
Crate `b`, `lib.rs`:
```rust
use a::Struct;
extern "C" {
pub fn use_struct(s: *mut Struct);
}
```
### I expected to see this happen:
Either neither crate should warn about this, or both should. `non_exhaustive` is as useful for FFI types as it is for native Rust types, so I'd prefer that neither crate warned.
If the warning is intentional, I'd like a way to disable it on the type itself (assuming it's safe, but I don't see why it wouldn't be)
### Instead, this happened:
Crate b warns that `Struct` isn't FFI safe. There's no warning for crate a.
```
warning: `extern` block uses type `Struct`, which is not FFI-safe
--> b/src/lib.rs:4:26
|
4 | pub fn use_struct(s: *mut Struct);
| ^^^^^^^^^^^ not FFI-safe
|
= note: this struct is non-exhaustive
= note: `#[warn(improper_ctypes)]` on by default
warning: `b` (lib) generated 1 warning
```
### Meta
`rustc --version --verbose`:
```
% rustc --version --verbose
rustc 1.82.0 (f6e511eec 2024-10-15)
binary: rustc
commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14
commit-date: 2024-10-15
host: aarch64-apple-darwin
release: 1.82.0
LLVM version: 19.1.1
% rustc +nightly --version --verbose
rustc 1.84.0-nightly (798fb83f7 2024-10-16)
binary: rustc
commit-hash: 798fb83f7d24e31b16acca113496f39ff168c143
commit-date: 2024-10-16
host: aarch64-apple-darwin
release: 1.84.0-nightly
LLVM version: 19.1.1
```
| A-lints,A-FFI,T-lang,T-compiler,C-bug,L-improper_ctypes,L-false-positive | low | Minor |
2,638,731,596 | PowerToys | PowerToys Run often can't resolve Evernote | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
No
### Area(s) with issue?
PowerToys Run
### Steps to reproduce
I have Evernote installed via the Microsoft Store. PowerToys Run often can't resolve ".Evernote", i.e. it matches nothing and hitting Enter is a no-op. The link pinned to the Start menu works fine. It appears as if the issue is that every time there's an update to Evernote (which are plentiful and never auto-installed), PowerToys Run loses track of the app. Once I use Microsoft Store to 'Update', then PT Run recognizes ".Ever" just fine.
Most recently, this repro'd when I'd been running Evernote 10.113.5 previously, but they posted 10.114.2 to MS Store.
I hope you can find an easy fix.
(aside: Love your work. I've retired my Executor launcher-app to use PT Run.)
### ✔️ Expected Behavior
PowerToys Run should be able to find and execute Evernote even if that app is 1+ versions behind what's on MS Store.
### ❌ Actual Behavior
No Evernote icon and app-title appears when I type ".Evernote" and hitting Enter does not launch the app. (Until, it seems, an Evernote update gets installed.)
### Other Software
Evernote 10.113.5 --> 10.114.2 | Issue-Bug,Needs-Triage | low | Minor |
2,638,742,043 | rust | Tracking issue for release notes of #122408: Use futex-based synchronization on Apple platforms |
This issue tracks the release notes text for #122408.
### Steps
- [ ] Proposed text is drafted by PR author (or team) making the noteworthy change.
- [ ] Issue is nominated for release team review of clarity for wider audience.
- [ ] Release team includes text in release notes/blog posts.
### Release notes text
The responsible team for the underlying change should edit this section to replace the automatically generated link with a succinct description of what changed, drawing upon text proposed by the author (either in discussion or through direct editing).
````markdown
# Category (e.g. Language, Compiler, Libraries, Compatibility notes, ...)
- [Use futex-based synchronization on Apple platforms](https://github.com/rust-lang/rust/pull/122408)
````
> [!TIP]
> Use the [previous releases](https://doc.rust-lang.org/nightly/releases.html) categories to help choose which one(s) to use.
> The category will be de-duplicated with all the other ones by the release team.
>
> *More than one section can be included if needed.*
### Release blog section
If the change is notable enough for inclusion in the blog post, the responsible team should add content to this section.
*Otherwise leave it empty.*
````markdown
````
cc @joboet, @m-ou-se -- origin issue/PR authors and assignees for starting to draft text
| relnotes,T-libs,O-unix,A-atomic,O-apple,relnotes-tracking-issue | low | Minor |
2,638,790,251 | vscode | Terminal suggest: Contextually aware suggestions after git status | A common thing for me to do is clean up git state after a git status. Here's an example:

I'll often run `rm <file1> <file2>` or `git checkout <file1> <file2>` etc. after this. Could we have smarts when files are being suggested to auto fill recently seen git status files? I often use the mouse+clipboard to do this currently. | feature-request,terminal-suggest | low | Minor |
2,638,808,413 | deno | When a monorepo uses import maps to reference other modules in the same monorepo, references from other projects can fail | deno 2.0.2 (stable, release, x86_64-apple-darwin)
v8 12.9.202.13-rusty
typescript 5.6.2
This may be a question more than a bug. However I am not finding a solution by reading the [workspace](https://docs.deno.com/runtime/fundamentals/workspaces/) or [import maps](https://docs.deno.com/runtime/fundamentals/modules/#import-maps) documents, and my interpretation of these documents is that this should work.
The issue is
- if I create a module (repo2)
- and it references a module in repo1 `import * as bmod from "@jpravetz/bmod";`
- and repo1 is a monorepo with workspaces (amod and bmod)
- and the reference is via a relative path `"@jpravetz/bmod": "../repo1/bmod/mod.ts"`
- and the referenced module (bmod) references a different module (amod) in repo1 `import { add } from '@scope/amod';`
- and the reference is in the repo1/bmod import map `"@scope/amod": "../amod/mod.ts"`
- then when I run `deno lint` on repo2 I get a relative import path error
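Pulling the bullets above together, the two import maps involved look roughly like this (a sketch assembled from the paths quoted above; the `deno.json` file names and exact layout are assumptions — see the linked repos for the real files):
```jsonc
// repo2/deno.json — repo2 reaches into repo1 via a relative path
{
  "imports": {
    "@jpravetz/bmod": "../repo1/bmod/mod.ts"
  }
}
// repo1/bmod/deno.json — bmod resolves its sibling through its own map:
// { "imports": { "@scope/amod": "../amod/mod.ts" } }
```
The error suggests that when `deno lint` runs from repo2, only repo2's import map is in scope, so the `@scope/amod` specifier inside `repo1/bmod/mod.ts` cannot be resolved.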
To reproduce
```sh
git clone https://github.com/jpravetz/repo1.git
git clone https://github.com/jpravetz/repo2.git
cd repo2
deno lint
```
Error message
```sh
error: Relative import path "@scope/amod" not prefixed with / or ./ or ../ and not in import map from "file:///Users/jpravetz/dev/tmp/repo1/bmod/mod.ts"
at file:///Users/jpravetz/dev/tmp/repo1/bmod/mod.ts:1:21
```
Lint and testing are working within repo1.
| question,workspaces | low | Critical |
2,638,853,298 | go | x/tools/gopls: don't link command-line-arguments packages in hover | This is a follow-up issue to fix some remaining bugs in hovering over command-line-arguments packages.
1. We should url-escape package paths.
2. We should not offer links to command-line-arguments packages, as they are not real.
So, to summarize, there are two bugs here:
1. gopls is not properly url-escaping package paths.
2. ~gopls is not providing accurate hover information for builtins, when viewing the builtin file.~
(1) should be fixed by a relatively straightforward application of url escaping. Assigning to our new team member @h9jiang as this is a good starter exercise for bug fixing and writing a test.
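To illustrate what that escaping involves (a language-agnostic sketch — gopls itself is Go, and the package paths below are hypothetical):

```python
from urllib.parse import quote

# Hypothetical package paths; some contain characters (spaces, '#')
# that must be percent-encoded before being spliced into a link URL.
paths = ["example.com/mod/pkg", "command-line-arguments", "a b/c#d"]

for p in paths:
    # quote() percent-encodes unsafe characters; '/' is kept (the default
    # safe set) so the path structure stays readable in the link.
    print(quote(p, safe="/"))
```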
On the other hand, (2) might require a more sweeping refactoring of how we handle the builtin file and unsafe packages, and is just about the polar opposite of a good starter bug: requires familiarity with nitty-gritty details of the codebase. I will do this part.
_Originally posted by @findleyr in https://github.com/golang/go/issues/68026#issuecomment-2182899550_
| gopls,Tools | low | Critical |
2,638,860,728 | pytorch | [ONNX] Run report_exportability when report=True | Leverage https://github.com/pytorch/pytorch/blob/99deedff57feca48af8a364e49325c99acc0a541/torch/_export/tools.py#L61C5-L61C25 when there is an export issue when generating the ONNX export report. | module: onnx,triaged,onnx-triaged | low | Minor |
2,638,864,548 | flutter | [Cocoon] FR: Use class-level logging and structured logging | While debugging issues, all lines are logged unstructured. You can search for specific logs using regex, but you cannot search for "all error logs for class Fu".
Cocoon is using the regular dart logger, so this would be:
1. Every class that logs creates a static `log = Logger('$ClassName');`
2. Add a listener to root level logging that logs the name and the level (if > info).
For 2: we currently call `useLoggingPackageAdaptor()` for appengine, which adds the log name, but the level is passed through to the service. So if we wanted `[level]` in the name, we might have to make detached loggers that modify their message before passing it along to the root (appengine-watching) logger.
```dart
message = '${record.loggerName}: $message';
```
| team-infra,P2,triaged-infra | low | Critical |
2,638,891,106 | rust | `thread_local!` initialization code panics on some `aarch64-apple-darwin` envs, works on others | I tried this code:
https://github.com/near/near-sdk-rs/blob/master/near-sdk/src/environment/mock/mod.rs#L11-L16
```rust
thread_local! {
static BLOCKCHAIN_INTERFACE: RefCell<MockedBlockchain>
= RefCell::new(MockedBlockchain::default());
}
```
I expected to see this happen:
`thread_local!` initialization code for `RefCell::new(MockedBlockchain::default())` works
Instead, this happened:
on **unclarified** `aarch64-apple-darwin` environment(s), `thread_local!` panics at https://doc.rust-lang.org/src/core/ptr/mod.rs.html#1277 .
The issue doesn't reproduce on other real `aarch64-apple-darwin` Macs, nor on the `macos-14-arm64` GitHub Actions VM.
Original issue: https://github.com/near/near-sdk-rs/issues/1252
### Meta
`rustc --version --verbose`:
```
stable: 1.80, 1.81, 1.82 panics
beta 1.83 panics
nightly 1.84 panics
```
backtrace from original issue:
https://github.com/near/near-sdk-rs/issues/1252#issuecomment-2454692564
| O-macos,A-thread-locals,C-bug,T-libs,O-AArch64,E-needs-investigation | medium | Major |
2,638,904,859 | pytorch | Inductor vs. Liger Performance Track | ### 🐛 Describe the bug
Recently, we did some benchmarking on custom operators in Liger kernels compared with Inductor-compiled kernels. Inductor is worse in some cases. Here is the list of operators and configs we need to improve.
## List
### Format
For each operator, the data format in the following task list is:
- [ ] **Operator Name** (20th percentile speedup, 50th percentile (median), 80th percentile speedup)
### Speedup Calculation
The speedup numbers are computed as follows:
$$
\text{inductor\_vs\_liger} = \frac{\text{speedup\_inductor}}{\text{speedup\_liger}} = \frac{\frac{\text{latency\_eager}}{\text{latency\_inductor}}}{\frac{\text{latency\_eager}}{\text{latency\_liger}}} = \frac{\text{latency\_liger}}{\text{latency\_inductor}}
$$
Since each operator has multiple inputs, there are multiple speedup numbers. We use the 20th, 50th, and 80th percentiles to better represent the results. If the number is >1, it means Inductor's results are faster. The GPU peak memory usage ratios are computed using the same process.
We need to improve Inductor's performance on the cases where the ratio is less than 1.
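As a concrete sketch of the calculation above, the per-input ratios and their 20th/50th/80th percentiles can be computed like this (the latency numbers are made-up placeholders, not results from this benchmark):

```python
# Hypothetical per-input latencies in ms (placeholders, not benchmark data).
latency_liger = [1.2, 0.8, 2.0, 1.5]
latency_inductor = [1.0, 1.0, 1.6, 1.5]

# inductor_vs_liger = latency_liger / latency_inductor for each input;
# a ratio > 1 means the Inductor kernel is faster on that input.
ratios = sorted(l / i for l, i in zip(latency_liger, latency_inductor))

def percentile(xs, p):
    """Nearest-rank percentile over an already-sorted list."""
    k = round(p / 100 * (len(xs) - 1))
    return xs[max(0, min(len(xs) - 1, k))]

for p in (20, 50, 80):
    print(f"p{p}: {percentile(ratios, p):.2f}")
```

Any standard percentile definition works here; the exact interpolation scheme only changes the reported numbers slightly.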
fwd_bwd fp32 latency:
- [x] cross_entropy: 1.37, 1.50, 1.60
- [ ] embedding: 0.95, 1.43, 1.86
- [x] fused_linear_cross_entropy: 1.11, 1.14, 1.24
- [x] fused_linear_jsd: 1.42, 1.68, 2.32
- [x] geglu: 1.00, 1.00, 1.00
- [x] jsd: 1.18, 3.38, 3.67
- [x] kl_div: 1.18, 1.21, 1.29
- [ ] rms_norm: 0.90, 0.99, 1.29
- [ ] rope: 0.41, 0.48, 0.58 #141265
- [x] swiglu: 0.99, 1.00, 1.00
fwd_bwd fp32 peak gpu memory usage:
- [ ] cross_entropy: 0.75, 0.75, 0.75
- [ ] embedding: 0.77, 0.84, 0.93
- [ ] fused_linear_cross_entropy: 0.49, 0.64, 0.73
- [ ] fused_linear_jsd: 0.78, 0.79, 0.81
- [ ] geglu: 0.53, 0.72, 0.91
- [x] jsd: 1.09, 1.09, 1.09
- [x] kl_div: 1.00, 1.00, 1.00
- [x] rms_norm: 1.01, 1.01, 1.01
- [ ] rope: 0.53, 0.54, 0.59
- [ ] swiglu: 0.55, 0.74, 0.91
fwd fp32 latency:
- [x] cross_entropy: 1.71, 2.28, 2.28
- [ ] embedding, (0.83, 0.95, 1.00) https://github.com/pytorch/pytorch/issues/142250
- [x] fused_linear_cross_entropy: 3.07, 3.25, 3.58
- [x] fused_linear_jsd: 2.90, 3.63, 5.18
- [x] geglu: 0.99, 1.00, 1.02
- [x] jsd: 2.96, 15.97, 17.37
- [ ] kl_div: 0.90, 0.92, 0.97
- [ ] rms_norm: 0.91, 0.96, 0.98 #141916
- [ ] rope: 0.80, 0.90, 0.95
- [x] swiglu: 1.00, 1.00, 1.00
fwd fp32 peak gpu memory usage:
- [x] cross_entropy: 1.00, 1.00, 1.00
- [x] embedding: 1.00, 1.00, 1.00
- [ ] fused_linear_cross_entropy: 0.38, 0.62, 0.91
- [x] fused_linear_jsd: 1.19, 1.40, 1.59
- [ ] geglu: 0.36, 0.68, 1.00
- [x] jsd: 1.67, 1.67, 1.67
- [x] kl_div: 1.00, 1.00, 1.00
- [x] rms_norm: 1.00, 1.00, 1.00
- [ ] rope: 0.76, 0.78, 0.81
- [ ] swiglu: 0.39, 0.70, 1.00
bwd fp32 latency:
- [x] cross_entropy: 1.07, 1.10, 1.18
- [x] embedding: 1.13, 1.88, 3.32
- [ ] fused_linear_cross_entropy: 0.00, 0.01, 0.01
- [ ] fused_linear_jsd: 0.01, 0.02, 0.03
- [x] geglu: 1.00, 1.00, 1.00
- [ ] jsd: 0.71, 0.72, 0.73
- [x] kl_div: 1.25, 1.29, 1.36
- [x] rms_norm: 1.36, 1.44, 1.66
- [ ] rope: 0.36, 0.42, 0.44
- [x] swiglu: 0.99, 0.99, 0.99
bwd fp32 peak gpu memory usage:
- [ ] cross_entropy: 0.75, 0.75, 0.75
- [ ] embedding: 0.76, 0.82, 0.92
- [ ] fused_linear_cross_entropy: 0.51, 0.59, 0.69
- [ ] fused_linear_jsd: 0.64, 0.67, 0.69
- [ ] geglu: 0.86, 0.87, 0.89
- [x] jsd: 1.00, 1.00, 1.00
- [x] kl_div: 1.00, 1.00, 1.00
- [x] rms_norm: 1.01, 1.01, 1.01
- [ ] rope: 0.60, 0.62, 0.66
- [ ] swiglu: 0.87, 0.89, 0.90
fwd_bwd bf16 latency:
- [ ] cross_entropy: 0.94, 0.99, 1.07
- [x] fused_linear_cross_entropy: 1.56, 1.95, 2.78
- [x] fused_linear_jsd: 6.08, 8.14, 11.68
- [x] geglu: 1.00, 1.01, 1.01
- [x] jsd: 1.29, 3.82, 3.95
- [x] kl_div: 1.01, 1.04, 1.12
- [x] rms_norm: 0.98, 1.12, 1.87
- [ ] rope: 0.31, 0.44, 0.53
- [x] swiglu: 0.99, 1.00, 1.00
fwd_bwd bf16 peak gpu memory usage:
- [ ] cross_entropy: 0.89, 0.89, 0.89
- [ ] fused_linear_cross_entropy: 0.42, 0.54, 0.71
- [ ] fused_linear_jsd: 0.63, 0.84, 0.85
- [ ] geglu: 0.53, 0.72, 0.91
- [x] jsd: 1.08, 1.08, 1.08
- [x] kl_div: 1.00, 1.00, 1.00
- [ ] rms_norm: 0.76, 0.76, 0.76
- [ ] rope: 0.54, 0.56, 0.63
- [ ] swiglu: 0.55, 0.74, 0.91
fwd bf16 latency:
- [x] cross_entropy: 1.88, 2.43, 2.60
- [ ] embedding: 0.21, 0.60, 0.98
- [x] fused_linear_cross_entropy: 4.66, 5.84, 8.48
- [x] fused_linear_jsd: 11.18, 17.52, 28.28
- [x] geglu: 1.01, 1.02, 1.02
- [x] jsd: 3.77, 28.18, 28.68
- [ ] kl_div: 0.48, 0.53, 0.65
- [ ] rms_norm: 0.71, 0.75, 0.76
- [ ] rope: 0.60, 0.87, 0.92
- [x] swiglu: 1.00, 1.00, 1.00
fwd bf16 peak gpu memory usage:
- [x] cross_entropy: 1.00, 1.00, 1.00
- [x] embedding: 1.00, 1.00, 1.00
- [ ] fused_linear_cross_entropy: 0.40, 0.64, 0.92
- [ ] fused_linear_jsd: 0.27, 1.00, 1.74
- [ ] geglu: 0.36, 0.68, 1.00
- [x] jsd: 1.37, 1.37, 1.37
- [x] kl_div: 1.00, 1.00, 1.00
- [ ] rms_norm: 0.67, 0.67, 0.67
- [ ] rope: 0.77, 0.80, 0.85
- [ ] swiglu: 0.39, 0.70, 1.00
bwd bf16 latency:
- [ ] cross_entropy: 0.82, 0.84, 0.86
- [ ] fused_linear_cross_entropy: 0.02, 0.04, 0.07
- [ ] fused_linear_jsd: 0.05, 0.08, 0.16
- [x] geglu: 0.99, 1.00, 1.00
- [ ] jsd: 0.87, 0.87, 0.89
- [x] kl_div: 1.12, 1.16, 1.22
- [x] rms_norm: 1.13, 1.39, 1.62
- [ ] rope: 0.32, 0.37, 0.39
- [x] swiglu: 0.99, 0.99, 1.00
bwd bf16 peak gpu memory usage:
- [ ] cross_entropy: 0.89, 0.89, 0.89
- [ ] fused_linear_cross_entropy: 0.51, 0.59, 0.69
- [ ] fused_linear_jsd: 0.68, 0.69, 0.70
- [ ] geglu: 0.86, 0.87, 0.89
- [x] jsd: 1.04, 1.04, 1.04
- [x] kl_div: 1.00, 1.00, 1.00
- [ ] rms_norm: 0.81, 0.81, 0.81
- [ ] rope: 0.61, 0.64, 0.70
- [ ] swiglu: 0.87, 0.89, 0.90
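Each row above reports the 20th, 50th, and 80th percentiles of the per-input ratios. A minimal stdlib sketch of that summary (the sample ratios below are invented for illustration, not taken from the benchmark):

```python
# Sketch of the percentile summary used above; a ratio > 1 means inductor
# is faster on that input. Sample ratios are invented.
import statistics

speedups = [0.41, 0.44, 0.48, 0.52, 0.58, 0.61]  # hypothetical per-input ratios

# statistics.quantiles with n=10 returns the 9 decile cut points;
# indices 1, 4 and 7 are the 20th, 50th and 80th percentiles.
deciles = statistics.quantiles(speedups, n=10, method="inclusive")
p20, p50, p80 = deciles[1], deciles[4], deciles[7]
print(f"{p20:.2f}, {p50:.2f}, {p80:.2f}")  # 0.44, 0.50, 0.58
```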
## Reproduce
Install tritonbench dependencies.
```
git clone https://github.com/pytorch-labs/tritonbench/
cd tritonbench
git submodule update --init --recursive
python install.py
```
Install liger kernels.
```
git clone https://github.com/linkedin/Liger-Kernel.git
cd Liger-Kernel
pip install -e .
# or if using transformers
pip install -e .[transformers]
```
or
```
pip install liger-kernel-nightly
```
Benchmark operators.
```
python run.py --op rope --mode fwd_bwd --precision fp32 --metrics latency,gpu_peak_mem,speedup,mem_footprint
```
Use `--op` to select the operator, `--mode` the benchmark mode, `--precision` the precision, and `--metrics` the metrics to collect. You can also add `ncu_rep` to `--metrics` to save the Nsight Compute profiling report for the operator. By default all inputs are benchmarked, but inductor is not slower on all of them; use `--input-id` to benchmark a single input by index. The following is an example output.
```
% python run.py --op rope --mode fwd_bwd --precision fp32 --metrics latency,gpu_peak_mem,speedup,mem_footprint
0%| | 0/8 [00:00<?, ?it/s]`LlamaRotaryEmbedding` can now be fully parameterized by passing the model config through the `config` argument. All other arguments will be removed in v4.46
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:08<00:00, 1.11s/it]
(H, T) apply_rotary_pos_emb-latency apply_rotary_pos_emb-gpu_peak_mem liger_rotary_pos_emb-latency liger_rotary_pos_emb-speedup liger_rotary_pos_emb-gpu_peak_mem liger_rotary_pos_emb-mem_footprint inductor_rotary_pos_emb_full_op-latency inductor_rotary_pos_emb_full_op-speedup inductor_rotary_pos_emb_full_op-gpu_peak_mem inductor_rotary_pos_emb_full_op-mem_footprint
------------- ------------------------------ ----------------------------------- ------------------------------ ------------------------------ ----------------------------------- ------------------------------------ ----------------------------------------- ----------------------------------------- ---------------------------------------------- -----------------------------------------------
(8192, 1024) 0.554368 0.312484 0.107328 5.16518 0.205537 1.52033 0.151904 3.64946 0.364921 0.856306
(8192, 2048) 1.09846 0.591413 0.163776 6.70711 0.37752 1.56657 0.297376 3.69386 0.696287 0.849381
(8192, 4096) 2.14995 1.14927 0.320192 6.71457 0.721486 1.59292 0.529696 4.05884 1.35902 0.845662
(8192, 8192) 4.23626 2.26499 0.630336 6.72063 1.40942 1.60704 1.13363 3.73689 2.68449 0.843733
(8192, 16384) 8.51472 4.49642 1.25142 6.80403 2.78528 1.61435 2.51034 3.39186 5.33542 0.84275
(512, 2048) 0.29776 0.068436 0.125696 2.36889 0.055083 1.24242 0.226336 1.31557 0.075006 0.912407
(2048, 2048) 0.292704 0.173031 0.073984 3.95631 0.11957 1.44711 0.188608 1.55192 0.199262 0.86836
(8192, 2048) 1.09587 0.591413 0.163776 6.69129 0.37752 1.56657 0.264064 4.15002 0.696287 0.849381
```
### Versions
nightly
cc @ezyang @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @aakhundov @eellison @shunting314 @Chillee @xuzhao9 | triaged,oncall: pt2,module: inductor | low | Critical |
2,638,909,286 | pytorch | Setting a `str` value for the `weight` and `pos_weight` arguments of `BCEWithLogitsLoss()` gets an indirect error message | ### 🐛 Describe the bug
Setting the `complex` value `3.+2.j` as the `weight` or `pos_weight` argument of [BCEWithLogitsLoss()](https://pytorch.org/docs/main/generated/torch.nn.BCEWithLogitsLoss.html) produces a direct error message, as shown below:
```python
import torch
from torch import nn
tensor1 = torch.tensor([8., -3., 0., 1., 5., -2.])
tensor2 = torch.tensor([-3., 7., 4., -2., -9., 6.])
# ↓↓↓↓↓↓
bcelogits = nn.BCEWithLogitsLoss(weight=torch.tensor(3.+2.j))
bcelogits(input=tensor1, target=tensor2) # Error
# ↓↓↓↓↓↓
bcelogits = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(3.+2.j))
bcelogits(input=tensor1, target=tensor2) # Error
```
> RuntimeError: result type ComplexFloat can't be cast to the desired output type Float
But setting the `str` value `"Hello"` as the `weight` or `pos_weight` argument of `BCEWithLogitsLoss()` produces an indirect error message, as shown below:
```python
import torch
from torch import nn
tensor1 = torch.tensor([8., -3., 0., 1., 5., -2.])
tensor2 = torch.tensor([-3., 7., 4., -2., -9., 6.])
# ↓↓↓↓↓↓↓
bcelogits = nn.BCEWithLogitsLoss(weight=torch.tensor("Hello"))
bcelogits(input=tensor1, target=tensor2) # Error
# ↓↓↓↓↓↓↓
bcelogits = nn.BCEWithLogitsLoss(pos_weight=torch.tensor("Hello"))
bcelogits(input=tensor1, target=tensor2) # Error
```
> TypeError: new(): invalid data type 'str'
And setting the `bool` value `True` as the `pos_weight` argument of `BCEWithLogitsLoss()` produces an indirect error message, as shown below:
```python
import torch
from torch import nn
tensor1 = torch.tensor([8., -3., 0., 1., 5., -2.])
tensor2 = torch.tensor([-3., 7., 4., -2., -9., 6.])
# ↓↓↓↓
bcelogits = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(True))
bcelogits(input=tensor1, target=tensor2) # Error
```
> RuntimeError: Subtraction, the `-` operator, with a bool tensor is not supported. If you are trying to invert a mask, use the `~` or `logical_not()` operator instead.
So the error messages should be direct, for example:
> RuntimeError: result type str can't be cast to the desired output type Float
> RuntimeError: result type bool can't be cast to the desired output type Float
Or, more simply:
> RuntimeError: weight should be Float
> RuntimeError: pos_weight should be Float
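A minimal sketch of the kind of up-front validation that would produce such direct messages. Everything here is illustrative, not PyTorch's actual internals; `dtype_name` stands in for `torch.Tensor.dtype`:

```python
# Illustrative sketch only: check weight/pos_weight dtypes before computing
# the loss, so a non-float dtype fails with a direct message instead of the
# indirect casting/subtraction errors shown above.
def validate_weight(name, dtype_name):
    floating = {"float16", "bfloat16", "float32", "float64"}
    if dtype_name not in floating:
        raise RuntimeError(f"{name} should be Float, got {dtype_name}")

validate_weight("weight", "float32")       # passes silently
try:
    validate_weight("pos_weight", "bool")  # fails with a direct message
except RuntimeError as e:
    print(e)  # pos_weight should be Float, got bool
```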
### Versions
```python
import torch
torch.__version__ # '2.5.1'
```
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,triaged | low | Critical |
2,638,909,879 | pytorch | Error `torch._inductor.exc.LoweringException` when running AOTI on TorchRec DLM | ### 🐛 Describe the bug
## Installation
Install PyTorch
```
pip install torch -U --index-url https://download.pytorch.org/whl/nightly/cu121
```
Install TorchRec
https://github.com/pytorch/torchrec?tab=readme-ov-file#installation
## Script
Run with `python dlrm_aoti.py`:
```
import logging
import os
from dataclasses import dataclass
from typing import Dict, List, Optional
import fbgemm_gpu.sparse_ops # noqa: F401
import torch
from torchrec.datasets.criteo import DEFAULT_CAT_NAMES, DEFAULT_INT_NAMES
from torchrec.datasets.random import RandomRecDataset
from torchrec.datasets.utils import Batch
from torchrec.distributed.global_settings import set_propogate_device
from torchrec.fx.tracer import Tracer
from torchrec.inference.modules import (
PredictFactory,
PredictModule,
quantize_inference_model,
shard_quant_model,
)
from torchrec.models.dlrm import DLRM
from torchrec.modules.embedding_configs import EmbeddingBagConfig
from torchrec.modules.embedding_modules import EmbeddingBagCollection
from torchrec.sparse.jagged_tensor import KeyedJaggedTensor
import argparse
import sys
from typing import List
logger: logging.Logger = logging.getLogger(__name__)
def register_fake_classes() -> None:
@torch._library.register_fake_class("fbgemm::AtomicCounter")
class FakeAtomicCounter:
def __init__(self, counter_):
self.counter_ = counter_
@classmethod
def __obj_unflatten__(cls, flat_obj):
return cls(**dict(flat_obj))
def increment(self) -> int:
self.counter_ += 1
return self.counter_
def decrement(self) -> int:
self.counter_ -= 1
return self.counter_
def reset(self):
self.counter_ = 0
def get(self) -> int:
return self.counter_
def set(self, val):
self.counter_ = val
@torch._library.register_fake_class("fbgemm::TensorQueue")
class FakeTensorQueue:
def __init__(self, queue, init_tensor):
self.queue = queue
self.init_tensor = init_tensor
@classmethod
def __obj_unflatten__(cls, flattened_ctx):
return cls(**dict(flattened_ctx))
def push(self, x):
self.queue.append(x)
def pop(self):
if len(self.queue) == 0:
return self.init_tensor
return self.queue.pop(0)
def top(self):
if len(self.queue) == 0:
return self.init_tensor
return self.queue[0]
def size(self):
return len(self.queue)
def create_training_batch(args) -> Batch:
return RandomRecDataset(
keys=DEFAULT_CAT_NAMES,
batch_size=args.batch_size,
hash_size=args.num_embedding_features,
ids_per_feature=1,
num_dense=len(DEFAULT_INT_NAMES),
).batch_generator._generate_batch()
def parse_args(argv: List[str]) -> argparse.Namespace:
parser = argparse.ArgumentParser(description="torchrec dlrm model packager")
parser.add_argument(
"--num_embeddings",
type=int,
default=100_000,
help="max_ind_size. The number of embeddings in each embedding table. Defaults"
" to 100_000 if num_embeddings_per_feature is not supplied.",
)
parser.add_argument(
"--num_embeddings_per_feature",
type=str,
default="45833188,36746,17245,7413,20243,3,7114,1441,62,29275261,1572176,345138,"
"10,2209,11267,128,4,974,14,48937457,11316796,40094537,452104,12606,104,35",
help="Comma separated max_ind_size per sparse feature. The number of embeddings"
" in each embedding table. 26 values are expected for the Criteo dataset.",
)
parser.add_argument(
"--sparse_feature_names",
type=str,
default=",".join(DEFAULT_CAT_NAMES),
help="Comma separated names of the sparse features.",
)
parser.add_argument(
"--dense_arch_layer_sizes",
type=str,
default="512,256,64",
help="Comma separated layer sizes for dense arch.",
)
parser.add_argument(
"--over_arch_layer_sizes",
type=str,
default="512,512,256,1",
help="Comma separated layer sizes for over arch.",
)
parser.add_argument(
"--embedding_dim",
type=int,
default=64,
help="Size of each embedding.",
)
parser.add_argument(
"--num_dense_features",
type=int,
default=len(DEFAULT_INT_NAMES),
help="Number of dense features.",
)
parser.add_argument(
"--output_path",
type=str,
help="Output path of model package.",
)
return parser.parse_args(argv)
@dataclass
class DLRMModelConfig:
"""
Model Config for specifying DLRM model parameters.
"""
dense_arch_layer_sizes: List[int]
dense_in_features: int
embedding_dim: int
id_list_features_keys: List[str]
num_embeddings_per_feature: List[int]
num_embeddings: int
over_arch_layer_sizes: List[int]
sample_input: Batch
class DLRMPredictModule(PredictModule):
"""
nn.Module to wrap DLRM model to use for inference.
Args:
embedding_bag_collection (EmbeddingBagCollection): collection of embedding bags
used to define SparseArch.
dense_in_features (int): the dimensionality of the dense input features.
dense_arch_layer_sizes (List[int]): the layer sizes for the DenseArch.
over_arch_layer_sizes (List[int]): the layer sizes for the OverArch. NOTE: The
output dimension of the InteractionArch should not be manually specified
here.
id_list_features_keys (List[str]): the names of the sparse features. Used to
construct a batch for inference.
dense_device: (Optional[torch.device]).
"""
def __init__(
self,
embedding_bag_collection: EmbeddingBagCollection,
dense_in_features: int,
dense_arch_layer_sizes: List[int],
over_arch_layer_sizes: List[int],
id_list_features_keys: List[str],
dense_device: Optional[torch.device] = None,
) -> None:
module = DLRM(
embedding_bag_collection=embedding_bag_collection,
dense_in_features=dense_in_features,
dense_arch_layer_sizes=dense_arch_layer_sizes,
over_arch_layer_sizes=over_arch_layer_sizes,
dense_device=dense_device,
)
super().__init__(module, dense_device)
self.id_list_features_keys: List[str] = id_list_features_keys
def predict_forward(
self, batch: Dict[str, torch.Tensor]
) -> Dict[str, torch.Tensor]:
"""
Args:
batch (Dict[str, torch.Tensor]): currently expects input dense features
to be mapped to the key "float_features" and input sparse features
to be mapped to the key "id_list_features".
Returns:
Dict[str, torch.Tensor]: output of inference.
"""
try:
logits = self.predict_module(
batch["float_features"],
KeyedJaggedTensor(
keys=self.id_list_features_keys,
lengths=batch["id_list_features.lengths"],
values=batch["id_list_features.values"],
),
)
predictions = logits.sigmoid()
except Exception as e:
logger.info(e)
raise e
# Flip predictions tensor to be 1D. TODO: Determine why prediction shape
# can be 2D at times (likely due to input format?)
predictions = predictions.reshape(
[
predictions.size()[0],
]
)
return {
"default": predictions.to(torch.device("cpu"), non_blocking=True).float()
}
class DLRMPredictFactory(PredictFactory):
"""
Factory Class for generating TorchScript DLRM Model for C++ inference.
Args:
model_config (DLRMModelConfig): model config
"""
def __init__(self, model_config: DLRMModelConfig) -> None:
self.model_config = model_config
def create_predict_module(self, world_size: int, device: str) -> torch.nn.Module:
logging.basicConfig(level=logging.INFO)
set_propogate_device(True)
eb_configs = [
EmbeddingBagConfig(
name=f"t_{feature_name}",
embedding_dim=self.model_config.embedding_dim,
num_embeddings=(
self.model_config.num_embeddings_per_feature[feature_idx]
if self.model_config.num_embeddings is None
else self.model_config.num_embeddings
),
feature_names=[feature_name],
)
for feature_idx, feature_name in enumerate(
self.model_config.id_list_features_keys
)
]
ebc = EmbeddingBagCollection(tables=eb_configs, device=torch.device("meta"))
module = DLRMPredictModule(
embedding_bag_collection=ebc,
dense_in_features=self.model_config.dense_in_features,
dense_arch_layer_sizes=self.model_config.dense_arch_layer_sizes,
over_arch_layer_sizes=self.model_config.over_arch_layer_sizes,
id_list_features_keys=self.model_config.id_list_features_keys,
dense_device=device,
)
quant_model = quantize_inference_model(module)
sharded_model, _ = shard_quant_model(
quant_model, compute_device=device, sharding_device=device
)
batch = {}
batch["float_features"] = self.model_config.sample_input.dense_features.to(
device
)
batch["id_list_features.lengths"] = (
self.model_config.sample_input.sparse_features.lengths().to(device)
)
batch["id_list_features.values"] = (
self.model_config.sample_input.sparse_features.values().to(device)
)
sharded_model(batch)
aot_compile_options = {
"aot_inductor.output_path": os.path.join(os.getcwd(), "dlrm_pt2.so"),
}
#with torch.no_grad():
with torch.inference_mode():
exported_program = torch.export.export(
sharded_model,
(batch,),
strict=False,
)
so_path = torch._inductor.aot_compile(
exported_program.module(),
(batch,),
# Specify the generated shared library path
options=aot_compile_options
)
def batching_metadata(self) -> Dict[str, str]:
return {
"float_features": "dense",
"id_list_features": "sparse",
}
def result_metadata(self) -> str:
return "dict_of_tensor"
def run_weights_independent_tranformations(
self, predict_module: torch.nn.Module
) -> torch.nn.Module:
return predict_module
def run_weights_dependent_transformations(
self, predict_module: torch.nn.Module
) -> torch.nn.Module:
"""
Run transformations that depend on the weights of the predict module, e.g. lowering to a backend.
"""
return predict_module
def main(argv: List[str]) -> None:
"""
Export and AOT-compile the torchrec DLRM model.
Args:
argv (List[str]): command line args.
Returns:
None.
"""
args = parse_args(argv)
args.batch_size = 10
args.num_embedding_features = 26
batch = create_training_batch(args)
register_fake_classes()
model_config = DLRMModelConfig(
dense_arch_layer_sizes=list(map(int, args.dense_arch_layer_sizes.split(","))),
dense_in_features=args.num_dense_features,
embedding_dim=args.embedding_dim,
id_list_features_keys=args.sparse_feature_names.split(","),
num_embeddings_per_feature=list(
map(int, args.num_embeddings_per_feature.split(","))
),
num_embeddings=args.num_embeddings,
over_arch_layer_sizes=list(map(int, args.over_arch_layer_sizes.split(","))),
sample_input=batch,
)
DLRMPredictFactory(model_config).create_predict_module(world_size=1, device="cuda")
if __name__ == "__main__":
main(sys.argv[1:])
```
## Error Logs
```
/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/fx/graph.py:1794: UserWarning: Node _torchbind_obj0 target _torchbind_obj0 _torchbind_obj0 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
warnings.warn(
/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/export/_unlift.py:63: UserWarning: Attempted to insert a get_attr Node with no underlying reference in the owning GraphModule! Call GraphModule.add_submodule to add the necessary submodule, GraphModule.add_parameter to add the necessary Parameter, or nn.Module.register_buffer to add the necessary buffer
getattr_node = gm.graph.get_attr(lifted_node)
/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/fx/graph.py:1794: UserWarning: Node _module_sparse_arch_embedding_bag_collection_tbes_0_lxu_cache_locations_list target _module.sparse_arch.embedding_bag_collection.tbes.0.lxu_cache_locations_list lxu_cache_locations_list of _module.sparse_arch.embedding_bag_collection.tbes.0 does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
warnings.warn(
/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/fx/graph.py:1794: UserWarning: Node _module_sparse_arch_embedding_bag_collection_tbes_0_weights_dev target _module.sparse_arch.embedding_bag_collection.tbes.0.weights_dev weights_dev of _module.sparse_arch.embedding_bag_collection.tbes.0 does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
warnings.warn(
/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/fx/graph.py:1794: UserWarning: Node _module_sparse_arch_embedding_bag_collection_tbes_0_weights_uvm target _module.sparse_arch.embedding_bag_collection.tbes.0.weights_uvm weights_uvm of _module.sparse_arch.embedding_bag_collection.tbes.0 does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
warnings.warn(
/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/fx/graph.py:1794: UserWarning: Node _module_sparse_arch_embedding_bag_collection_tbes_0_weights_placements target _module.sparse_arch.embedding_bag_collection.tbes.0.weights_placements weights_placements of _module.sparse_arch.embedding_bag_collection.tbes.0 does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
warnings.warn(
/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/fx/graph.py:1794: UserWarning: Node _module_sparse_arch_embedding_bag_collection_tbes_0_weights_offsets target _module.sparse_arch.embedding_bag_collection.tbes.0.weights_offsets weights_offsets of _module.sparse_arch.embedding_bag_collection.tbes.0 does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
warnings.warn(
/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/fx/graph.py:1794: UserWarning: Node _torchbind_obj0 target _torchbind_obj0 _torchbind_obj0 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
warnings.warn(
/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/fx/graph.py:1794: UserWarning: Node _torchbind_obj0 target _torchbind_obj0 _torchbind_obj0 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
warnings.warn(
/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/fx/graph.py:1794: UserWarning: Node _torchbind_obj0 target _torchbind_obj0 _torchbind_obj0 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
warnings.warn(
/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/_inductor/compile_fx.py:222: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting `torch.set_float32_matmul_precision('high')` for better performance.
warnings.warn(
/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/fx/graph.py:1794: UserWarning: Node _torchbind_obj0 target _torchbind_obj0 _torchbind_obj0 of does not reference an nn.Module, nn.Parameter, or buffer, which is what 'get_attr' Nodes typically target
warnings.warn(
Traceback (most recent call last):
File "/home/agunapal/torchrec_aoti/dlrm_aoti_error.py", line 375, in <module>
main(sys.argv[1:])
File "/home/agunapal/torchrec_aoti/dlrm_aoti_error.py", line 370, in main
DLRMPredictFactory(model_config).create_predict_module(world_size=1, device="cuda")
File "/home/agunapal/torchrec_aoti/dlrm_aoti_error.py", line 308, in create_predict_module
so_path = torch._inductor.aot_compile(
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/_inductor/__init__.py", line 211, in aot_compile
return compile_fx_aot(
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1246, in compile_fx_aot
compiled_lib_path = compile_fx(
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1427, in compile_fx
return compile_fx(
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1469, in compile_fx
return compile_fx(
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1700, in compile_fx
return inference_compiler(unlifted_gm, example_inputs_)
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1524, in fw_compiler_base
return _fw_compiler_base(model, example_inputs, is_inference)
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 1593, in _fw_compiler_base
return inner_compile(
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 587, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/_dynamo/repro/after_aot.py", line 100, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 754, in _compile_fx_inner
compiled_graph = codegen_and_compile(
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 651, in codegen_and_compile
compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 943, in fx_codegen_and_compile
graph.run(*example_inputs)
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/_inductor/graph.py", line 827, in run
return super().run(*args)
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/fx/interpreter.py", line 167, in run
self.env[node] = self.run_node(node)
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1422, in run_node
result = super().run_node(n)
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/fx/interpreter.py", line 228, in run_node
return getattr(self, n.op)(n.target, args, kwargs)
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1067, in call_function
raise LoweringException(e, target, args, kwargs).with_traceback(
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/_inductor/graph.py", line 1064, in call_function
out = lowerings[target](*args, **kwargs) # type: ignore[index]
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/_inductor/lowering.py", line 401, in wrapped
out = decomp_fn(*args, **kwargs)
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/_inductor/lowering.py", line 6463, in with_effects
result = ir.EffectfulKernel.create(op, *args, **kwargs)
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/_inductor/ir.py", line 6386, in create
device = cls.find_device(tensor_args, example_output)
File "/home/agunapal/anaconda3/envs/torchrec/lib/python3.10/site-packages/torch/_inductor/ir.py", line 6201, in find_device
return devices[0]
torch._inductor.exc.LoweringException: IndexError: list index out of range
target: with_effects
args[0]: TensorBox(StorageBox(
MultiOutput(
python_kernel_name=None,
name=buf1,
layout=FixedLayout('cpu', torch.float32, size=[0], stride=[1]),
inputs=[FallbackKernel(
python_kernel_name='torch.ops.prims._make_token.default',
name=buf0,
layout=MultiOutputLayout(device=device(type='cpu')),
inputs=[],
constant_args=(),
kwargs={},
output_view=None,
python_kernel_name=torch.ops.prims._make_token.default,
cpp_kernel_name=prims::_make_token,
ordered_kwargs_for_cpp_kernel=[],
op_overload=prims._make_token.default,
arg_properties=[],
kwarg_properties=None,
unbacked_bindings=None,
mutation_outputs=[],
origin_node=_make_token_default,
origins=OrderedSet([_make_token_default])
)],
constant_args=(),
kwargs={},
output_view=None,
python_kernel_name=None,
cpp_kernel_name=None,
ordered_kwargs_for_cpp_kernel=(),
op_overload=None,
arg_properties=[{}],
kwarg_properties=None,
unbacked_bindings={},
mutation_outputs=[],
origin_node=_make_token_default,
origins=OrderedSet([_make_token_default])
)
))
args[1]: call_torchbind
args[2]: TorchBindObject(name='_torchbind_obj0', value=<torch.ScriptObject object at 0x561e262c5410>)
args[3]: pop
```
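For context, the last frames of the traceback return `devices[0]` from a `find_device`-style helper; an effectful torchbind call like `pop` contributes no device-bearing tensor arguments, so the list is empty. An illustrative sketch of that failure mode (not Inductor's actual code; the dict-based arguments are stand-ins):

```python
# Illustrative only: a helper that assumes at least one argument carries a
# device, mirroring ir.EffectfulKernel.create -> find_device in the traceback.
def find_device(tensor_args):
    devices = [t["device"] for t in tensor_args if t.get("device") is not None]
    return devices[0]  # IndexError when no argument has a device

try:
    # A torchbind effect like `queue.pop()` has no device-bearing tensors.
    find_device([])
except IndexError as e:
    print("IndexError:", e)  # prints "IndexError: list index out of range"
```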
### Versions
```
Collecting environment information...
PyTorch version: 2.6.0.dev20241104+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: version 3.30.5
Libc version: glibc-2.34
Python version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.12.0-0_fbk16_zion_7661_geb00762ce6d2-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA PG509-210
GPU 1: NVIDIA PG509-210
GPU 2: NVIDIA PG509-210
GPU 3: NVIDIA PG509-210
GPU 4: NVIDIA PG509-210
GPU 5: NVIDIA PG509-210
GPU 6: NVIDIA PG509-210
GPU 7: NVIDIA PG509-210
Nvidia driver version: 525.105.17
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.8.0
/usr/lib64/libcudnn.so.9.1.0
/usr/lib64/libcudnn_adv.so.9.1.0
/usr/lib64/libcudnn_adv_infer.so.8.8.0
/usr/lib64/libcudnn_adv_train.so.8.8.0
/usr/lib64/libcudnn_cnn.so.9.1.0
/usr/lib64/libcudnn_cnn_infer.so.8.8.0
/usr/lib64/libcudnn_cnn_train.so.8.8.0
/usr/lib64/libcudnn_engines_precompiled.so.9.1.0
/usr/lib64/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib64/libcudnn_graph.so.9.1.0
/usr/lib64/libcudnn_heuristic.so.9.1.0
/usr/lib64/libcudnn_ops.so.9.1.0
/usr/lib64/libcudnn_ops_infer.so.8.8.0
/usr/lib64/libcudnn_ops_train.so.8.8.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 192
On-line CPU(s) list: 0-191
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8339HC CPU @ 1.80GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 4
Stepping: 11
Frequency boost: enabled
CPU(s) scaling MHz: 100%
CPU max MHz: 1801.0000
CPU min MHz: 800.0000
BogoMIPS: 3600.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 3 MiB (96 instances)
L1i cache: 3 MiB (96 instances)
L2 cache: 96 MiB (96 instances)
L3 cache: 132 MiB (4 instances)
NUMA node(s): 4
NUMA node0 CPU(s): 0-23,96-119
NUMA node1 CPU(s): 24-47,120-143
NUMA node2 CPU(s): 48-71,144-167
NUMA node3 CPU(s): 72-95,168-191
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.1.105
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241104+cu121
[pip3] torchmetrics==1.0.3
[pip3] torchrec==1.1.0a0+de7e041
[pip3] torchx==0.7.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0.dev20241104+cu121 pypi_0 pypi
[conda] torchmetrics 1.0.3 pypi_0 pypi
[conda] torchrec 1.1.0a0+de7e041 dev_0 <develop>
[conda] torchx 0.7.0 pypi_0 pypi
```
cc @ezyang @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 | oncall: pt2,oncall: export,module: aotinductor | low | Critical |
2,638,974,419 | deno | Add JS binding for wgpu's start_capture and stop_capture | I would like to use Deno to profile WebGPU compute workloads within Xcode without any need for graphics. Currently, to capture a Metal trace within Xcode for a Deno workload, I'm having to create a window and canvas, invoke my compute workload, and then present the canvas, and then I can (manually) capture the Metal layer and see my compute-workload profile. This is not ideal; I don't want or need graphics at all. I'd like to do this at the command line and/or completely headless.
I believe I can do this if I have JS bindings to the [wgpu calls `start_capture` and `stop_capture`](https://docs.rs/wgpu/latest/wgpu/struct.Device.html#method.start_capture). With JS bindings to those calls, I can then start and stop capture around my compute workload, then run within Xcode and see profile information.
What would make this even better is if I could write that trace to a file (and this was exposed in JS). Metal has a [`MTLCaptureManager`](https://developer.apple.com/documentation/metal/mtlcapturemanager) and https://github.com/gfx-rs/wgpu/pull/3504 appears relevant here.
Also please note https://github.com/gfx-rs/wgpu/issues/6255, which notes Metal backend device capture issues in wgpu.
cc: @raphlinus
| suggestion,webgpu | low | Minor |
2,638,982,132 | animate.css | [FEATURE] Combining multiple classes | ### Is your feature request related to a problem? Please describe.
It would be great if multiple animation classes could be combined into a sequence.
### Describe the solution you'd like.
```
<h1 class="animate__animated animate__backInLeft
pause-0.5
animate__bounce
pause-1.0
animate__backOutRight
">
An animated element
</h1>
```
### Describe alternatives you've considered.
n/a
### Additional Context
_No response_ | feature request | low | Minor |
2,638,994,003 | TypeScript | "Private identifiers are only available when"... should not occur in .d.ts files | See https://github.com/microsoft/TypeScript/issues/60427#issuecomment-2460357068
| Bug,Help Wanted | low | Minor |
2,638,995,266 | flutter | [ios][platform_view]Fix weird implementation of gesture recognizer delegate in FlutterDelayingGestureRecognizer | ### Use case
Circling back on [this PR](https://github.com/flutter/engine/pull/55724), which got reverted; the odd implementation we found is still in place:
1. `shouldBeRequiredToFailByGestureRecognizer` checks `otherGestureRecognizer != self`, which is always YES, since we set `self` (the delaying recognizer) as the delegate.
2. `shouldRequireFailureOfGestureRecognizer` checks `otherGestureRecognizer == self`, which is always NO for the same reason. The PR changed it to `otherGestureRecognizer == _forwardingRecognizer`, however, it makes one of our customer's app not responding to touch completely.
### Proposal
We might as well fix this logic error as a cleanup.
In the long term, if we are able to get https://github.com/flutter/flutter/issues/157080 done, we can remove the two recognizers altogether. I am still curious how the reverted PR broke one of our customer's apps, though, as that could help us identify missing corner cases for testing purposes.
| platform-ios,a: platform-views,P2,c: tech-debt,team-ios,triaged-ios | low | Critical |
2,638,995,777 | TypeScript | "Bigint literals are not available when targeting..." error should not occur in .d.ts files | See https://github.com/microsoft/TypeScript/issues/60427#issuecomment-2460357068 | Bug,Help Wanted | low | Critical |
2,639,032,033 | go | x/tools/go/{packages/packagestest,expect}: deprecate, tag, and delete | - [golang.org/x/tools/go/packages/packagestest](https://golang.org/x/tools/go/packages/packagestest)
- [golang.org/x/tools/go/expect](https://golang.org/x/tools/go/expect)
**Background:** These packages, designed for use in tests in x/tools, were published without proper deliberation, so any changes to their public APIs must go through the proposal process. There are dozens of improvements we (in x/tools) would like to have made that are simply too costly as a result. However, these packages are almost never used outside x/tools. Of the couple dozen [imports reported by pkg.go.dev](https://pkg.go.dev/golang.org/x/tools/go/packages/packagestest?tab=importedby), nearly all are in repos that are forks of x/tools. The only real one is k8s.io.
**Proposal:** We propose to fork these packages to an internal subtree, and then to tag and delete the public packages using the same process as https://github.com/golang/go/issues/59676. | Proposal-Accepted,Tools | low | Major |
2,639,046,401 | pytorch | Both `input` and `target` arguments, or at least the `target` argument, of `BCELoss()` should accept values outside `0 <= x <= 1` so that `BCELoss()` with `torch.sigmoid()` and `nn.Sigmoid()` gets the same results as `BCEWithLogitsLoss()`. | ### 🚀 The feature, motivation and pitch
I can use values in `0 <= x <= 1` for both the `input` and `target` arguments of [BCEWithLogitsLoss()](https://pytorch.org/docs/main/generated/torch.nn.BCEWithLogitsLoss.html) and [BCELoss()](https://pytorch.org/docs/main/generated/torch.nn.BCELoss.html) with [torch.sigmoid()](https://pytorch.org/docs/stable/generated/torch.sigmoid.html) and [nn.Sigmoid()](https://pytorch.org/docs/stable/generated/torch.nn.Sigmoid.html) to get the same results, as shown below:
```python
import torch
from torch import nn
tensor1 = torch.tensor([0.4, 0.8, 0.6, 0.3, 0.0, 0.5])
tensor2 = torch.tensor([0.2, 0.9, 0.4, 0.1, 0.8, 0.5])
bcelogits = nn.BCEWithLogitsLoss()
bcelogits(input=tensor1, target=tensor2)
# tensor(0.7205)
bceloss = nn.BCELoss()
bceloss(input=torch.sigmoid(input=tensor1), target=tensor2)
# tensor(0.7205)
sigmoid = nn.Sigmoid()
bceloss = nn.BCELoss()
bceloss(input=sigmoid(input=tensor1), target=tensor2)
# tensor(0.7205)
```
Then, I can also use values outside `0 <= x <= 1` for both the `input` and `target` arguments of `BCEWithLogitsLoss()`, but not of `BCELoss()`: applying `torch.sigmoid()` or `nn.Sigmoid()` only to `input` raises an error, and applying it to both `input` and `target` gives results that differ from `BCEWithLogitsLoss()`, as shown below:
```python
import torch
from torch import nn
tensor1 = torch.tensor([8., -3., 0., 1., 5., -2.])
tensor2 = torch.tensor([-3., 7., 4., -2., -9., 6.])
bcelogits = nn.BCEWithLogitsLoss()
bcelogits(input=tensor1, target=tensor2)
# tensor(19.8648)
bceloss = nn.BCELoss()
bceloss(input=torch.sigmoid(input=tensor1), target=tensor2) # Error
bceloss = nn.BCELoss()
bceloss(input=torch.sigmoid(input=tensor1), target=torch.sigmoid(input=tensor2))
# tensor(3.2804)
sigmoid = nn.Sigmoid()
bceloss = nn.BCELoss()
bceloss(input=sigmoid(input=tensor1), target=tensor2) # Error
bceloss = nn.BCELoss()
bceloss(input=sigmoid(input=tensor1), target=sigmoid(input=tensor2))
# tensor(3.2804)
```
> RuntimeError: all elements of target should be between 0 and 1
### Alternatives
So, both the `input` and `target` arguments, or at least the `target` argument, of `BCELoss()` should accept values outside `0 <= x <= 1` so that `BCELoss()` with `torch.sigmoid()` and `nn.Sigmoid()` gets the same results as `BCEWithLogitsLoss()`.
### Additional context
_No response_
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: nn,module: loss,triaged | low | Critical |
2,639,076,746 | terminal | [Terminal Chat Setup] App link like ms-terminal-can://... doesn't launch Terminal | ### Windows Terminal version
1.23.3091.0, 1.23.3101.0
### Windows build number
10.0.22631.0
### Other Software
_No response_
### Steps to reproduce
1. Launch Settings
2. Navigate to Terminal Chat (Experimental)
3. Expand GitHub Copilot under Service Providers
4. Click Authenticate via GitHub
5. Authenticate and approve access to GitHub account in browser
6. Click "Open Terminal Canary" in resulting browser modal dialog
### Expected Behavior
Windows Terminal Canary should launch and receive the auth token from GitHub to complete GitHub Copilot setup for Terminal Chat.
### Actual Behavior
Windows Terminal Canary does not launch. This does not work from the default browser (Chrome) nor from Edge. | Issue-Bug,Product-Terminal,Needs-Tag-Fix,Area-Chat | medium | Major |
2,639,083,472 | TypeScript | "Go to Symbol in Workspace…" but with symbols in dependencies, or equivalent functionality | ### 🔍 Search Terms
typescript, symbol, go to symbol in workspace, dependencies
### ✅ Viability Checklist
- [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [x] This wouldn't change the runtime behavior of existing JavaScript code
- [x] This could be implemented without emitting different JS based on the types of the expressions
- [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
When using the "Go to Symbol in Workspace…" feature in VS Code, only symbols within my project are shown. I think it should show symbols in my project _and_ my dependencies, or at the very least there should be a setting to toggle showing dependency symbols.
I previously thought this was a bug but was told this was intentional (#59551) so I'm opening a formal feature request.
### 📃 Motivating Example
I often want to use "Go to Symbol in Workspace…" to look up symbols in Node, TS, VS Code, or other dependencies, as I work. I actually rarely use it to look up my own symbols, I instead use ⌘P and jump to the relevant file.
### 💻 Use Cases
## What do you want to use this for?
I want to use it for browsing the APIs of my dependencies, such as whether a given API even exists.
## What shortcomings exist with current approaches?
It is more tedious to type out the desired type and ⌘-click it.
Sometimes I want to look up a symbol to see if it even exists before I type it.
## What workarounds are you using in the meantime?
I type out the symbol and ⌘-click it. | Needs More Info | low | Critical |
2,639,087,654 | TypeScript | Show deprecation warnings on implementations of a deprecated property | ### 🔎 Search Terms
- Deprecated implementation
- Deprecated interface
Related https://github.com/microsoft/typescript/issues/57584
### 🕗 Version & Regression Information
Not a regression
### ⏯ Playground Link
_No response_
### 💻 Code
```ts
type I = {
/**
* @deprecated
*/
text: string;
};
function f(i: I) { return i; }
f({ text: 'a' });
const a: I = { text: 'a' }
a.text;
```
### 🙁 Actual behavior
Currently only the last use of `a.text` shows as deprecated
### 🙂 Expected behavior
It would be helpful to also render deprecations for any implementations of `I.text`, as these also use the deprecated property
### Additional information about the issue
_No response_ | Suggestion,Experience Enhancement | low | Major |
2,639,110,297 | kubernetes | Job: Consider to limit the number of goroutine workers in parallel executions | ### What would you like to be added?
We would like to consider the possibility of limiting the goroutine workers in the following logics:
https://github.com/kubernetes/kubernetes/blob/2caf4eddd8fc1ab7236ed608c1b548404dbc6bcf/pkg/controller/job/job_controller.go#L1076-L1079
https://github.com/kubernetes/kubernetes/blob/2caf4eddd8fc1ab7236ed608c1b548404dbc6bcf/pkg/controller/job/job_controller.go#L1127-L1130
https://github.com/kubernetes/kubernetes/blob/2caf4eddd8fc1ab7236ed608c1b548404dbc6bcf/pkg/controller/job/job_controller.go#L1426-L1429
https://github.com/kubernetes/kubernetes/blob/2caf4eddd8fc1ab7236ed608c1b548404dbc6bcf/pkg/controller/job/job_controller.go#L1727-L1736
### Why is this needed?
As we discussed in https://github.com/kubernetes/kubernetes/pull/128513#discussion_r1829829553, the current JobController launches one goroutine worker per Pod for deletion and creation operations against owned Pods.
This increases JobController throughput, but only as far as the APIServer can keep up.
On the other hand, APIServer seats are a limited resource, not an unlimited one.
So, when a Job has 1k or 10k parallelism (`.spec.parallelism`) and completions (`.spec.completions`), the JobController can run into API Priority and Fairness throttling; the kube-controller-manager might then be locked out, and its WRITE requests may never succeed.
To mitigate this, cluster administrators could reserve kube-controller-manager-specific seats via APF settings themselves.
But by default, we might want a minimum safeguard, such as limiting the number of workers to 100 at the JobController level.
/sig apps
| kind/feature,sig/apps,needs-triage | low | Major |
2,639,150,683 | neovim | feat(defaults): add ]p and [p linewise mappings from unimpaired-vim | ### Problem
Since https://github.com/neovim/neovim/pull/28525 was merged, I've removed https://github.com/tpope/vim-unimpaired from my configuration.
I miss nothing except for `]p` and `[p`. Using these, I can force something I've yanked to paste "on its own line", even when the yanked text is not a full line.
Precedent for other additions:
- #30984
- #30943
cc @gpanders
### Expected behavior
I'd like those to be present.
Counterpoint: it seems like https://github.com/tummetott/unimpaired.nvim does not have `]p`/`[p`. Maybe that means that few people use it.
These mappings being missing from unimpaired.nvim also means I haven't figured out an effective/short way to define these myself to tide me over. Help from a (n)vim master is welcome. | enhancement,defaults | low | Minor |
2,639,194,804 | flutter | [pigeon] Consider removing Java generator | (Filing to track and centralize discussion, since this has come up in several discussions.)
The number of generators in Pigeon is an issue for ongoing development (e.g., event channel support), and we are currently maintaining two generators—Java and Kotlin—for the use case of Android plugin development. Ideally we would only maintain one, and Kotlin has a number of advantages (true nullability support, and closer similarity to the other modern languages in use here, most notably Dart and Swift).
In favor of turning it down:
- Java/Kotlin interop is pretty seamless; in my experiment with some Flutter-team-owned plugins, I could replace the Java generated code with Kotlin generated code with almost no changes to non-generated code.
- It gives us stronger type safety all the way to the client-authored code boundary, so there's some incremental value in switching.
Impediments to turning it down:
- We are not using Kotlin in our own plugins currently, and don't have plans to switch in the near term, so we would be introducing an extra language (although not one we would need to interact with regularly as it's generated).
- Adding a Kotlin dependency to a plugin that didn't previously have one can affect plugin clients, and increase the incidence of ecosystem issues like Kotlin version mismatches. | platform-android,package,team-ecosystem,p: pigeon,P2,triaged-ecosystem | low | Minor |
2,639,194,877 | flutter | [pigeon] Consider removing Obj-C generator | (Filing to track and centralize discussion, since this has come up in several discussions.)
The number of generators in Pigeon is an issue for ongoing development (e.g., event channel support), and we are currently maintaining two generators—Objective-C and Swift—for the use case of iOS/macOS plugin development. Ideally we would only maintain one, which given the direction of iOS and macOS development would be Swift.
In favor of turning it down:
- We're actively working on [deprecating explicit support for Obj-C in Flutter tooling](https://github.com/flutter/flutter/issues/148586).
- We [want to be using Swift for our own Plugins](https://github.com/flutter/flutter/issues/119015), at which point we would not have any actual first-party usage.
Impediments to turning it down:
- Swift/Obj-C interop is non-trivial; in particular, it doesn't seem to be possible to implement a Swift interface in Obj-C, so at least the top level code (the plugin class, or code factored out of it in cases where that's the whole implementation) would need to be converted to Swift as part of switching to the Swift Pigeon generator. That's work we want to do eventually, but isn't currently a priority.
- This would apply to third parties as well; very crude GitHub search (e.g., no fork de-duping) suggests a majority of iOS Pigeon clients are using Obj-C rather than Swift, but not an overwhelming majority. Of course, they could always keep using older versions of Pigeon until they had time to transition.
- Clients would be forced to write/maintain non-generated Swift code, which is not ideal for plugin maintainers who want to use Obj-C based on their own experience. | platform-ios,platform-mac,package,team-ecosystem,p: pigeon,P2,triaged-ecosystem | low | Minor |
2,639,198,108 | flutter | Get Android 35 Emulator Tests working with SwANGLE | This blocks: https://github.com/flutter/flutter/issues/152374
Here is a sample failing build: https://ci.chromium.org/ui/p/flutter/builders/try/Linux_android_emu%20android%20views/11825/infra
```
W1106 11:30:56.306128 144389 VkCommonOperations.cpp:1022] Selecting Vulkan device: SwiftShader Device (Subzero), Version: 1.3.0
initialize: Supports id properties, got a vulkan device UUID
No protocol specified
No protocol specified
E1106 11:30:56.414060 144389 EmulationGl.cpp:94] Failed to find exactly 1 GLES 2.x config: found 0.
E1106 11:30:56.414084 144389 EmulationGl.cpp:309] Failed to validate creating GLES 2.x context.
E1106 11:30:56.414116 144389 FrameBuffer.cpp:345] Failed to initialize GL emulation.
E1106 11:30:56.415569 144389 RendererImpl.cpp:147] Could not initialize emulated framebuffer
```
For this PR: https://github.com/flutter/flutter/pull/158017/files
My understanding is that with SwANGLE, there is a translation pipeline of:
GLES -> Vulkan (ANGLE)
Vulkan -> CPU software rendering (SwiftShader)
Also, only test engine targets compile swiftshader support, per:
```bash
$ grep -rnI 'swift.*shader' --include='BUILD.gn' | grep -v 'third_party\/'
display_list/testing/BUILD.gn:37:# causes linkage problems with swiftshader.
testing/BUILD.gn:222:use_swiftshader = enable_unittests && shell_enable_gl
testing/BUILD.gn:223:if (use_swiftshader) {
impeller/playground/BUILD.gn:22: "backend/vulkan/swiftshader_utilities.cc",
impeller/playground/BUILD.gn:23: "backend/vulkan/swiftshader_utilities.h",
``` | team-infra,P1,triaged-infra | medium | Critical |
2,639,199,283 | deno | Deno LSP : handling of code actions "context.only" may be too restrictive. | Version: Deno 2.0.5
Hello,
While investigating the following issue, https://github.com/zed-industries/zed/issues/20312, I encountered a potential inconsistency in how the LSP handles the code action context.
For context, an LSP client can use the `only` field to specify the code action kinds it is interested in.
https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#codeActionContext
The Zed editor uses it, basing the list on the capabilities advertised by the server, but any editor could send it as well.
Currently the LSP only passes the first `only` item as the TypeScript server's `kind` parameter:
https://github.com/denoland/deno/blob/b3a3d84ce249ff126f92e7a0849ec0a6ce26e973/cli/lsp/language_server.rs#L1739..L1744
For instance, if the list is `["quickfix", "refactor"]`, only "quickfix" actions can be provided, which is not what is expected, because it inhibits refactor actions.
It seems that the handling of `only` is too restrictive in that case.
One possible approach, albeit less performant, would be not to pass the `kind` parameter and perform post-filtering on the actions provided by the ts_server.
WDYT? Would you accept a PR implementing the proposal above?
| needs investigation,lsp | low | Minor |
2,639,200,063 | PowerToys | Normal typing with space set as the Quick Accent activation key sometimes results in space characters moving one character late. | ### Microsoft PowerToys version
0.86.0
### Installation method
PowerToys auto-update
### Running as admin
No
### Area(s) with issue?
Quick Accent
### Steps to reproduce
With space configured as the activation key, normal typing sometimes results in the space character moving one character late when not intending to invoke quick accent.
This happens constantly for me, both on a laptop keyboard and a ZSA Moonlander keyboard.
### ✔️ Expected Behavior
When I type a sentence I want spaces to be between words.
### ❌ Actual Behavior
Sometimes whent yping, the spacec haracter moves one character late.
### Other Software
_No response_ | Issue-Bug,Priority-2,Product-Quick Accent | low | Major |
2,639,246,440 | go | crypto/tls: interoperability problems between go tls server and microsoft/outlook.com tls (smtp starttls) client | ### Go version
go1.23.2 linux/amd64
### Output of `go env` in your module/workspace:
```shell
n/a
```
### What did you do?
Deploy mox, a mail server, and successfully get incoming email message deliveries from microsoft (outlook.com, both office365 and personal/free accounts) to mox over SMTP with STARTTLS (crypto/tls server).
### What did you see happen?
On October 24 I started receiving "TLS reporting" errors with a "validation failure" error in the "sts" (MTA-STS) section. Up to and including October 23 I received TLS reports with only successful delivery attempts. I investigated, but couldn't find anything wrong. Yesterday I learned message deliveries from microsoft (outlook.com servers) to mox were failing. The TLS reporting error message wasn't precise/clear, but there's a good chance it was about these failing delivery attempts.
The symptoms: I would see an incoming smtp connection, the "starttls" command, and an abrupt close of the connection by remote. Debugging revealed the connection was closed by remote after reading the server-side response to the TLS client hello message, without the remote writing anything in response (EOF while trying to read the first bytes looking for the "client finished" message). During more debugging, I noticed the Go TLS server code sends a session ticket message as part of its response to the client hello message. Setting `tls.Config.SessionTicketsDisabled = true` prevents the new session ticket from being sent, and makes the Microsoft SMTP STARTTLS command, and delivery of messages, succeed.
At https://datatracker.ietf.org/doc/html/rfc8446#section-4.6.1 I noticed:
> At any time after the server has received the client Finished
> message, it MAY send a NewSessionTicket message.
One theory: The Go TLS server is sending the NewSessionTicket message too soon, and Microsoft changed their implementation to be more strict about when it allows certain messages.
This isn't specific to mox. Maddy, another mail server written in Go is also seeing TLS interoperability issues with Microsoft/outlook.com. More details:
https://github.com/mjl-/mox/issues/237
https://github.com/foxcpp/maddy/issues/730
### What did you expect to see?
The Go TLS session ticket may come too early for some other TLS clients. I did not try changing the crypto/tls code to only send a new session ticket message after having read the client finished message. It may be worth trying, to see whether that results in a successful TLS session or hits the same abrupt connection close.
| NeedsInvestigation | low | Critical |
2,639,291,588 | kubernetes | Allow WorkEstimatorConfig to be configured | ### What would you like to be added?
Allow WorkEstimatorConfig, especially `MaximumSeatsLimit`, to be configured.
Currently it's hard-coded to 10 and there's no way for a cluster admin to configure it.
https://github.com/kubernetes/kubernetes/blob/e2bf630940946df5bc161d224e4a9b2e191a3b2e/staging/src/k8s.io/apiserver/pkg/server/config.go#L1010-L1012
https://github.com/kubernetes/kubernetes/blob/e2bf630940946df5bc161d224e4a9b2e191a3b2e/staging/src/k8s.io/apiserver/pkg/util/flowcontrol/request/config.go#L27
### Why is this needed?
It's fairly common for a list request to be more than 10X as expensive as a single-object operation such as get, update, or create.
This underestimation can cause the APIServer to admit more list requests and increase OOM risk; we don't penalize list requests enough.
If there were a knob to configure this maximumSeatsLimit to a higher value, such as 20 or more, it would help the APIServer penalize and throttle expensive list requests.
2,639,307,358 | vscode | Notebook `Add Find Match to Selection` not set up for cmd palette | ## Environment data
- VS Code version: 1.95.1
- Jupyter Extension version (available under the Extensions sidebar): v2024.10.0
- Python Extension version (available under the Extensions sidebar): v2024.18.0
- OS (Windows | Mac | Linux distro) and version: Windows 10
- Python: 3.13
- Type of virtual environment used (N/A | venv | virtualenv | conda | ...): XXX
- Jupyter server running: Local
## Expected behaviour
In a jupyter notebook, executing "Add Selection to Next Find Match" in the command palette (multiple times) selects the next occurrence of a variable until the end of the notebook.
## Actual behaviour
"Add Selection to Next Find Match" selects the next occurrence of a variable until the end of the code cell.
There is also "Notebook: Add Find Match to Selection" in the command palette, but I don't know what this should do. Nothing happens when I execute it.
| feature-request,polish,notebook-cell-editor | low | Major |
2,639,320,896 | stable-diffusion-webui | [Bug]: Access denied reading models sub folders | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
In Windows 11 mapping drives as paths to sub folders of the models folder causes access denied and fails to load.
### Steps to reproduce the problem
Taken from - https://learn.microsoft.com/en-us/windows-server/storage/disk-management/assign-a-mount-point-folder-path-to-a-drive
In the search box on the taskbar, enter Computer Management, and select Disk Management.
Choose the partition or volume that has the folder you want to mount the drive.
Go to Action > All Tasks > Change Drive Letter and Paths, then choose Add.
Select Mount in the following empty NTFS folder option.
Select the Browse button to locate the folder; at this point, select a subfolder of the models folder,
for example models/Stable-diffusion/hdd1
After you select the folder, choose select OK.
Select OK in the Change Drive Letter and Paths dialog box to finish.
run webui-user.bat
### What should have happened?
The UI should start normally. Instead, it fails to start:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\launch.py", line 48, in <module>
main()
File "D:\stable-diffusion-webui\launch.py", line 44, in main
start()
File "D:\stable-diffusion-webui\modules\launch_utils.py", line 469, in start
webui.webui()
File "D:\stable-diffusion-webui\webui.py", line 64, in webui
shared.demo = ui.create_ui()
File "D:\stable-diffusion-webui\modules\ui.py", line 494, in create_ui
extra_networks_ui = ui_extra_networks.create_ui(txt2img_interface, [txt2img_generation_tab], 'txt2img')
File "D:\stable-diffusion-webui\modules\ui_extra_networks.py", line 751, in create_ui
page_elem = gr.HTML(page.create_html(tabname, empty=True), elem_id=elem_id)
File "D:\stable-diffusion-webui\modules\ui_extra_networks.py", line 621, in create_html
pane_content = self.pane_content_dirs_tpl.format(**page_params, dirs_html=self.create_dirs_view_html(tabname))
File "D:\stable-diffusion-webui\modules\ui_extra_networks.py", line 532, in create_dirs_view_html
is_empty = len(os.listdir(x)) == 0
PermissionError: [WinError 5] Access is denied: 'D:\\stable-diffusion-webui\\models\\Stable-diffusion\\HDD1\\System Volume Information'
Loading VAE weights specified in settings: D:\stable-diffusion-webui\models\VAE\sdxl_vae_fix.safetensors
Applying attention optimization: xformers... done.
Model loaded in 9.1s (load weights from disk: 2.8s, create model: 2.8s, apply weights to model: 2.9s, load VAE: 0.1s, move model to device: 0.1s, calculate empty prompt: 0.2s).
Press any key to continue . . .
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo.txt](https://github.com/user-attachments/files/17653352/sysinfo.txt)
### Console logs
```Shell
Already up to date.
venv "D:\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments: --xformers
D:\stable-diffusion-webui\venv\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
CHv1.8.11: Get Custom Model Folder
Loading weights [ee9e02e512] from D:\stable-diffusion-webui\models\Stable-diffusion\HDD1\Pony\realisticPonyPhoto_v10.safetensors
Creating model from config: D:\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
D:\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:797: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
CHv1.8.11: Set Proxy:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\launch.py", line 48, in <module>
main()
File "D:\stable-diffusion-webui\launch.py", line 44, in main
start()
File "D:\stable-diffusion-webui\modules\launch_utils.py", line 469, in start
webui.webui()
File "D:\stable-diffusion-webui\webui.py", line 64, in webui
shared.demo = ui.create_ui()
File "D:\stable-diffusion-webui\modules\ui.py", line 494, in create_ui
extra_networks_ui = ui_extra_networks.create_ui(txt2img_interface, [txt2img_generation_tab], 'txt2img')
File "D:\stable-diffusion-webui\modules\ui_extra_networks.py", line 751, in create_ui
page_elem = gr.HTML(page.create_html(tabname, empty=True), elem_id=elem_id)
File "D:\stable-diffusion-webui\modules\ui_extra_networks.py", line 621, in create_html
pane_content = self.pane_content_dirs_tpl.format(**page_params, dirs_html=self.create_dirs_view_html(tabname))
File "D:\stable-diffusion-webui\modules\ui_extra_networks.py", line 532, in create_dirs_view_html
is_empty = len(os.listdir(x)) == 0
PermissionError: [WinError 5] Access is denied: 'D:\\stable-diffusion-webui\\models\\Stable-diffusion\\HDD1\\System Volume Information'
Loading VAE weights specified in settings: D:\stable-diffusion-webui\models\VAE\sdxl_vae_fix.safetensors
Applying attention optimization: xformers... done.
Model loaded in 9.1s (load weights from disk: 2.8s, create model: 2.8s, apply weights to model: 2.9s, load VAE: 0.1s, move model to device: 0.1s, calculate empty prompt: 0.2s).
Press any key to continue . . .
```
### Additional information
Adding the following to "ui_extra_networks.py" at line 520 seems to work for now.
```python
xtest = os.path.join(x, "WPSettings.dat")
if not os.access(xtest, os.W_OK):
    continue
```
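A more general variant of that guard (a hypothetical helper, untested against the webui codebase) is to treat any directory that raises `PermissionError` as skippable, instead of probing for a marker file:

```python
import os

def listdir_or_none(path):
    """Return the directory's entries, or None if the directory
    cannot be read (e.g. 'System Volume Information')."""
    try:
        return os.listdir(path)
    except PermissionError:
        return None  # caller should skip this directory entirely
```

With that, the `os.listdir` call at line 532 could become `entries = listdir_or_none(x)` followed by `if entries is None: continue`.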
| bug-report | low | Critical |
2,639,342,378 | terminal | [TerminalChat/AI; Enterprise] Company wide predefine lm provider parameter | ### Description of the new feature
Add a second group policy to set the required ChatAI/LM provider parameters, like the endpoint URI, API key, and so on.
### Use case
A company can push the configuration to its clients instead of telling users (by mail or documentation) how to configure the provider. This is especially useful if everyone should use the same (local/on-premise) LM provider infrastructure.
### Implementation notes
- A string policy accepting a JSON value on one line.
- Maybe not possible because of the encryption of the API key(s).
- Maybe a second policy for the provider that is enabled by default.
- Possible JSON format:
```json
{
"defaultEnabledProvider": "lm-provider-id",
"lm-provider-id": [
"property": "value",
"property": "value"
],
"lm-provider-id2": [
"property": "value",
"property": "value"
]
}
```
### Proposed technical implementation details
_No response_ | Issue-Feature,Product-Terminal,Needs-Tag-Fix,Area-Chat | low | Minor |
2,639,353,230 | material-ui | [core] Adopt react compiler? | ### Summary
Same as https://github.com/mui/base-ui/issues/809.
The action plan would involve:
- [ ] Close #42548
- [ ] Run all the tests on the output of the compiler. I believe we already do this with the Babel optimizations that we have, so it won't be too hard. It's important to guarantee behavior.
- [ ] Publish the source with the output of the compiler
### Motivation
Performance.
Now, to be fair, we likely want to invest time into this for Base UI first, since in Material UI there should be barely anything to optimize (it should be almost only about style). | performance,discussion,core | low | Major |
2,639,372,455 | PowerToys | Mouse Highlighter turns on when power toys is opened | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Mouse Utilities
### Steps to reproduce
Log into my computer or open the PowerToys app.
### ✔️ Expected Behavior
The mouse highlighter shouldn't turn on unless I press the shortcut.
### ❌ Actual Behavior
When logging in or opening PowerToys, the mouse highlighter is on.
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,639,379,485 | terminal | Pass keyboard layout down to ConPTY and consider readding support for GetConsoleKeyboardLayoutName | ### Description of the new feature
To help PowerShell with their `ToUnicode` usage we should pass down the keyboard layout to ConPTY so that their keyboard-layout hack works:
https://github.com/PowerShell/PSReadLine/blob/e87a265ef8d2c6c5498500deb155bf6258b34629/PSReadLine/PlatformWindows.cs#L1106-L1109
To avoid the need for such a workaround, we should re-add support for the existing `GetConsoleKeyboardLayoutName` function. There's not really a good reason not to support it. It's also used by vim and probably others, so it may be quite helpful.
### Proposed technical implementation details
_No response_ | Product-Conhost,Product-Conpty,Area-Server,Issue-Task | low | Minor |
2,639,382,631 | TypeScript | Ukrainian localisation | ### 🔍 Search Terms
"Ukrainian" "localization"
### ✅ Viability Checklist
- [x] This wouldn't be a breaking change in existing TypeScript/JavaScript code
- [x] This wouldn't change the runtime behavior of existing JavaScript code
- [x] This could be implemented without emitting different JS based on the types of the expressions
- [x] This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
- [x] This isn't a request to add a new utility type: https://github.com/microsoft/TypeScript/wiki/No-New-Utility-Types
- [x] This feature would agree with the rest of our Design Goals: https://github.com/Microsoft/TypeScript/wiki/TypeScript-Design-Goals
### ⭐ Suggestion
To everyone's shame there is no Ukrainian localization of TypeScript.
I am willing to contribute **the bulk of the work** and adhere to any specific standards and practices to make it happen. I hope to save Microsoft effort while it gains goodwill.
If your internal tooling has some funky format that's OK, just give me a template for English or even say Spanish and I'll get Ukrainian back to you.
We can also arrange for a linguist professional to look over any concerns, questions and such.
### 📃 Motivating Example
There is a large body and community of Ukrainian software developers using TypeScript every day.
You'll gain tons of goodwill and positive branding from this.
### 💻 Use Cases
1. What do you want to use this for? -- Reduce friction of mental translation for the newcomers to TypeScript.
2. What shortcomings exist with current approaches? -- great approaches, just no Ukrainian locale.
3. What workarounds are you using in the meantime? -- English locale.
| Suggestion,Awaiting More Feedback | low | Major |
2,639,385,891 | rust | `dyn AsyncFn` generates many independent errors | ### Code
```rust
#![feature(async_closure)]
use std::ops::AsyncFn;
async fn foo(x: &dyn AsyncFn()) {
x().await;
}
```
### Current output
```
error[E0038]: the trait `AsyncFnMut` cannot be made into an object
--> src/lib.rs:5:22
|
5 | async fn foo(x: &dyn AsyncFn()) {
| ^^^^^^^^^ `AsyncFnMut` cannot be made into an object
|
note: for a trait to be "dyn-compatible" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit <https://doc.rust-lang.org/reference/items/traits.html#object-safety>
--> /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/async_function.rs:30:10
|
30 | type CallRefFuture<'a>: Future<Output = Self::Output>
| ^^^^^^^^^^^^^ the trait cannot be made into an object because it contains the generic associated type `CallRefFuture`
= help: the following types implement the trait, consider defining an enum where each variant holds one of these types, implementing `AsyncFnMut` for this new enum and using it instead:
&F
&mut F
std::boxed::Box<F, A>
error[E0038]: the trait `AsyncFn` cannot be made into an object
--> src/lib.rs:5:18
|
5 | async fn foo(x: &dyn AsyncFn()) {
| ^^^^^^^^^^^^^ `AsyncFn` cannot be made into an object
|
note: for a trait to be "dyn-compatible" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit <https://doc.rust-lang.org/reference/items/traits.html#object-safety>
--> /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/async_function.rs:30:10
|
30 | type CallRefFuture<'a>: Future<Output = Self::Output>
| ^^^^^^^^^^^^^ the trait cannot be made into an object because it contains the generic associated type `CallRefFuture`
= help: the following types implement the trait, consider defining an enum where each variant holds one of these types, implementing `AsyncFn` for this new enum and using it instead:
&F
std::boxed::Box<F, A>
error[E0038]: the trait `AsyncFn` cannot be made into an object
--> src/lib.rs:6:5
|
6 | x().await;
| ^^^ `AsyncFn` cannot be made into an object
|
note: for a trait to be "dyn-compatible" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit <https://doc.rust-lang.org/reference/items/traits.html#object-safety>
--> /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/async_function.rs:30:10
|
30 | type CallRefFuture<'a>: Future<Output = Self::Output>
| ^^^^^^^^^^^^^ the trait cannot be made into an object because it contains the generic associated type `CallRefFuture`
= help: the following types implement the trait, consider defining an enum where each variant holds one of these types, implementing `AsyncFn` for this new enum and using it instead:
&F
std::boxed::Box<F, A>
error[E0038]: the trait `AsyncFn` cannot be made into an object
--> src/lib.rs:6:9
|
6 | x().await;
| ^^^^^ `AsyncFn` cannot be made into an object
|
note: for a trait to be "dyn-compatible" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit <https://doc.rust-lang.org/reference/items/traits.html#object-safety>
--> /playground/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/async_function.rs:30:10
|
30 | type CallRefFuture<'a>: Future<Output = Self::Output>
| ^^^^^^^^^^^^^ the trait cannot be made into an object because it contains the generic associated type `CallRefFuture`
= help: the following types implement the trait, consider defining an enum where each variant holds one of these types, implementing `AsyncFn` for this new enum and using it instead:
&F
std::boxed::Box<F, A>
For more information about this error, try `rustc --explain E0038`.
error: could not compile `playground` (lib) due to 4 previous errors
```
### Desired output
```
error[E0038]: the trait `AsyncFn` cannot be made into an object
--> src/lib.rs:5:22
|
5 | async fn foo(x: &dyn AsyncFn()) {
| ^^^^^^^^^ `AsyncFn` cannot be made into an object
|
note: `async` closures are not yet usable with dynamic dispatch. See <link for more information>
error: could not compile `playground` (lib) due to 1 previous error
```
### Rationale and extra context
The current error output spits out four related issues where only one is needed. Additionally, none provides quite the right context, and all reference implementation details of `AsyncFn` that users may not be concerned with.
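As an aside (not part of the diagnostic request itself): the usual dyn-compatible workaround today is a plain `Fn` returning a boxed future, since `Fn` with a concrete return type can be made into an object. A minimal sketch:

```rust
use std::future::Future;
use std::pin::Pin;

// Boxing the future erases the per-closure `CallRefFuture` GAT that
// makes `dyn AsyncFn` rejected above.
type BoxFuture = Pin<Box<dyn Future<Output = ()>>>;

fn foo(x: &dyn Fn() -> BoxFuture) {
    // In real async code this future would be `.await`ed.
    let _fut = x();
}

fn main() {
    let f = || -> BoxFuture { Box::pin(async {}) };
    foo(&f);
    println!("ok");
}
```

A single error pointing at this pattern (or a dedicated link) would likely serve users better than the four errors shown.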
### Rust Version
2024-11-05 nightly | A-diagnostics,T-compiler,F-async_closure,D-verbose,A-trait-objects | low | Critical |
2,639,639,333 | ollama | [Feature Request] OpenCL 3.0 support | OpenCL 3.0 looks increasingly well supported by community drivers, including in mobile environments, e.g. the freedreno driver via rusticl, or the zink driver.
OpenCL 3.0 support could open the door to running on mobile devices with postmarketOS, and in some cases to running on top of Vulkan via zink or ANGLE (Android); it could also provide support for Intel GPUs. | feature request | low | Minor |
2,639,455,422 | go | cmd/compile: doesn't inline basic funcs and doesn't optimize code after inlining | ### Go version
go version go1.22.6 darwin/arm64
### Output of `go env` in your module/workspace:
```shell
GO111MODULE=''
GOARCH='arm64'
GOBIN=''
GOCACHE='/Users/xxx/Library/Caches/go-build'
GOENV='/Users/xxx/Library/Application Support/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='arm64'
GOHOSTOS='darwin'
GOINSECURE=''
GOMODCACHE='/Users/xxx/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='darwin'
GOPATH='/Users/xxx/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/opt/homebrew/Cellar/go/1.22.6/libexec'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/opt/homebrew/Cellar/go/1.22.6/libexec/pkg/tool/darwin_arm64'
GOVCS=''
GOVERSION='go1.22.6'
GCCGO='gccgo'
AR='ar'
CC='cc'
CXX='c++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -arch arm64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -ffile-prefix-map=/var/folders/6n/4bnk08l915n2qsvtzm_lmtv40000gn/T/go-build2946791027=/tmp/go-build -gno-record-gcc-switches -fno-common'
```
### What did you do?
I'm trying to add a basic tracing library to our code and found many examples where Go doesn't inline basic cases, so I'm providing minimal, simplified examples here without real tracing code.
### What did you see happen?
### example 1
```
package main
import "fmt"
var level = 0
func StartSpan(name string) {
if level&1 != 0 {
fmt.Println("StartSpan")
}
}
func main() {
StartSpan("span")
}
```
`StartSpan` is not inlined into main() in this example.
`-gcflags -m` says './main.go:9:15: "StartSpan" escapes to heap', though it is unclear what exactly escapes here and why that is a problem for a constant string.
This seems problematic, as tracing frequently needs to pass a variadic number of attributes.
Replacing `fmt.Println()` with a non-variadic function helps in this example.
OK, let's make the compiler inline StartSpan by replacing Println() with a different function, while keeping a variadic KV list:
```
type KV struct{ Key, Value string }

func StartSpan(name string, kv ...KV) {
if level&1 != 0 {
// do smth useless, just need a call to some other func here
fn(name, kv)
}
}
var list []KV
func fn(name string, kv []KV) {
// do smth with kv... e.g. copy elements to some global list
for idx := range kv {
list = append(list, kv[idx])
}
}
func main() {
StartSpan("span", KV{"k1", "v1"}, KV{"k2", "v2"})
}
```
Now `StartSpan` is inlined into main, but its arguments are prepared BEFORE the `if level&1 != 0` check, and this takes a significant amount of code, like below (in real code I actually saw two pages of asm with allocations and a WriteBarrier):
```
main.main STEXT size=432 args=0x0 locals=0xb8 funcid=0x0 align=0x0
...
0x001c 00028 (/tmp/test/main.go:27) STP (ZR, ZR), main..autotmp_6-80(SP)
0x0020 00032 (/tmp/test/main.go:27) STP (ZR, ZR), main..autotmp_6-64(SP)
0x0024 00036 (/tmp/test/main.go:27) STP (ZR, ZR), main..autotmp_6-48(SP)
0x0028 00040 (/tmp/test/main.go:27) STP (ZR, ZR), main..autotmp_6-32(SP)
0x002c 00044 (/tmp/test/main.go:27) MOVD $2, R5
0x0030 00048 (/tmp/test/main.go:27) MOVD R5, main..autotmp_6-72(SP)
0x0034 00052 (/tmp/test/main.go:27) MOVD $go:string."k1"(SB), R6
0x003c 00060 (/tmp/test/main.go:27) MOVD R6, main..autotmp_6-80(SP)
0x0040 00064 (/tmp/test/main.go:27) MOVD R5, main..autotmp_6-56(SP)
0x0044 00068 (/tmp/test/main.go:27) MOVD $go:string."v1"(SB), R6
0x004c 00076 (/tmp/test/main.go:27) MOVD R6, main..autotmp_6-64(SP)
0x0050 00080 (/tmp/test/main.go:27) MOVD R5, main..autotmp_6-40(SP)
0x0054 00084 (/tmp/test/main.go:27) MOVD $go:string."k2"(SB), R6
0x005c 00092 (/tmp/test/main.go:27) MOVD R6, main..autotmp_6-48(SP)
0x0060 00096 (/tmp/test/main.go:27) MOVD R5, main..autotmp_6-24(SP)
0x0064 00100 (/tmp/test/main.go:27) MOVD $go:string."v2"(SB), R5
0x006c 00108 (/tmp/test/main.go:27) MOVD R5, main..autotmp_6-32(SP)
0x0070 00112 (<unknown line number>) NOP
0x0070 00112 (<unknown line number>) PCDATA $0, $-3
while the level flag is checked just here:
0x0070 00112 (/tmp/test/main.go:11) MOVD main.level(SB), R5
0x0078 00120 (/tmp/test/main.go:11) PCDATA $0, $-1
0x0078 00120 (/tmp/test/main.go:11) TBZ $0, R5, 132
...
```
So it seems the compiler is currently unable to detect and reorder code efficiently in situations like this... :/
I believe such situations are pretty common in loggers, tracing, and other typical scenarios, so this deserves optimization.
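Until the compiler can sink that argument construction past the inlined check, a common mitigation in tracing/logging APIs (a sketch under my own naming, not part of the report) is to expose a cheap `Enabled` predicate and gate the call site on it, so the `KV` arguments are never materialized on the disabled path:

```go
package main

import "fmt"

type KV struct{ Key, Value string }

var level = 0

// Enabled is a cheap predicate that callers check before building
// the variadic KV slice, so the arguments are never constructed on
// the common (disabled) path.
func Enabled() bool { return level&1 != 0 }

func StartSpan(name string, kv ...KV) {
	fmt.Println("span:", name, kv)
}

func main() {
	if Enabled() { // KV literals below are only built when tracing is on
		StartSpan("span", KV{"k1", "v1"}, KV{"k2", "v2"})
	}
	fmt.Println("done")
}
```

This mirrors how log-level checks are often hoisted manually, and works today regardless of whether the compiler learns to reorder the inlined code.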
### What did you expect to see?
Efficient inlining of `StartSpan` in both examples, and checking the level variable BEFORE initializing the argument objects.
2,639,455,960 | PowerToys | New+ Custom Icons | ### Description of the new feature / enhancement
Pretty much self-explanatory: right now we have the opening app's icon beside the file, but sometimes a custom icon can make finding the correct file faster.
### Scenario when this would be used?
I use Notepad++ to open both CSS and JSON files, but I would rather use these SVGs:


It should be toggleable between using the opening program's icon and a custom image.
### Supporting information
_No response_ | Idea-Enhancement,Needs-Triage,Needs-Team-Response,Product-New+ | low | Minor |
2,639,494,109 | kubernetes | Add an option for the testserver to reset its metrics when it starts and is torn down | ### What would you like to be added?
Add an option to [TestServer](https://github.com/kubernetes/kubernetes/blob/6399c32669c62cfbf7c33b14b77d6781ce1cce27/cmd/kube-apiserver/app/testing/testserver.go#L117) to reset apiserver related metrics when it [starts](https://github.com/kubernetes/kubernetes/blob/6399c32669c62cfbf7c33b14b77d6781ce1cce27/cmd/kube-apiserver/app/testing/testserver.go#L528) and [teardown](https://github.com/kubernetes/kubernetes/blob/6399c32669c62cfbf7c33b14b77d6781ce1cce27/cmd/kube-apiserver/app/testing/testserver.go#L174).
### Why is this needed?
Kube-apiserver related metrics are stored globally, so they are shared across tests in the same suite, and metric values need to be reset explicitly,
https://github.com/kubernetes/kubernetes/blob/76790cee96d2af21ff60f297a452e819db5934b6/test/integration/client/metrics/metrics_test.go#L51
even when the testServer has been torn down.
https://github.com/kubernetes/kubernetes/blob/6399c32669c62cfbf7c33b14b77d6781ce1cce27/test/integration/metrics/metrics_test.go#L83
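A rough sketch of what the requested knob could look like (hypothetical field and function names; the real change would plug into `TestServerInstanceOptions` and the apiserver's global metrics registry):

```go
package main

import "fmt"

// TestServerInstanceOptions mirrors the shape of the options struct
// linked above; ResetMetrics is the hypothetical new field.
type TestServerInstanceOptions struct {
	ResetMetrics bool
}

// resetMetrics stands in for resetting the globally shared apiserver metrics.
func resetMetrics() { fmt.Println("metrics reset") }

func startTestServer(opts TestServerInstanceOptions) (teardown func()) {
	if opts.ResetMetrics {
		resetMetrics() // drop stale values left over from earlier tests
	}
	fmt.Println("server started")
	return func() {
		if opts.ResetMetrics {
			resetMetrics() // leave a clean slate for the next test
		}
	}
}

func main() {
	teardown := startTestServer(TestServerInstanceOptions{ResetMetrics: true})
	teardown()
}
```

Defaulting the field to false would keep existing tests (which reset metrics themselves) unchanged.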
| sig/api-machinery,kind/feature,sig/instrumentation,sig/testing,triage/accepted | low | Minor |
2,639,554,775 | godot | Node duplication issue with @tool | ### Tested versions
4.2, 4.3 stable
### System information
Godot v4.3.stable - Windows 10.0.19045 - GLES3 (Compatibility) - Radeon RX 560 Series (Advanced Micro Devices, Inc.; 31.0.14001.45012) - Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz (4 Threads)
### Issue description
I noticed a strange problem when duplicating nodes in the scene tree that were added by a plugin. I first encountered this problem in [my plugin](https://github.com/JekSun97/gdTree3D/issues/1) developed using GDExtension; I then reproduced it locally in the editor itself using GDScript plugins.
If we duplicate a node added using GDExtension or a plugin with `@tool`, the node is duplicated twice; the more we duplicate this node, the more descendants it gains.
When reloading a project, extra nodes are sometimes shown in the editor, sometimes not, which is also quite strange.
### Steps to reproduce
1. Open MRP
2. Create a new node added by the "newnode" plugin
3. Duplicate it
4. You will see extra descendants in the output window
### Minimal reproduction project (MRP)
[bugdublicate.zip](https://github.com/user-attachments/files/17654469/bugdublicate.zip)
| topic:editor,needs testing | low | Critical |
2,639,639,333 | PowerToys | Mouse without borders not working | ### Microsoft PowerToys version
0.86.0
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Mouse Without Borders
### Steps to reproduce
Installed and updated PowerToys on both machines, and completed the Mouse Without Borders setup on both.
The device layout on both machines shows 2 computers, but the mouse doesn't move from one machine to the other like it used to in the past.

### ✔️ Expected Behavior
The mouse should move smoothly between machines.
### ❌ Actual Behavior
Mouse not moving
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,639,639,796 | pytorch | Can't seem to run tip of tree on Linux | ### 🐛 Describe the bug
I ran setup.py develop within a venv and am now trying to run torch but I get a missing symbol. Is it because I'm running from a virtual environment?
```
Traceback (most recent call last):
File "$HOME/pytorch/test.py", line 1, in <module>
import torch
File "$HOME/projects/pytorch/torch/__init__.py", line 376, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: $HOME/projects/pytorch/build/lib/libtorch_cuda.so: undefined symbol: _ZNK2at10TensorBase14const_data_ptrIsTnNSt9enable_ifIXntgssr3stdE10is_const_vIT_EEiE4typeELi0EEEPKS3_v
```
Installation command:
```
. .env/bin/activate
DEBUG=1 USE_CUDA=1 USE_DISTRIBUTED=0 USE_MKLDNN=0 USE_NNPACK=0 USE_QNNPACK=0 USE_XNNPACK=0 CMAKE_INSTALL_PREFIX=$PWD/install python setup.py develop --cmake-only
(cd build && ninja)
(cd build && ninja install)
```
For some reason I had to add some odd symlinks to make the paths work:
```
ln -s ../build/bin torch/bin
ln -s ../../build/lib/*.so torch/lib/
```
(otherwise it was complaining that it couldn't find `libtorch.so` in `$HOME/projects/pytorch/torch/lib/libtorch.so` even with the LD_LIBRARY_PATH set).
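When chasing an undefined C++ symbol like this, demangling it usually shows which function the dynamic loader wants, and hints at whether a stale library is being loaded. A quick check, assuming binutils' `c++filt` is available:

```shell
# Demangle the unresolved symbol reported by the ImportError.
echo '_ZNK2at10TensorBase14const_data_ptrIsTnNSt9enable_ifIXntgssr3stdE10is_const_vIT_EEiE4typeELi0EEEPKS3_v' | c++filt

# Then check whether the freshly built libraries actually export it, e.g.:
#   nm -DC build/lib/libtorch_cpu.so | grep const_data_ptr
```

A mismatch like this often means an older `libtorch_cuda.so` is being picked up alongside newly built libraries; the manual symlinking above makes that easy to hit, so removing the stray symlinks and rebuilding may be worth trying.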
### Versions
Version aafb3deaf1460764432472a749d625f03570a53d
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 14.2.1 20240910
Clang version: 18.1.8
CMake version: version 3.30.5
Libc version: glibc-2.40
Python version: 3.12.7 (main, Oct 1 2024, 11:15:50) [GCC 14.2.1 20240910] (64-bit runtime)
Python platform: Linux-6.11.5-1-ck-x86_64-with-glibc2.40
Is CUDA available: N/A
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080
Nvidia driver version: 565.57.01
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900KF
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU(s) scaling MHz: 58%
CPU max MHz: 5800.0000
CPU min MHz: 800.0000
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] numpy==1.26.4
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] rotary-embedding-torch==0.5.3
[pip3] rotary-embedding-torch==0.5.3
[pip3] torch==2.6.0a0+gitaafb3de
[conda] Could not collect
```
cc @malfet @seemethere | module: build,triaged | low | Critical |
2,639,644,364 | pytorch | fbgemm build errors on warning that VLAs are a clang extension when building with clang18 | ### 🐛 Describe the bug
fbgemm's CMakeLists.txt sets `-Werror`, but the build fails on warnings that variable-length arrays (VLAs) are a Clang extension, even though I'm building with Clang. Removing `-Werror` from fbgemm's CMakeLists.txt works around the issue, but I'm not sure what I did wrong (I followed the docs in CONTRIBUTING.md).
```
WERROR=0 DEBUG=1 USE_CUDA=1 USE_DISTRIBUTED=0 USE_MKLDNN=0 USE_NNPACK=0 USE_QNNPACK=0 USE_XNNPACK=0 python setup.py develop --cmake-only
(cd build && ninja)
```
clang version 18.1.8
Arch Linux.
It must be a new warning that is either on by default or enabled by -Wall/-Wextra.
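If I recall correctly, Clang 18 split the "VLA is a C++ extension" diagnostic into a new `-Wvla-cxx-extension` group that is on by default, which would explain the errors appearing only now under `-Werror`. A possible local workaround (untested; the flags may or may not propagate into the fbgemm subproject) instead of editing its CMakeLists.txt:

```shell
export CFLAGS="-Wno-vla-cxx-extension"
export CXXFLAGS="-Wno-vla-cxx-extension"
WERROR=0 DEBUG=1 USE_CUDA=1 USE_DISTRIBUTED=0 USE_MKLDNN=0 python setup.py develop --cmake-only
```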
### Versions
```
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Arch Linux (x86_64)
GCC version: (GCC) 14.2.1 20240910
Clang version: 18.1.8
CMake version: version 3.30.5
Libc version: glibc-2.40
Python version: 3.12.7 (main, Oct 1 2024, 11:15:50) [GCC 14.2.1 20240910] (64-bit runtime)
Python platform: Linux-6.11.5-1-ck-x86_64-with-glibc2.40
Is CUDA available: N/A
CUDA runtime version: 12.6.77
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080
Nvidia driver version: 565.57.01
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.9.2.1
/usr/lib/libcudnn_adv.so.9.2.1
/usr/lib/libcudnn_cnn.so.9.2.1
/usr/lib/libcudnn_engines_precompiled.so.9.2.1
/usr/lib/libcudnn_engines_runtime_compiled.so.9.2.1
/usr/lib/libcudnn_graph.so.9.2.1
/usr/lib/libcudnn_heuristic.so.9.2.1
/usr/lib/libcudnn_ops.so.9.2.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900KF
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU(s) scaling MHz: 58%
CPU max MHz: 5800.0000
CPU min MHz: 800.0000
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi vnmi umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize arch_lbr ibt flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Mitigation; Clear Register File
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] numpy==1.26.4
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] rotary-embedding-torch==0.5.3
[pip3] rotary-embedding-torch==0.5.3
[pip3] torch==2.6.0a0+gitaafb3de
[conda] Could not collect
```
cc @malfet @seemethere | module: build,triaged,module: third_party | low | Critical |
2,639,699,880 | PowerToys | [feature] a global shortcut key management | ### Description of the new feature / enhancement
Show which shortcut key is occupied by which software, and allow disabling or modifying the owner of that shortcut key.
### Scenario when this would be used?
Everywhere.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,639,784,791 | excalidraw | Feature Request: Spaced node-style connections on shapes | I work a lot in node-based animation and VFX programs like Houdini, Blender, Unreal Engine, etc, and I often sketch out node networks in my notes. In Excalidraw I've built up a library of node shapes for these programs, each consisting of the node shape grouped with circular connectors. Here's a shot of my shapes library and how I use them in practice.

While the library shapes method works, it's a bit of a time investment to set up. So this request is for shapes that auto create spaced input/output 'ports' as the arrow is connected. In Houdini for example, multiple connections looks like this:

The above example shows a merge node taking inputs to one big port, though ideally there'd be an option for a port to be created with each connector, like the in/out ports on the top row of nodes.
| enhancement | low | Minor |
2,639,791,036 | pytorch | Report issue for torch.nn.Linear when forwarding a 3-dim tensor. | ### 🐛 Describe the bug
Dear all,
We seemingly found a bug in `nn.Linear` forwarding; here is a minimal example:
```python
# import
import torch
import time
# Set input size, output size, and batch size
input_size = 1024
output_size = 512
feature_size = 100
batch_size = 2
# Create test input data
x = torch.randn(batch_size, feature_size, input_size, device="cuda")
# Test function
def test_precision(dtype):
# Convert the linear layer to the specified precision
linear.to(dtype)
x_dtype = x.to(dtype)
# Perform a warm-up computation to ensure GPU is ready
_ = linear(x_dtype)
# Start timing
torch.cuda.synchronize()
start = time.time()
# Perform forward pass
a1 = linear(x_dtype)[:1]
a2 = linear(x_dtype[:1])
# Calculate and display the difference
diff = (a1 - a2).abs().sum()
print(f"Diff error: {diff} at {dtype}")
# End timing
torch.cuda.synchronize()
end = time.time()
# Compute elapsed time
return end - start
# Define the linear layer (without bias)
linear = torch.nn.Linear(input_size, output_size, bias=False).cuda()
print("linear \"WITHOUT\" bias")
# Test computation time for different precisions
time_bfloat16 = test_precision(torch.bfloat16)
time_float16 = test_precision(torch.float16)
time_float32 = test_precision(torch.float32)
# Output results
print(f"bfloat16 forward pass time: {time_bfloat16:.6f} seconds")
print(f"float16 forward pass time: {time_float16:.6f} seconds")
print(f"float32 forward pass time: {time_float32:.6f} seconds")
# Define the linear layer (with bias)
linear = torch.nn.Linear(input_size, output_size, bias=True).cuda()
print("linear \"WITH\" bias")
time_bfloat16 = test_precision(torch.bfloat16)
time_float16 = test_precision(torch.float16)
time_float32 = test_precision(torch.float32)
# Output results
print(f"bfloat16 forward pass time: {time_bfloat16:.6f} seconds")
print(f"float16 forward pass time: {time_float16:.6f} seconds")
print(f"float32 forward pass time: {time_float32:.6f} seconds")
```
Mathematically, the ```diff``` variable in the code should definitely be zero; however, in our test case the output is
```
linear "WITHOUT" bias
Diff error: 38.25 at torch.bfloat16
Diff error: 4.76953125 at torch.float16
Diff error: 0.0046875495463609695 at torch.float32
bfloat16 forward pass time: 0.037810 seconds
float16 forward pass time: 0.000848 seconds
float32 forward pass time: 0.000678 seconds
linear "WITH" bias
Diff error: 0.0 at torch.bfloat16
Diff error: 6.48828125 at torch.float16
Diff error: 0.004732653498649597 at torch.float32
bfloat16 forward pass time: 0.000422 seconds
float16 forward pass time: 0.000555 seconds
float32 forward pass time: 0.000377 seconds
```
In particular, if we use ```bfloat16```, the ```diff``` is huge; we believe something may be going wrong here.
# My question is:
Question 1: In linear tensor computation, according to the mathematical formula, computations for different samples within the same batch should be independent. However, the results in our code indicate that when processing multiple samples in a batch, the results differ from calculating each sample individually. Is this normal?
Question 2: Is this due to numerical stability issues with floating-point numbers?
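On Question 2's premise, here is a minimal pure-Python illustration (no GPU or PyTorch needed) of why two mathematically equivalent computations can disagree in floating point: addition is not associative, so different reduction orders give different results.

```python
# Floating-point addition is not associative: the same three numbers summed
# in two mathematically equivalent orders give different results.
a, b, c = 1e16, -1e16, 1.0
left = (a + b) + c   # the 1.0 survives: (0.0) + 1.0
right = a + (b + c)  # the 1.0 is absorbed into -1e16 and lost
print(left, right)   # 1.0 0.0
```

Batched GEMM kernels may choose different tilings (and thus reduction orders) for different input shapes, so small shape-dependent differences like the `diff` above are expected; their magnitude scales with the precision's epsilon, which is why bfloat16 shows the largest error and float32 the smallest.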
### Versions
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.5.1
[pip3] triton==3.1.0
[conda] cuda-cudart 12.6.77 h5888daf_0 conda-forge
[conda] cuda-cudart-dev 12.6.77 h5888daf_0 conda-forge
[conda] cuda-cudart-dev_linux-64 12.6.77 h3f2d84a_0 conda-forge
[conda] cuda-cudart-static 12.6.77 h5888daf_0 conda-forge
[conda] cuda-cudart-static_linux-64 12.6.77 h3f2d84a_0 conda-forge
[conda] cuda-cudart_linux-64 12.6.77 h3f2d84a_0 conda-forge
[conda] cuda-cupti 12.6.80 hbd13f7d_0 conda-forge
[conda] cuda-libraries 12.6.2 ha770c72_0 conda-forge
[conda] cuda-libraries-dev 12.6.2 ha770c72_0 conda-forge
[conda] cuda-libraries-static 12.6.2 ha770c72_0 conda-forge
[conda] cuda-nvrtc 12.6.77 hbd13f7d_0 conda-forge
[conda] cuda-nvrtc-dev 12.6.77 h5888daf_0 conda-forge
[conda] cuda-nvrtc-static 12.6.77 h5888daf_0 conda-forge
[conda] cuda-nvtx 12.6.77 hbd13f7d_0 conda-forge
[conda] cuda-opencl 12.6.77 hbd13f7d_0 conda-forge
[conda] cuda-opencl-dev 12.6.77 h5888daf_0 conda-forge
[conda] libcublas 12.6.3.3 hbd13f7d_1 conda-forge
[conda] libcublas-dev 12.6.3.3 h5888daf_1 conda-forge
[conda] libcublas-static 12.6.3.3 h5888daf_1 conda-forge
[conda] libcufft 11.3.0.4 hbd13f7d_0 conda-forge
[conda] libcufft-dev 11.3.0.4 h5888daf_0 conda-forge
[conda] libcufft-static 11.3.0.4 h5888daf_0 conda-forge
[conda] libcurand 10.3.7.77 hbd13f7d_0 conda-forge
[conda] libcurand-dev 10.3.7.77 h5888daf_0 conda-forge
[conda] libcurand-static 10.3.7.77 h5888daf_0 conda-forge
[conda] libcusolver 11.7.1.2 hbd13f7d_0 conda-forge
[conda] libcusolver-dev 11.7.1.2 h5888daf_0 conda-forge
[conda] libcusolver-static 11.7.1.2 h5888daf_0 conda-forge
[conda] libcusparse 12.5.4.2 hbd13f7d_0 conda-forge
[conda] libcusparse-dev 12.5.4.2 h5888daf_0 conda-forge
[conda] libcusparse-static 12.5.4.2 h5888daf_0 conda-forge
[conda] libnvjitlink 12.6.77 hbd13f7d_1 conda-forge
[conda] libnvjitlink-dev 12.6.77 h5888daf_1 conda-forge
[conda] libnvjitlink-static 12.6.77 h5888daf_1 conda-forge
[conda] numpy 2.1.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi | module: numerical-stability,triaged | low | Critical |
2,639,802,375 | electron | WebContentsView - visibility (backgroundThrottling, size, Page Visibility API) | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
33.1.0
### What operating system(s) are you using?
macOS, Windows
### Operating System Version
macOS Sonoma 14.6.1
### What arch are you using?
arm64 (including Apple Silicon)
### Last Known Working Electron version
_No response_
### Expected Behavior
https://www.electronjs.org/docs/latest/api/browser-window#page-visibility:
> If `backgroundThrottling` is disabled, the visibility state will remain `visible` even if the window is minimized, occluded, or hidden.
Output of test case:
```
{"x":0,"y":0,"width":100,"height":100} {"backgroundThrottling":true} {"visibilityState":"visible","innerWidth":100}
{"x":-1000,"y":-1000,"width":100,"height":100} {"backgroundThrottling":true} {"visibilityState":"hidden","innerWidth":100}
{"x":-1000,"y":-1000,"width":100,"height":100} {"backgroundThrottling":false} {"visibilityState":"visible","innerWidth":100}
```
### Actual Behavior
`WebContentsView` has visibility state `hidden` when `backgroundThrottling: false` and `WebContentsView` is occluded.
And `innerWidth` also has an incorrect value of `0` instead of `100` (in both cases, `backgroundThrottling: false` and `backgroundThrottling: true`, when the `WebContentsView` is occluded).
Output of test case:
```
{"x":0,"y":0,"width":100,"height":100} {"backgroundThrottling":true} {"visibilityState":"visible","innerWidth":100}
{"x":-1000,"y":-1000,"width":100,"height":100} {"backgroundThrottling":true} {"visibilityState":"hidden","innerWidth":0}
{"x":-1000,"y":-1000,"width":100,"height":100} {"backgroundThrottling":false} {"visibilityState":"hidden","innerWidth":0}
```
### Testcase Gist URL
https://gist.github.com/alexander-at-t/c5a24e70d86001ae9dc10de3c8217c7f
### Additional Information
I've also tested in Electron v34.0.0-alpha.7 - the result is the same. | platform/windows,platform/macOS,bug :beetle:,has-repro-gist,component/WebContentsView,33-x-y | low | Critical |
2,639,830,826 | PowerToys | Please put the Musical symbols in the Quick Accent bar | ### Description of the new feature / enhancement
flat ♭ could be accessed under "b" in the quick accent bar.
♯ sharp could be accessed under 3 (because it has the sharp symbol)
natural ♮ could be under n, for natural maybe? I'm not sure on that one.
I use those symbols a lot, and it's a pain to keep copy pasting them
### Scenario when this would be used?
I'm a teacher, and I love using the quick accent bar for when I teach pronunciation and IPA. I also teach music, and it would be super helpful if we could have access to the flat (♭), sharp (♯), and natural (♮) symbols.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,639,856,035 | godot | Heap buffer overflow in RenderingServer::mesh_surface_get_lods | ### Tested versions
current master @ b00e1cbf743dcb6a2b7f6bd14e348a1a7cf3d403
### System information
macOS 14.5.0, M2 Pro, Vulkan
### Issue description
Related: #98884 , #97862
CC @clayjohn @BlueCube3310
The culprit this time is a 1024x1024 RGTC RG8 normal map, except now it seems to decompress normally when testing:
```
*** Loading texture res://ShakerNormal_decompressed.png
Loading resource: res://ShakerNormal_decompressed.png
*** Decompressing the image
bcdec: Decompressing mipmap 0: 1024x1024, src_ofs: 0, dst_ofs: 0, target_size: 2796202
bcdec: Decompressing mipmap 1: 512x512, src_ofs: 1048576, dst_ofs: 2097152, target_size: 2796202
bcdec: Decompressing mipmap 2: 256x256, src_ofs: 1310720, dst_ofs: 2621440, target_size: 2796202
bcdec: Decompressing mipmap 3: 128x128, src_ofs: 1376256, dst_ofs: 2752512, target_size: 2796202
bcdec: Decompressing mipmap 4: 64x64, src_ofs: 1392640, dst_ofs: 2785280, target_size: 2796202
bcdec: Decompressing mipmap 5: 32x32, src_ofs: 1396736, dst_ofs: 2793472, target_size: 2796202
bcdec: Decompressing mipmap 6: 16x16, src_ofs: 1397760, dst_ofs: 2795520, target_size: 2796202
bcdec: Decompressing mipmap 7: 8x8, src_ofs: 1398016, dst_ofs: 2796032, target_size: 2796202
bcdec: Decompressing mipmap 8: 4x4, src_ofs: 1398080, dst_ofs: 2796160, target_size: 2796202
bcdec: Decompressing mipmap 9: 2x2, src_ofs: 1398096, dst_ofs: 2796192, target_size: 2796202
bcdec: Decompressing mipmap 10: 1x1, src_ofs: 1398112, dst_ofs: 2796200, target_size: 2796202
bcdec: Decompression of a 1024x1024 RGTC RedGreen8 image with 10 mipmaps took 13 ms.
*** Image decompressed
*** Saving the image to res://ShakerNormal_decompressed-test.png
```
HOWEVER, when the texture is attached to a mesh as a surface material, and we attempt to save the scene as a GLB, the following occurs:
```
*** Loading the scene res://Shaker_test.tscn
Loading resource: res://Shaker_test.tscn
Loading resource: res://ShakerNormal_decompressed.png
*** Dependencies for res://Shaker_test.tscn:
uid://8eeijsot8t6::::res://ShakerNormal_decompressed.png
*** Instancing the scene
*** Appending the scene to the GLTF document
glTF: Converting light: Light
glTF: Converting camera: Camera
Copying sd.lods[0].index_data, size 98304, lc 49152, rptr[0] @ index_data + 0, past the end? false
Copying sd.lods[1].index_data, size 49152, lc 24576, rptr[1] @ index_data + 2, past the end? false
Copying sd.lods[2].index_data, size 24564, lc 12282, rptr[2] @ index_data + 4, past the end? false
Copying sd.lods[3].index_data, size 12288, lc 6144, rptr[3] @ index_data + 6, past the end? false
Copying sd.lods[4].index_data, size 6144, lc 3072, rptr[4] @ index_data + 8, past the end? false
Copying sd.lods[5].index_data, size 3072, lc 1536, rptr[5] @ index_data + 10, past the end? false
Copying sd.lods[6].index_data, size 1536, lc 768, rptr[6] @ index_data + 12, past the end? false
Copying sd.lods[7].index_data, size 768, lc 384, rptr[7] @ index_data + 14, past the end? false
Copying sd.lods[8].index_data, size 384, lc 192, rptr[8] @ index_data + 16, past the end? false
Copying sd.lods[9].index_data, size 192, lc 96, rptr[9] @ index_data + 18, past the end? false
Copying sd.lods[10].index_data, size 96, lc 48, rptr[10] @ index_data + 20, past the end? false
Copying sd.lods[11].index_data, size 48, lc 24, rptr[11] @ index_data + 22, past the end? false
Copying sd.lods[12].index_data, size 12, lc 6, rptr[12] @ index_data + 24, past the end? true
=================================================================
==39550==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x000159ea7d48 at pc 0x000115fc4960 bp 0x00016b0fb210 sp 0x00016b0fb208
READ of size 2 at 0x000159ea7d48 thread T0
#0 0x115fc495c in RenderingServer::mesh_surface_get_lods(RID, int) const rendering_server.cpp:1724
```
The problem is here:
<img width="561" alt="image" src="https://github.com/user-attachments/assets/01cc4b83-0dd4-4fe7-8787-a345d106facc">
The uint8_t `index_data.ptr()` is cast to a uint16_t ptr `rptr`, and then it's read at `rptr[i]`.
Thus, `rptr[12]` is at index_data + 24, and since `sd.lods[12].index_data` is only 12 bytes long, it reads past the bounds and causes the crash.
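A tiny Python model of the offset arithmetic above, using the per-lod `index_data` sizes printed in the log, shows why only lod 12 trips the overflow (a sketch of the arithmetic only, not of the renderer code):

```python
# rptr[i] reads 2 bytes at byte offset i * 2 into lod i's index_data,
# so the read is out of bounds once i * 2 + 2 exceeds that lod's size.
lod_sizes = [98304, 49152, 24564, 12288, 6144, 3072,
             1536, 768, 384, 192, 96, 48, 12]
past_end = [i for i, size in enumerate(lod_sizes) if i * 2 + 2 > size]
print(past_end)  # [12] -> rptr[12] at offset 24 overruns the 12-byte buffer
```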
Full asan log:
[thing.log](https://github.com/user-attachments/files/17656056/thing.log)
### Steps to reproduce
1. Optionally, put the following in servers/rendering_server.cpp @ line 1722:
```c++
print_verbose(vformat("Copying sd.lods[%d].index_data, size %d, lc %d, rptr[%d] @ index_data + %d, past the end? %s", i, sd.lods[i].index_data.size(), lc, i, i * 2, i * 2 >= sd.lods[i].index_data.size() ? "true" : "false"));
```
2. Build the editor with sanitizers enabled
3. Extract the MRP somewhere.
4. Run the MRP with `<editor_bin> --path <wherever_you_extracted_the_mrp_to> --verbose`
5. Observe crash.
### Minimal reproduction project (MRP)
[bcdec-crash-mrp.zip](https://github.com/user-attachments/files/17656073/bcdec-crash-mrp.zip)
| bug,topic:rendering,topic:import,crash | low | Critical |
2,639,874,067 | vscode | My application getting unistalled once i close it |
Type: <b>Bug</b>
I need to install VS Code every time to use it; it completely disappears from my Mac after I quit. I don't know why. Kindly help me
VS Code version: Code 1.95.1 (Universal) (65edc4939843c90c34d61f4ce11704f09d3e5cb6, 2024-10-31T05:14:54.222Z)
OS version: Darwin arm64 24.1.0
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Apple M2 (8 x 2400)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|9, 12, 7|
|Memory (System)|8.00GB (0.59GB free)|
|Process Argv|--crash-reporter-id 90f16ad0-e0f5-432b-8b81-e4cb94c479bc|
|Screen Reader|no|
|VM|0%|
</details><details><summary>Extensions (10)</summary>
Extension|Author (truncated)|Version
---|---|---
gitlens|eam|15.6.3
pythonsnippets3|Eri|3.3.20
prettier-vscode|esb|11.0.0
debugpy|ms-|2024.12.0
python|ms-|2024.18.0
vscode-pylance|ms-|2024.11.1
vsliveshare|ms-|1.0.5941
pdf|tom|1.2.2
intellicode-api-usage-examples|Vis|0.2.9
vscodeintellicode|Vis|1.3.2
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
da93g388:31013173
dvdeprecation:31068756
dwnewjupyter:31046869
impr_priority:31102340
nativerepl2:31139839
refactort:31108082
pythonrstrctxt:31112756
cf971741:31144450
iacca1:31171482
notype1:31157159
5fd0e150:31155592
dwcopilot:31170013
g7688163:31175162
```
</details>
<!-- generated by issue reporter --> | install-update | low | Critical |
2,639,999,120 | pytorch | [ROCm] sorting torch.bool tensor viewed from torch.uint8 type produces incorrect results | ### 🐛 Describe the bug
Small tensor with radix sort:
```
a = torch.randint(0, 100, [100], dtype=torch.uint8, device="cuda")
>>> torch.sort(a)
torch.return_types.sort(
values=tensor([ 0, 0, 1, 1, 5, 5, 6, 6, 9, 10, 11, 12, 13, 14, 16, 17, 17, 18,
19, 19, 20, 23, 23, 25, 25, 25, 27, 28, 29, 29, 30, 32, 33, 33, 34, 35,
35, 37, 38, 38, 39, 40, 42, 42, 43, 46, 47, 48, 48, 50, 56, 56, 57, 59,
60, 61, 62, 62, 63, 64, 64, 67, 68, 68, 69, 69, 70, 70, 71, 72, 72, 74,
78, 79, 80, 82, 82, 82, 82, 82, 82, 83, 83, 84, 84, 86, 87, 88, 89, 91,
92, 93, 94, 94, 94, 94, 94, 95, 98, 99], device='cuda:0',
dtype=torch.uint8),
indices=tensor([15, 25, 49, 68, 7, 76, 18, 41, 39, 45, 46, 4, 57, 20, 59, 44, 67, 9,
16, 89, 21, 30, 40, 34, 84, 85, 90, 10, 12, 24, 69, 17, 8, 62, 94, 52,
78, 95, 26, 47, 58, 63, 82, 96, 66, 13, 88, 43, 99, 14, 37, 75, 61, 31,
79, 60, 48, 83, 80, 3, 50, 22, 2, 97, 0, 65, 27, 54, 53, 42, 87, 92,
71, 19, 93, 29, 32, 38, 51, 56, 77, 11, 23, 1, 73, 5, 6, 55, 28, 72,
86, 81, 35, 36, 64, 74, 98, 33, 70, 91], device='cuda:0'))
>>> torch.sort(a.view(torch.bool))
torch.return_types.sort(
values=tensor([ True, True, True, True, True, **False**, True, True, **False**, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True],
device='cuda:0'),
indices=tensor([ 1, 2, 3, 4, 10, 15, 17, 21, 25, 37, 42, 43, 50, 55, 59, 63, 73, 75,
79, 86, 87, 93, 97, 99, 0, 7, 8, 12, 24, 28, 34, 39, 44, 49, 57, 60,
61, 62, 65, 67, 68, 76, 81, 84, 85, 95, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 5, 9, 13, 14, 18, 20, 26, 27, 29, 32, 35, 36, 38, 41, 45, 47,
48, 51, 54, 56, 64, 69, 70, 71, 74, 77], device='cuda:0'))
```
For a large tensor, the stable sort from cub produces wildly incorrect results:
```
>>> a = torch.randint(0, 100, [8000], dtype=torch.uint8, device="cuda")
>>> torch.sort(a)
torch.return_types.sort(
values=tensor([ 0, 0, 0, ..., 99, 99, 99], device='cuda:0', dtype=torch.uint8),
indices=tensor([ 396, 451, 759, ..., 7751, 7771, 7834], device='cuda:0'))
>>> torch.sort(a.view(torch.bool))
torch.return_types.sort(
values=tensor([False, False, False, ..., False, False, True], device='cuda:0'),
indices=tensor([ 396, 451, 759,
..., 9187343241974906880, 9187343239835811840,
7167], device='cuda:0'))
```
### Versions
top of trunk
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd | module: rocm,triaged | low | Critical |
2,640,017,915 | kubernetes | tryUpdateNodeHealth may process old data in large-scale cluster scenarios. | ### What happened?
https://github.com/kubernetes/kubernetes/blob/154b756e2ed850d2e64baea269dbb749ac02a77d/pkg/controller/nodelifecycle/node_lifecycle_controller.go#L711-L729
The node information used by tryUpdateNodeHealth is obtained from `nodes, err := nc.nodeLister.List(labels.Everything())`. If there are a large number of nodes and the processing capacity of kube-controller-manager is limited, tryUpdateNodeHealth may take a long time to work through them, so the node information used by later iterations may no longer be the latest. I think this is a problem.
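A toy model of the staleness described above (not actual controller code): the snapshot is taken once, and updates that land while the loop is still running are not seen until the next resync.

```python
# Live cluster state vs. a one-shot lister snapshot.
live = {"node-1": "Ready", "node-2": "Ready"}
snapshot = list(live.items())   # analogue of nc.nodeLister.List(...)
live["node-2"] = "NotReady"     # status changes while the loop is running
seen = dict(snapshot)
print(seen["node-2"], live["node-2"])  # the loop still acts on the stale value
```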
### What did you expect to happen?
tryUpdateNodeHealth should use the latest data each time it processes a node.
### How can we reproduce it (as minimally and precisely as possible)?
Large-scale nodes
### Anything else we need to know?
_No response_
### Kubernetes version
<details>
```console
$ kubectl version
# paste output here
```
1.31
</details>
### Cloud provider
<details>
</details>
### OS version
<details>
```console
# On Linux:
$ cat /etc/os-release
# paste output here
$ uname -a
# paste output here
# On Windows:
C:\> wmic os get Caption, Version, BuildNumber, OSArchitecture
# paste output here
```
</details>
### Install tools
<details>
</details>
### Container runtime (CRI) and version (if applicable)
<details>
</details>
### Related plugins (CNI, CSI, ...) and versions (if applicable)
<details>
</details>
| sig/scalability,sig/node,kind/feature,triage/accepted | low | Major |
2,640,025,648 | pytorch | Assigning the weight of the torch.nn.Embedding object to other variables and then performing subsequent operations on it will cause unstable training | ### 🐛 Describe the bug
```python
def __init__(xxx, xxx):
    self.ent_embed = torch.nn.Embedding(self.p.num_ent, self.p.embed_dim, padding_idx=None)
    xavier_normal_(self.ent_embed.weight)

def forward(xxx, xxx, xxx):
    entity = self.ent_embed.weight
    sub_entity = torch.index_select(entity, 0, sub).view(-1, 1, 10, 20)
    # some operations on sub_entity
```
This operation will result in different training results each time.
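Separately from whether this particular indexing pattern is at fault, the usual first step toward run-to-run reproducibility is fixing every seed before model construction. The principle, sketched with Python's `random` module as a stand-in for `torch.manual_seed`:

```python
import random

random.seed(42)
run1 = [random.random() for _ in range(3)]
random.seed(42)
run2 = [random.random() for _ in range(3)]
print(run1 == run2)  # True: identical seeds give identical "initializations"
```

In PyTorch that means calling `torch.manual_seed(...)` before building the model, and optionally `torch.use_deterministic_algorithms(True)`, which errors on nondeterministic CUDA kernels (some scatter/index backward ops) that are a common source of run-to-run drift even with fixed seeds.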
### Versions
Collecting environment information...
PyTorch version: 1.13.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
Clang version: Could not collect
CMake version: version 3.23.1
Libc version: glibc-2.31
Python version: 3.9.18 (main, Sep 11 2023, 13:41:44) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-152-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3060
GPU 1: NVIDIA GeForce RTX 3060
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              24
On-line CPU(s) list: 0-23
Thread(s) per core:  2
Core(s) per socket:  12
Socket(s):           1
NUMA node(s):        1
Vendor ID:           AuthenticAMD
CPU family:          25
Model:               33
Model name:          AMD Ryzen 9 5900X 12-Core Processor
Stepping:            0
CPU MHz:             3572.681
CPU max MHz:         3700.0000
CPU min MHz:         2200.0000
BogoMIPS:            7386.17
Virtualization:      AMD-V
L1d cache:           32K
L1i cache:           32K
L2 cache:            512K
L3 cache:            32768K
NUMA node0 CPU(s):   0-23
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] pytorch-lightning==1.9.4
[pip3] torch==1.13.1+cu116
[pip3] torchaudio==0.13.1+cu116
[pip3] torchmetrics==1.4.0
[pip3] torchvision==0.14.1+cu116
[conda] numpy 1.26.4 pypi_0 pypi
[conda] pytorch-lightning 1.9.4 pypi_0 pypi
[conda] torch 1.13.1+cu116 pypi_0 pypi
[conda] torchaudio 0.13.1+cu116 pypi_0 pypi
[conda] torchmetrics 1.4.0 pypi_0 pypi
[conda] torchvision 0.14.1+cu116 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | needs reproduction,module: nn,triaged | low | Critical |
2,640,074,353 | tauri | [bug] In Android MainActivity is leaked if a foreground service is running and makes the app unusable on next launch | ### Describe the bug
It seems that Tauri Android apps don't expect the application process to outlive the MainActivity, which is common in the Android world if you are running a foreground service.
In this case, when you relaunch the app, Tauri has two instances of MainActivity and RustWebview, which causes a __TAURI_INVOKE_KEY__ mismatch and makes the app unusable.
### Reproduction
Download and build the [sample project](https://github.com/m-byondlabs/screenshare-mobile-tauri). Please build it in release mode, as the repro steps require killing the app. Be sure to add your release key and configure local.properties to set up signing keys. Link to the repo: https://github.com/m-byondlabs/screenshare-mobile-tauri
Launch the app, click the "Start Share" button, and allow the prompt to start screen sharing.
You should see a screencast icon indicating that the foreground service is actively capturing the screen recording.
Swipe up to close the MainActivity; you’ll notice that the foreground service continues capturing the screen (this is intended and expected behavior).
Reopen the app and click "Start Share" or "Stop Share." You will observe a log indicating a __TAURI_INVOKE_KEY__ mismatch.
If you generate a heap dump, you’ll find two instances of MainActivity, meaning the previous activity is leaked and remains in memory.
I've verified that my code does not retain any references to the activity, so it appears that the Tauri Plugin manager or another code path in Tauri may be holding onto that instance.
### Expected behavior
Once the activity is destroyed, it should not stay in the memory
### Full `tauri info` output
```text
[✔] Environment
- OS: Mac OS 14.6.1 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.81.0 (eeb90cda1 2024-09-04)
✔ cargo: 1.81.0 (2dbb1af80 2024-08-20)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (environment override by RUSTUP_TOOLCHAIN)
- node: 20.17.0
- npm: 10.8.2
[-] Packages
- tauri 🦀: 2.0.4
- tauri-build 🦀: 2.0.1
- wry 🦀: 0.46.1
- tao 🦀: 0.30.3
- tauri-cli 🦀: 2.0.2
- @tauri-apps/api : 2.0.2 (outdated, latest: 2.0.3)
- @tauri-apps/cli : 2.0.2 (outdated, latest: 2.0.4)
[-] Plugins
- tauri-plugin-shell 🦀: 2.0.1
- @tauri-apps/plugin-shell : 2.0.0 (outdated, latest: 2.0.1)
[-] App
- build-type: bundle
- CSP: unset
- frontendDist: ../dist
- devUrl: http://localhost:1420/
- framework: React
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage,platform: Android | low | Critical |
2,640,097,339 | three.js | New TSL doc | ### Description
Currently the three.js docs are very nice, but they don't seem well suited to all the small TSL functions like `uv()` / `min()` / `vec2()` / `position()` / etc., and I recently found myself diving into the GitHub search of this repo or into the large examples a few times a day while working with TSL.
### Solution
I suggest a part of the docs dedicated to TSL, with the node list on the left and minimal code on the right.
The UX could be similar to the current `examples`, but better suited to such small nodes, like animejs did: https://animejs.com/documentation/
I believe it'll help a lot of devs and greatly ease the learning curve of TSL.
### Alternatives
Add TSL pages to the docs with all the nodes, maybe sorted into broad categories such as `math` `vertex` `fragment` `posteffect`
### Additional context
_No response_ | Suggestion,Documentation | low | Major |
2,640,110,157 | terminal | The tabs detach too easily. | ### Description of the new feature
This is more of a QoL thing to guard against user mistakes, and I think it affects others too, since I find it counterintuitive compared to the tabs we're used to in browsers. Basically, if you SLIGHTLY move a tab up or down it detaches from the window (unlike browsers, where it takes more dragging).
It is worsened by the window returning to a smaller size if you try to move it out of maximization, which can leave the mouse cursor over the tab area instead of the center of the title bar (making it more prone to mistakes).
### Proposed technical implementation details
Detaching should probably be harder to do. Make it require more drag distance before a tab detaches (ideally close to the popular browsers' implementations).
2,640,116,674 | vscode | Child node missing in the test explorer | <!-- ⚠️⚠️ Do Not Delete This! bug_report_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- 🕮 Read our guide about submitting issues: https://github.com/microsoft/vscode/wiki/Submitting-Bugs-and-Suggestions -->
<!-- 🔎 Search existing issues to avoid creating duplicates. -->
<!-- 🧪 Test using the latest Insiders build to see if your issue has already been fixed: https://code.visualstudio.com/insiders/ -->
<!-- 💡 Instead of creating your report here, use 'Report Issue' from the 'Help' menu in VS Code to pre-fill useful information. -->
<!-- 🔧 Launch with `code --disable-extensions` to check. -->
Does this issue occur when all extensions are disabled?: Yes/No
<!-- 🪓 If you answered No above, use 'Help: Start Extension Bisect' from Command Palette to try to identify the cause. -->
<!-- 📣 Issues caused by an extension need to be reported directly to the extension publisher. The 'Help > Report Issue' dialog can assist with this. -->
Version: 1.95.1 (user setup)
Commit: 65edc4939843c90c34d61f4ce11704f09d3e5cb6
Date: 2024-10-31T05:14:54.222Z
Electron: 32.2.1
ElectronBuildId: 10427718
Chromium: 128.0.6613.186
Node.js: 20.18.0
V8: 12.8.374.38-electron.0
OS: Windows_NT x64 10.0.26100

As the screenshot shows, in the test explorer the node `MetaAnnotationTest` is expected to have a child node, but actually it does not.
While in the editor and test results panel, the test `myFastTest()` is discovered.
During debugging, it shows that `myFastTest()` is a child of `MetaAnnotationTest`.

@connor4312 Do you have any idea why the child node is not displayed in the tree view? On my side this behavior happens randomly, and so far I have no idea how to find the root cause.
| bug,testing | low | Critical |
2,640,118,596 | tauri | [bug] tauri plugin check permission error | ### Describe the bug
I follow this article to add permission for my plugin
https://tauri.app/develop/plugins/develop-mobile/#permissions
```kotlin
@TauriPlugin(
permissions = [
Permission(strings = [Manifest.permission.ACCESS_FINE_LOCATION], alias = "accessFileLocation")
]
)
class FunProxyPlugin(private val activity: Activity) : Plugin(activity){....}
```
``` ts
import { invoke, PermissionState } from '@tauri-apps/api/core';
type PermissionType = 'accessFileLocation' | 'otherPermission'; // other permission types can be added here
interface Permissions {
[key: string]: PermissionState;
}
const handlePermissionRequest = async (type: PermissionType) => {
const permission = await invoke<Permissions>('plugin:funproxy|checkPermissions');
const state = permission[type];
if (state === 'prompt-with-rationale') {
// show a rationale explaining why this permission is needed,
// e.g. open a dialog telling the user why the permission matters
}
if (state.startsWith('prompt')) {
await invoke<Permissions>('plugin:funproxy|requestPermissions', { permissions: [type] });
return checkPermission(type); // re-check after requesting
}
};
export const checkPermission = async (type: PermissionType) => {
try {
await handlePermissionRequest(type);
} catch (error) {
window.$message?.error(error as string);
}
};
export const initPermission = async () => {
// more permission types can be added here
await checkPermission('accessFileLocation');
};
```
But I got this error
<img width="292" alt="image" src="https://github.com/user-attachments/assets/7e151fc5-2d2e-4894-9060-d81c288aff96">
### Reproduction
_No response_
### Expected behavior
_No response_
### Full `tauri info` output
```text
❯ cargo tauri info
[✔] Environment
- OS: Mac OS 15.1.0 arm64 (X64)
✔ Xcode Command Line Tools: installed
✔ rustc: 1.82.0 (f6e511eec 2024-10-15)
✔ cargo: 1.82.0 (8f40fc59f 2024-08-21)
✔ rustup: 1.27.1 (54dd3d00f 2024-04-24)
✔ Rust toolchain: stable-aarch64-apple-darwin (environment override by RUSTUP_TOOLCHAIN)
- node: 20.18.0
- pnpm: 9.12.3
- npm: 10.8.2
- bun: 1.1.24
[-] Packages
- tauri-cli 🦀: 2.0.2
[-] Plugins
[-] App
```
### Stack trace
_No response_
### Additional context
_No response_ | type: bug,status: needs triage | low | Critical |
2,640,225,743 | material-ui | [text field] Outlined input label is one pixel off | ### Steps to reproduce
Here are 4 focused inputs:

1. Outlined input as seen on https://mui.com/material-ui/react-text-field/
This input's styles have
```
transform: translate(14px, -9px) scale(0.75);
```
by default.
2. Same input, but I've set
```
transform: translate(15px, -9px) scale(0.75);
```
Comparing inputs 1 and 2, to my eye the "margin" where there is no border looks more even in version 2 than in version 1. But we can verify that by looking at where the "margin" comes from:
3. Input 1, but with the `fieldset` set to `visibility: visible` and the label inside it set to `opacity: 1`; the white text is the aria label that, thanks to its padding, also creates this "margin" in the border.
4. Input 2, but with fieldset visible.
It is quite clear in inputs 3 and 4 that the label is misaligned with the `fieldset`. The version in input 4 is still not quite 100% there, but it's much less than a pixel off, so I'm not sure what can be done about it.
To fix the mismatch, either the version with the invisible fieldset must be moved, or the visible one.
If the transform is to be changed, here it is:
(but don't forget about the non-focused versions)
https://github.com/mui/material-ui/blob/412dcbf9d54b29d85353f1ff9947a78beb6ed7c1/packages/mui-material/src/InputLabel/InputLabel.js#L169
Another option is to change the NotchedOutline here:
https://github.com/mui/material-ui/blob/412dcbf9d54b29d85353f1ff9947a78beb6ed7c1/packages/mui-material/src/OutlinedInput/NotchedOutline.js#L16
from `padding: 0 8px` to `padding: 0 7px`, because 1px of horizontal size is provided by the border anyway. I think this is actually the most correct change.
**Search keywords**: Outlined input label misaligned | component: text field,package: material-ui,design,enhancement | low | Minor |
2,640,271,116 | go | build: build failure on x_arch-go1.23-linux-loong64 | ```
#!watchflakes
default <- builder == "x_arch-go1.23-linux-loong64" && repo == "arch" && mode == "build"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8732821898484307009)):
[W2024-10-28T23:39:34.541840+08:00 4004364 0 json_subtract.go:121] Unknown fields while parsing property namespace "": {"env":{}, "is_google":false, "mode":0}
2024/10/28 23:39:34 run starting
2024/10/28 23:44:32 installed tools
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,640,308,387 | pytorch | FPE in torch.nn.functional.pixel_shuffle | ### 🐛 Describe the bug
The following code:
```python
import torch
input = torch.empty((0, 0, 1, 0), dtype=torch.int16)
upscale_factor = torch.tensor(1732237826046558208)
torch.nn.functional.pixel_shuffle(input=input, upscale_factor=upscale_factor)
```
crashes with a floating point exception (FPE) in `torch.nn.functional.pixel_shuffle`:
```
[1] 2034 floating point exception (core dumped) python nn_functional_pixel_shuffle.py
```
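For context, `pixel_shuffle` rearranges a `(*, C*r*r, H, W)` tensor into `(*, C, H*r, W*r)`. A pure-Python sketch of that shape arithmetic (a hypothetical helper, not part of PyTorch) shows the documented preconditions:

```python
def pixel_shuffle_out_shape(shape, upscale_factor):
    """Output shape of pixel_shuffle: (*, C*r*r, H, W) -> (*, C, H*r, W*r).

    Hypothetical helper mirroring the documented semantics; not part of PyTorch.
    """
    r = int(upscale_factor)
    if r <= 0:
        raise ValueError("upscale_factor must be a positive integer")
    if len(shape) < 3:
        raise ValueError("input must have at least 3 dimensions")
    *batch, c, h, w = shape
    if c % (r * r) != 0:
        raise ValueError("channel dimension must be divisible by upscale_factor**2")
    return (*batch, c // (r * r), h * r, w * r)

# e.g. a (1, 4, 2, 3) input with upscale_factor=2 -> (1, 1, 4, 6)
```

Note that the reproducer's shape `(0, 0, 1, 0)` passes the divisibility check (0 is divisible by anything), so the crash is likely triggered deeper in the kernel (e.g. by the huge factor) rather than by the documented preconditions.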
### Versions
PyTorch version: 2.6.0.dev20241105+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 15.0.7 (https://github.com/llvm/llvm-project.git 8dfdcc7b7bf66834a761bd8de445840ef68e4d1a)
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-182-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.91
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2080 Ti
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8255C CPU @ 2.50GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
Frequency boost: enabled
CPU max MHz: 2501.0000
CPU min MHz: 1000.0000
BogoMIPS: 5000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.1.2
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.6.77
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] optree==0.13.0
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] torch==2.6.0.dev20241105+cu121
[pip3] torchaudio==2.5.0.dev20241105+cu121
[pip3] torchvision==0.20.0.dev20241105+cu121
[pip3] triton==2.1.0
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl-include 2024.2.2 pypi_0 pypi
[conda] mkl-static 2024.2.2 pypi_0 pypi
[conda] numpy 2.1.2 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.6.77 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-triton 3.1.0+cf34004b8a pypi_0 pypi
[conda] torch 2.6.0.dev20241105+cu121 pypi_0 pypi
[conda] torchaudio 2.5.0.dev20241105+cu121 pypi_0 pypi
[conda] torchvision 0.20.0.dev20241105+cu121 pypi_0 pypi
[conda] triton 2.1.0 pypi_0 pypi
cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | module: crash,module: nn,triaged,module: edge cases | low | Critical |