| id | repo | title | body | labels | priority | severity |
|---|---|---|---|---|---|---|
2,586,325,591 | flutter | BottomSheet's dragHandle should support more personalized theme settings | BottomSheet and ModalBottomSheet are frequently used components in mobile application development, and developers often need to customize their styles. BottomSheetThemeData already implements a large part of the custom style support for BottomSheet, but customization of the dragHandle is very limited. Achieving the effect below is currently difficult: you need to set showDragHandle to false and then implement a similar drag handle yourself in the BottomSheet's child.
<img width="348" alt="image" src="https://github.com/user-attachments/assets/d2a6ef0f-e79a-4c95-89c7-c9606140bde2" />
I understand the Flutter team's commitment to the Material 3 style for standard components, but when we develop enterprise applications, designers usually define new component styles based on the corporate identity. Supporting richer theme attributes at the SDK level would therefore simplify a lot of our work, without affecting the component's default Material style.
Here are the changes I made to BottomSheet and BottomSheetThemeData to support the DragHandle style shown above:
```dart
/// How to use
return MaterialApp(
  theme: Theme.of(context).copyWith(
    bottomSheetTheme: const BottomSheetThemeData(
      dragHandleMargin: EdgeInsets.zero,
      dragHandleSize: Size(60, 10),
      dragHandleDecoration: BoxDecoration(
        color: Colors.black87,
        borderRadius: BorderRadius.vertical(
          bottom: Radius.circular(40),
        ),
      ),
    ),
  ),
  home: Scaffold(
    appBar: AppBar(title: const Text('Bottom Sheet Sample')),
    body: const BottomSheetExample(),
  ),
);
```
```dart
/// Modifications to BottomSheetThemeData (simplified code)
const BottomSheetThemeData({
  this.backgroundColor,
  this.surfaceTintColor,
  this.elevation,
  this.modalBackgroundColor,
  this.modalBarrierColor,
  this.shadowColor,
  this.modalElevation,
  this.shape,
  this.showDragHandle,
  this.dragHandleColor,
  this.dragHandleSize,
  this.dragHandleAlignment, // New property
  this.dragHandleDecoration, // New property
  this.dragHandleMargin, // New property
  this.clipBehavior,
  this.constraints,
});
```
```dart
/// Modifications to the _DragHandle component (simplified code)
@override
Widget build(BuildContext context) {
  final BottomSheetThemeData bottomSheetTheme = Theme.of(context).bottomSheetTheme;
  final BottomSheetThemeData m3Defaults = _BottomSheetDefaultsM3(context);
  final Size handleSize = dragHandleSize ?? bottomSheetTheme.dragHandleSize ?? m3Defaults.dragHandleSize!;
  final BoxDecoration? decoration = bottomSheetTheme.dragHandleDecoration;
  final EdgeInsetsGeometry? margin = bottomSheetTheme.dragHandleMargin;
  final double defaultMargin = math.max(handleSize.height, kMinInteractiveDimension) - handleSize.height;
  return MouseRegion(
    onEnter: (PointerEnterEvent event) => handleHover(true),
    onExit: (PointerExitEvent event) => handleHover(false),
    child: Semantics(
      label: MaterialLocalizations.of(context).modalBarrierDismissLabel,
      container: true,
      onTap: onSemanticsTap,
      child: Align(
        alignment: Alignment.topCenter,
        child: Container(
          height: handleSize.height,
          width: handleSize.width,
          margin: margin ?? EdgeInsets.only(top: defaultMargin / 2),
          decoration: decoration ??
              BoxDecoration(
                borderRadius: BorderRadius.circular(handleSize.height / 2),
                color: MaterialStateProperty.resolveAs<Color?>(dragHandleColor, materialState) ??
                    MaterialStateProperty.resolveAs<Color?>(bottomSheetTheme.dragHandleColor, materialState) ??
                    m3Defaults.dragHandleColor,
              ),
        ),
      ),
    ),
  );
}
```
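As a sanity check of the defaultMargin arithmetic in the snippet above, here is a small Python sketch (the constants are my assumptions: Flutter's kMinInteractiveDimension is 48.0 logical pixels and the Material 3 default drag handle is 4 logical pixels tall):

```python
# Mirrors the defaultMargin computation in _DragHandle.build above.
# Assumed constant: kMinInteractiveDimension = 48.0 (Flutter's minimum
# tap-target size).
K_MIN_INTERACTIVE_DIMENSION = 48.0

def default_top_margin(handle_height: float) -> float:
    # Pad the handle so the whole hit area reaches the minimum
    # interactive dimension; half of the leftover space goes on top.
    default_margin = max(handle_height, K_MIN_INTERACTIVE_DIMENSION) - handle_height
    return default_margin / 2

print(default_top_margin(4.0))   # assumed M3 default handle height -> 22.0
print(default_top_margin(60.0))  # handle taller than the minimum -> 0.0
```

So with the assumed M3 defaults the handle sits 22 logical pixels from the top, which is exactly the spacing the proposed dragHandleMargin property would let callers override.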
The above modifications are just my initial thoughts. The key point is what the Flutter team thinks about this proposal, since that determines whether it can be implemented. I hope to get your answers.
| c: new feature,framework,f: material design,c: proposal,P3,workaround available,team-design,triaged-design | low | Minor |
2,586,333,018 | vscode | Closing a view while dragging border makes the editor view unresponsive to mouse interaction |
Type: <b>Bug</b>
1. Split view
2. While dragging the border between views, press Ctrl+W (closing the current view)
Now VSCode is completely unresponsive to mouse interaction in the areas of the two views. Mouse interaction still works in the rest of the window (the sidebar, terminal, top menu).
Curiously, closing more files or opening new files does not fix the issue. Splitting the view makes the new split responsive, but leaves the old split unfixed.
VS Code version: Code 1.93.1 (38c31bc77e0dd6ae88a4e9cc93428cc27a56ba40, 2024-09-11T17:20:05.685Z)
OS version: Linux x64 6.8.0-107045-tuxedo
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|AMD Ryzen 7 8845HS w/ Radeon 780M Graphics (16 x 4502)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off<br>webnn: disabled_off|
|Load (avg)|2, 1, 1|
|Memory (System)|60.62GB (47.92GB free)|
|Process Argv|--crash-reporter-id cbeb708d-b159-4344-9cd6-2cc641c7864d|
|Screen Reader|no|
|VM|0%|
|DESKTOP_SESSION|ubuntu|
|XDG_CURRENT_DESKTOP|Unity|
|XDG_SESSION_DESKTOP|ubuntu|
|XDG_SESSION_TYPE|wayland|
</details><details><summary>Extensions (24)</summary>
Extension|Author (truncated)|Version
---|---|---
language-x86-64-assembly|13x|3.1.4
copilot|Git|1.238.0
copilot-chat|Git|0.20.3
solidity|Jua|0.0.179
nim|kos|0.6.6
vscoq|max|2.2.1
git-graph|mhu|1.30.0
vscode-docker|ms-|1.29.3
csharp|ms-|2.50.25
vscode-dotnet-runtime|ms-|2.2.0
debugpy|ms-|2024.10.0
python|ms-|2024.14.1
vscode-pylance|ms-|2024.10.1
remote-ssh|ms-|0.115.0
remote-ssh-edit|ms-|0.87.0
cmake-tools|ms-|1.19.52
cpptools|ms-|1.21.6
makefile-tools|ms-|0.11.13
remote-explorer|ms-|0.4.3
vscode-xml|red|0.27.1
swift-lang|ssw|1.11.3
pdf|tom|1.2.2
cmake|twx|0.0.17
vscode-lldb|vad|1.11.0
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
2i9eh265:30646982
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
accentitlementst:30995554
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
f3je6385:31013174
a69g1124:31058053
dvdeprecation:31068756
dwnewjupytercf:31046870
2f103344:31071589
impr_priority:31102340
nativerepl2:31139839
refactort:31108082
pythonrstrctxt:31112756
wkspc-onlycs-t:31132770
wkspc-ranged-t:31151552
cf971741:31144450
defaultse:31146405
iacca2:31156134
notype1cf:31157160
5fd0e150:31155592
```
</details>
<!-- generated by issue reporter --> | bug,workbench-editor-grid,splitview-widget | low | Critical |
2,586,344,034 | electron | [Bug]: "zoomIn" accelerators do not work on Windows or Linux | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
32.2.0
### What operating system(s) are you using?
Windows, Ubuntu
### Operating System Version
Ubuntu 24.04.1, Windows 10 22H2
### What arch are you using?
x64
### Last Known Working Electron version
_No response_
### Expected Behavior
Pressing `Ctrl` plus the key that `+` is on should zoom in just as pressing `Ctrl` plus the key that `-` is on zooms out correctly in the latest version of electron.
### Actual Behavior
Pressing `Ctrl` plus the key that `+` is on does not trigger zoom in. Currently only `Ctrl` plus `Shift` plus `+` works to trigger zoom in. This is not consistent with the zoom-out behavior.
### Testcase Gist URL
https://gist.github.com/jwetzell/5dd9488bc291da575ac3beff1442a641
### Additional Information
The auto-closed issue #40674 has a basic example to reproduce and more thorough detail of the problem. | platform/windows,platform/linux,bug :beetle:,status/confirmed,has-repro-gist,32-x-y | low | Critical |
2,586,381,834 | vscode | Custom HISTFILE not being honored | > I believe this has been fixed. Pls let me know if you can still reproduce in a more recent version
_Originally posted by @meganrogge in https://github.com/microsoft/vscode-remote-release/issues/7083#issuecomment-1373878654_
This issue is happening again with VSCode 1.94.0 and Remote - SSH v0.115.0 (I just installed this extension, don't know if it broke recently).
I have my HISTFILE defined in ~/.zshenv and when I connect to the machine via this extension, ~/.zsh_history gets recreated. (XDG_DATA_HOME is also defined in that file and it is correctly used by the "Server Install Path" setting.)
edit: Disabling shell integration in VSCode (as suggested in the original issue) seems to be a workaround for this issue. | bug,terminal-shell-zsh | low | Minor |
2,586,385,312 | tensorflow | tf.math.is_strictly_increasing's behavior is not clear on a (2,2) matrix | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.17.0
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
When receiving this input:
```
x = tf.constant([[1,2],[2,3]])
```
`tf.math.is_strictly_increasing` outputs `False` instead of `True`.
Also, for a tensor with shape (1,3,3):
```
x = tf.constant([[[-0.3188535, -1.6029806, -1.5352179],
[-0.5704009, -0.2167283, 0.2548743 ],
[-0.14944994, 2.0107825, -0.09678416]]])
```
Its output is still `False` instead of `True`, even though x's first dimension only has one element.
Based on the description "Elements of x are compared in row-major order.", it seems that the elements of x are compared along the first dimension (row by row).
Therefore, to my understanding, if the first dimension contains only one element (such as a 1x3x3 tensor), the output should be `True`. If the input is [[1,2],[3,4]], the output should also be `True`, since the values increase along the first dimension (from `[1,2]` to `[3,4]`).
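For what it's worth, both observed outputs are consistent with the op comparing consecutive elements of the fully flattened tensor (my interpretation of "row-major order", not a documented contract). A minimal NumPy sketch of that reading:

```python
import numpy as np

def is_strictly_increasing_flattened(x) -> bool:
    # Flatten in row-major (C) order, then require every element to be
    # strictly greater than its predecessor.
    flat = np.asarray(x).reshape(-1)
    return bool(np.all(flat[1:] > flat[:-1]))

print(is_strictly_increasing_flattened([[1, 2], [2, 3]]))  # False: 2 > 2 fails
print(is_strictly_increasing_flattened([[1, 2], [3, 4]]))  # True: 1, 2, 3, 4
```

Under this reading, [[1,2],[2,3]] flattens to [1,2,2,3], and the repeated 2 makes the result False, matching the reported behavior.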
### Standalone code to reproduce the issue
```python
import tensorflow as tf
x = tf.constant([[1,2],[2,3]])
print(tf.math.is_strictly_increasing(x)) # False
x = tf.constant([[[-0.3188535, -1.6029806, -1.5352179],
[-0.5704009, -0.2167283, 0.2548743 ],
[-0.14944994, 2.0107825, -0.09678416]]])
print(tf.math.is_strictly_increasing(x)) # False
```
### Relevant log output
_No response_ | stat:awaiting tensorflower,type:bug,comp:ops,2.17 | medium | Critical |
2,586,398,831 | electron | [Bug]: capturePage image quality degraded when capturing transformed webContents | ### Preflight Checklist
- [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project.
- [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to.
- [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a bug report that matches the one I want to file, without success.
### Electron Version
29.0.0-alpha.1 and up
### What operating system(s) are you using?
Ubuntu
### Operating System Version
23.04
### What arch are you using?
x64
### Last Known Working Electron version
28.x
### Expected Behavior
When calling capturePage on a transformed webContents, the image remains sharp.
### Actual Behavior
Before Electron 29, screenshots made with `capturePage()` of a `webContents` with an applied transform (such as a `<webview>` with `transform: scale()` in its CSS) were captured at the original size, producing sharp screenshots.
From Electron 29 onwards, the capture appears to be taken from the transformed view rather than the original one. The size properties of the created nativeImage still reflect the `webContents`' original size, which results in much blurrier images.
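As a rough illustration (plain NumPy, not Electron code, under my reading of the report): if the capture is taken from a scaled-down view and then presented at the original size, high-frequency detail is irrecoverably averaged away:

```python
import numpy as np

# A 1-D "page" with maximum-contrast detail (alternating black/white pixels).
page = np.array([0, 255] * 8, dtype=float)

# Capture of the 0.5x-transformed view: neighboring pixels get averaged...
low_res = page.reshape(-1, 2).mean(axis=1)

# ...but the nativeImage still reports the original width, so the data is
# effectively stretched back by repetition.
stretched = np.repeat(low_res, 2)

print(np.array_equal(page, stretched))  # False: the detail is gone
```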
### Testcase Gist URL
https://gist.github.com/Kilian/3fdb897d2e3c290ed53b3c2402a8aeec
### Additional Information
1. Open the fiddle with Electron 28.x (any version) and click on the "capture screenshot" button.
2. See that the resulting image in the canvas below the button is relatively sharp
3. Start the fiddle with any Electron 29 (starting with alpha.1)
4. See that the resulting image in the canvas is much less sharp
### Screenshots
| Electron 28 | Electron 29 |
|------------------|-----------------|
|  |  |
| platform/linux,bug :beetle:,has-repro-gist,29-x-y,30-x-y,33-x-y,34-x-y | low | Critical |
2,586,398,917 | opencv | OpenCL-OpenGL interop context creation fails on recent Linux/NVIDIA-proprietary (error code: -9999) | ### System Information
OpenCV version: Current 4.x HEAD (8e5dbc03fe0c8264e667de5bbae4d0ab04dcab6b)
Operating System / Platform: openSUSE Tumbleweed-Slowroll.04 (Version: 20241002)
Compiler & compiler version: gcc (SUSE Linux) 14.2.1 20241007 [revision 4af44f2cf7d281f3e4f3957efce10e8b2ccb2ad3]
Kernel: 6.11.0-1-default
Nvidia Driver: Proprietary & Open NVIDIA > 535.43.09 (?)
OpenGL:
3.2.0 NVIDIA 560.35.03
NVIDIA GeForce RTX 4070 Ti/PCIe/SSE2
OpenCL Platforms:
* OpenCL 3.0 CUDA 12.6.65 = NVIDIA CUDA
GL sharing: true
VAAPI media sharing: false
### clinfo excerpt
Number of platforms 1
Platform Name NVIDIA CUDA
Platform Vendor NVIDIA Corporation
Platform Version OpenCL 3.0 CUDA 12.6.65
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_device_uuid cl_khr_pci_bus_info cl_khr_external_semaphore cl_khr_external_memory cl_khr_external_semaphore_opaque_fd cl_khr_external_memory_opaque_fd
Platform Extensions with Version cl_khr_global_int32_base_atomics 0x400000 (1.0.0)
cl_khr_global_int32_extended_atomics 0x400000 (1.0.0)
cl_khr_local_int32_base_atomics 0x400000 (1.0.0)
cl_khr_local_int32_extended_atomics 0x400000 (1.0.0)
cl_khr_fp64 0x400000 (1.0.0)
cl_khr_3d_image_writes 0x400000 (1.0.0)
cl_khr_byte_addressable_store 0x400000 (1.0.0)
cl_khr_icd 0x400000 (1.0.0)
cl_khr_gl_sharing 0x400000 (1.0.0)
cl_nv_compiler_options 0x400000 (1.0.0)
cl_nv_device_attribute_query 0x400000 (1.0.0)
cl_nv_pragma_unroll 0x400000 (1.0.0)
cl_nv_copy_opts 0x400000 (1.0.0)
cl_nv_create_buffer 0x400000 (1.0.0)
cl_khr_int64_base_atomics 0x400000 (1.0.0)
cl_khr_int64_extended_atomics 0x400000 (1.0.0)
cl_khr_device_uuid 0x400000 (1.0.0)
cl_khr_pci_bus_info 0x400000 (1.0.0)
cl_khr_external_semaphore 0x9000 (0.9.0)
cl_khr_external_memory 0x9000 (0.9.0)
cl_khr_external_semaphore_opaque_fd 0x9000 (0.9.0)
cl_khr_external_memory_opaque_fd 0x9000 (0.9.0)
Device Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_device_uuid cl_khr_pci_bus_info cl_khr_external_semaphore cl_khr_external_memory cl_khr_external_semaphore_opaque_fd cl_khr_external_memory_opaque_fd
Device Extensions with Version cl_khr_global_int32_base_atomics 0x400000 (1.0.0)
cl_khr_global_int32_extended_atomics 0x400000 (1.0.0)
cl_khr_local_int32_base_atomics 0x400000 (1.0.0)
cl_khr_local_int32_extended_atomics 0x400000 (1.0.0)
cl_khr_fp64 0x400000 (1.0.0)
cl_khr_3d_image_writes 0x400000 (1.0.0)
cl_khr_byte_addressable_store 0x400000 (1.0.0)
cl_khr_icd 0x400000 (1.0.0)
cl_khr_gl_sharing 0x400000 (1.0.0)
cl_nv_compiler_options 0x400000 (1.0.0)
cl_nv_device_attribute_query 0x400000 (1.0.0)
cl_nv_pragma_unroll 0x400000 (1.0.0)
cl_nv_copy_opts 0x400000 (1.0.0)
cl_nv_create_buffer 0x400000 (1.0.0)
cl_khr_int64_base_atomics 0x400000 (1.0.0)
cl_khr_int64_extended_atomics 0x400000 (1.0.0)
cl_khr_device_uuid 0x400000 (1.0.0)
cl_khr_pci_bus_info 0x400000 (1.0.0)
cl_khr_external_semaphore 0x9000 (0.9.0)
cl_khr_external_memory 0x9000 (0.9.0)
cl_khr_external_semaphore_opaque_fd 0x9000 (0.9.0)
cl_khr_external_memory_opaque_fd 0x9000 (0.9.0)
### glxinfo excerpt
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: NVIDIA GeForce RTX 4070 Ti/PCIe/SSE2
OpenGL core profile version string: 4.6.0 NVIDIA 560.35.03
OpenGL core profile shading language version string: 4.60 NVIDIA
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
GL_AMD_multi_draw_indirect, GL_AMD_seamless_cubemap_per_texture,
GL_AMD_vertex_shader_layer, GL_AMD_vertex_shader_viewport_index,
GL_ARB_ES2_compatibility, GL_ARB_ES3_1_compatibility,
GL_ARB_ES3_2_compatibility, GL_ARB_ES3_compatibility,
GL_ARB_arrays_of_arrays, GL_ARB_base_instance, GL_ARB_bindless_texture,
GL_ARB_blend_func_extended, GL_ARB_buffer_storage,
GL_ARB_clear_buffer_object, GL_ARB_clear_texture, GL_ARB_clip_control,
GL_ARB_color_buffer_float, GL_ARB_compressed_texture_pixel_storage,
GL_ARB_compute_shader, GL_ARB_compute_variable_group_size,
GL_ARB_conditional_render_inverted, GL_ARB_conservative_depth,
GL_ARB_copy_buffer, GL_ARB_copy_image, GL_ARB_cull_distance,
GL_ARB_debug_output, GL_ARB_depth_buffer_float, GL_ARB_depth_clamp,
GL_ARB_depth_texture, GL_ARB_derivative_control,
GL_ARB_direct_state_access, GL_ARB_draw_buffers,
GL_ARB_draw_buffers_blend, GL_ARB_draw_elements_base_vertex,
GL_ARB_draw_indirect, GL_ARB_draw_instanced, GL_ARB_enhanced_layouts,
GL_ARB_explicit_attrib_location, GL_ARB_explicit_uniform_location,
GL_ARB_fragment_coord_conventions, GL_ARB_fragment_layer_viewport,
GL_ARB_fragment_program, GL_ARB_fragment_program_shadow,
GL_ARB_fragment_shader, GL_ARB_fragment_shader_interlock,
GL_ARB_framebuffer_no_attachments, GL_ARB_framebuffer_object,
GL_ARB_framebuffer_sRGB, GL_ARB_geometry_shader4,
GL_ARB_get_program_binary, GL_ARB_get_texture_sub_image, GL_ARB_gl_spirv,
GL_ARB_gpu_shader5, GL_ARB_gpu_shader_fp64, GL_ARB_gpu_shader_int64,
GL_ARB_half_float_pixel, GL_ARB_half_float_vertex, GL_ARB_imaging,
GL_ARB_indirect_parameters, GL_ARB_instanced_arrays,
GL_ARB_internalformat_query, GL_ARB_internalformat_query2,
GL_ARB_invalidate_subdata, GL_ARB_map_buffer_alignment,
GL_ARB_map_buffer_range, GL_ARB_multi_bind, GL_ARB_multi_draw_indirect,
GL_ARB_multisample, GL_ARB_multitexture, GL_ARB_occlusion_query,
GL_ARB_occlusion_query2, GL_ARB_parallel_shader_compile,
GL_ARB_pipeline_statistics_query, GL_ARB_pixel_buffer_object,
GL_ARB_point_parameters, GL_ARB_point_sprite, GL_ARB_polygon_offset_clamp,
GL_ARB_post_depth_coverage, GL_ARB_program_interface_query,
GL_ARB_provoking_vertex, GL_ARB_query_buffer_object,
GL_ARB_robust_buffer_access_behavior, GL_ARB_robustness,
GL_ARB_sample_locations, GL_ARB_sample_shading, GL_ARB_sampler_objects,
GL_ARB_seamless_cube_map, GL_ARB_seamless_cubemap_per_texture,
GL_ARB_separate_shader_objects, GL_ARB_shader_atomic_counter_ops,
GL_ARB_shader_atomic_counters, GL_ARB_shader_ballot,
GL_ARB_shader_bit_encoding, GL_ARB_shader_clock,
GL_ARB_shader_draw_parameters, GL_ARB_shader_group_vote,
GL_ARB_shader_image_load_store, GL_ARB_shader_image_size,
GL_ARB_shader_objects, GL_ARB_shader_precision,
GL_ARB_shader_storage_buffer_object, GL_ARB_shader_subroutine,
GL_ARB_shader_texture_image_samples, GL_ARB_shader_texture_lod,
GL_ARB_shader_viewport_layer_array, GL_ARB_shading_language_100,
GL_ARB_shading_language_420pack, GL_ARB_shading_language_include,
GL_ARB_shading_language_packing, GL_ARB_shadow, GL_ARB_sparse_buffer,
GL_ARB_sparse_texture, GL_ARB_sparse_texture2,
GL_ARB_sparse_texture_clamp, GL_ARB_spirv_extensions,
GL_ARB_stencil_texturing, GL_ARB_sync, GL_ARB_tessellation_shader,
GL_ARB_texture_barrier, GL_ARB_texture_border_clamp,
GL_ARB_texture_buffer_object, GL_ARB_texture_buffer_object_rgb32,
GL_ARB_texture_buffer_range, GL_ARB_texture_compression,
GL_ARB_texture_compression_bptc, GL_ARB_texture_compression_rgtc,
GL_ARB_texture_cube_map, GL_ARB_texture_cube_map_array,
GL_ARB_texture_env_add, GL_ARB_texture_env_combine,
GL_ARB_texture_env_crossbar, GL_ARB_texture_env_dot3,
GL_ARB_texture_filter_anisotropic, GL_ARB_texture_filter_minmax,
GL_ARB_texture_float, GL_ARB_texture_gather,
GL_ARB_texture_mirror_clamp_to_edge, GL_ARB_texture_mirrored_repeat,
GL_ARB_texture_multisample, GL_ARB_texture_non_power_of_two,
GL_ARB_texture_query_levels, GL_ARB_texture_query_lod,
GL_ARB_texture_rectangle, GL_ARB_texture_rg, GL_ARB_texture_rgb10_a2ui,
GL_ARB_texture_stencil8, GL_ARB_texture_storage,
GL_ARB_texture_storage_multisample, GL_ARB_texture_swizzle,
GL_ARB_texture_view, GL_ARB_timer_query, GL_ARB_transform_feedback2,
GL_ARB_transform_feedback3, GL_ARB_transform_feedback_instanced,
GL_ARB_transform_feedback_overflow_query, GL_ARB_transpose_matrix,
GL_ARB_uniform_buffer_object, GL_ARB_vertex_array_bgra,
GL_ARB_vertex_array_object, GL_ARB_vertex_attrib_64bit,
GL_ARB_vertex_attrib_binding, GL_ARB_vertex_buffer_object,
GL_ARB_vertex_program, GL_ARB_vertex_shader,
GL_ARB_vertex_type_10f_11f_11f_rev, GL_ARB_vertex_type_2_10_10_10_rev,
GL_ARB_viewport_array, GL_ARB_window_pos, GL_ATI_draw_buffers,
GL_ATI_texture_float, GL_ATI_texture_mirror_once,
GL_EXTX_framebuffer_mixed_formats, GL_EXT_Cg_shader, GL_EXT_abgr,
GL_EXT_bgra, GL_EXT_bindable_uniform, GL_EXT_blend_color,
GL_EXT_blend_equation_separate, GL_EXT_blend_func_separate,
GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_compiled_vertex_array,
GL_EXT_depth_bounds_test, GL_EXT_direct_state_access,
GL_EXT_draw_buffers2, GL_EXT_draw_instanced, GL_EXT_draw_range_elements,
GL_EXT_fog_coord, GL_EXT_framebuffer_blit, GL_EXT_framebuffer_multisample,
GL_EXT_framebuffer_multisample_blit_scaled, GL_EXT_framebuffer_object,
GL_EXT_framebuffer_sRGB, GL_EXT_geometry_shader4,
GL_EXT_gpu_program_parameters, GL_EXT_gpu_shader4,
GL_EXT_import_sync_object, GL_EXT_memory_object, GL_EXT_memory_object_fd,
GL_EXT_multi_draw_arrays, GL_EXT_multiview_texture_multisample,
GL_EXT_multiview_timer_query, GL_EXT_packed_depth_stencil,
GL_EXT_packed_float, GL_EXT_packed_pixels, GL_EXT_pixel_buffer_object,
GL_EXT_point_parameters, GL_EXT_polygon_offset_clamp,
GL_EXT_post_depth_coverage, GL_EXT_provoking_vertex,
GL_EXT_raster_multisample, GL_EXT_rescale_normal, GL_EXT_secondary_color,
GL_EXT_semaphore, GL_EXT_semaphore_fd, GL_EXT_separate_shader_objects,
GL_EXT_separate_specular_color, GL_EXT_shader_image_load_formatted,
GL_EXT_shader_image_load_store, GL_EXT_shader_integer_mix,
GL_EXT_shadow_funcs, GL_EXT_sparse_texture2, GL_EXT_stencil_two_side,
GL_EXT_stencil_wrap, GL_EXT_texture3D, GL_EXT_texture_array,
GL_EXT_texture_buffer_object, GL_EXT_texture_compression_dxt1,
GL_EXT_texture_compression_latc, GL_EXT_texture_compression_rgtc,
GL_EXT_texture_compression_s3tc, GL_EXT_texture_cube_map,
GL_EXT_texture_edge_clamp, GL_EXT_texture_env_add,
GL_EXT_texture_env_combine, GL_EXT_texture_env_dot3,
GL_EXT_texture_filter_anisotropic, GL_EXT_texture_filter_minmax,
GL_EXT_texture_integer, GL_EXT_texture_lod, GL_EXT_texture_lod_bias,
GL_EXT_texture_mirror_clamp, GL_EXT_texture_object, GL_EXT_texture_sRGB,
GL_EXT_texture_sRGB_R8, GL_EXT_texture_sRGB_decode,
GL_EXT_texture_shadow_lod, GL_EXT_texture_shared_exponent,
GL_EXT_texture_storage, GL_EXT_texture_swizzle, GL_EXT_timer_query,
GL_EXT_transform_feedback2, GL_EXT_vertex_array, GL_EXT_vertex_array_bgra,
GL_EXT_vertex_attrib_64bit, GL_EXT_window_rectangles,
GL_EXT_x11_sync_object, GL_IBM_rasterpos_clip,
GL_IBM_texture_mirrored_repeat, GL_KHR_blend_equation_advanced,
GL_KHR_blend_equation_advanced_coherent, GL_KHR_context_flush_control,
GL_KHR_debug, GL_KHR_no_error, GL_KHR_parallel_shader_compile,
GL_KHR_robust_buffer_access_behavior, GL_KHR_robustness,
GL_KHR_shader_subgroup, GL_KTX_buffer_region,
GL_NVX_blend_equation_advanced_multi_draw_buffers,
GL_NVX_conditional_render, GL_NVX_gpu_memory_info, GL_NVX_nvenc_interop,
GL_NVX_progress_fence, GL_NV_ES1_1_compatibility,
GL_NV_ES3_1_compatibility, GL_NV_alpha_to_coverage_dither_control,
GL_NV_bindless_multi_draw_indirect,
GL_NV_bindless_multi_draw_indirect_count, GL_NV_bindless_texture,
GL_NV_blend_equation_advanced, GL_NV_blend_equation_advanced_coherent,
GL_NV_blend_minmax_factor, GL_NV_blend_square, GL_NV_clip_space_w_scaling,
GL_NV_command_list, GL_NV_compute_program5,
GL_NV_compute_shader_derivatives, GL_NV_conditional_render,
GL_NV_conservative_raster, GL_NV_conservative_raster_dilate,
GL_NV_conservative_raster_pre_snap,
GL_NV_conservative_raster_pre_snap_triangles,
GL_NV_conservative_raster_underestimation, GL_NV_copy_depth_to_color,
GL_NV_copy_image, GL_NV_depth_buffer_float, GL_NV_depth_clamp,
GL_NV_draw_texture, GL_NV_draw_vulkan_image, GL_NV_explicit_multisample,
GL_NV_feature_query, GL_NV_fence, GL_NV_fill_rectangle,
GL_NV_float_buffer, GL_NV_fog_distance, GL_NV_fragment_coverage_to_color,
GL_NV_fragment_program, GL_NV_fragment_program2,
GL_NV_fragment_program_option, GL_NV_fragment_shader_barycentric,
GL_NV_fragment_shader_interlock, GL_NV_framebuffer_mixed_samples,
GL_NV_framebuffer_multisample_coverage, GL_NV_geometry_shader4,
GL_NV_geometry_shader_passthrough, GL_NV_gpu_multicast,
GL_NV_gpu_program4, GL_NV_gpu_program4_1, GL_NV_gpu_program5,
GL_NV_gpu_program5_mem_extended, GL_NV_gpu_program_fp64,
GL_NV_gpu_program_multiview, GL_NV_gpu_shader5, GL_NV_half_float,
GL_NV_internalformat_sample_query, GL_NV_light_max_exponent,
GL_NV_memory_attachment, GL_NV_memory_object_sparse, GL_NV_mesh_shader,
GL_NV_multisample_coverage, GL_NV_multisample_filter_hint,
GL_NV_occlusion_query, GL_NV_packed_depth_stencil,
GL_NV_parameter_buffer_object, GL_NV_parameter_buffer_object2,
GL_NV_path_rendering, GL_NV_path_rendering_shared_edge,
GL_NV_pixel_data_range, GL_NV_point_sprite, GL_NV_primitive_restart,
GL_NV_primitive_shading_rate, GL_NV_query_resource,
GL_NV_query_resource_tag, GL_NV_register_combiners,
GL_NV_register_combiners2, GL_NV_representative_fragment_test,
GL_NV_robustness_video_memory_purge, GL_NV_sample_locations,
GL_NV_sample_mask_override_coverage, GL_NV_scissor_exclusive,
GL_NV_shader_atomic_counters, GL_NV_shader_atomic_float,
GL_NV_shader_atomic_float64, GL_NV_shader_atomic_fp16_vector,
GL_NV_shader_atomic_int64, GL_NV_shader_buffer_load,
GL_NV_shader_storage_buffer_object, GL_NV_shader_subgroup_partitioned,
GL_NV_shader_texture_footprint, GL_NV_shader_thread_group,
GL_NV_shader_thread_shuffle, GL_NV_shading_rate_image,
GL_NV_stereo_view_rendering, GL_NV_texgen_reflection,
GL_NV_texture_barrier, GL_NV_texture_compression_vtc,
GL_NV_texture_dirty_tile_map, GL_NV_texture_env_combine4,
GL_NV_texture_multisample, GL_NV_texture_rectangle,
GL_NV_texture_rectangle_compressed, GL_NV_texture_shader,
GL_NV_texture_shader2, GL_NV_texture_shader3, GL_NV_timeline_semaphore,
GL_NV_transform_feedback, GL_NV_transform_feedback2,
GL_NV_uniform_buffer_std430_layout, GL_NV_uniform_buffer_unified_memory,
GL_NV_vdpau_interop, GL_NV_vdpau_interop2, GL_NV_vertex_array_range,
GL_NV_vertex_array_range2, GL_NV_vertex_attrib_integer_64bit,
GL_NV_vertex_buffer_unified_memory, GL_NV_vertex_program,
GL_NV_vertex_program1_1, GL_NV_vertex_program2,
GL_NV_vertex_program2_option, GL_NV_vertex_program3,
GL_NV_viewport_array2, GL_NV_viewport_swizzle, GL_OVR_multiview,
GL_OVR_multiview2, GL_S3_s3tc, GL_SGIS_generate_mipmap,
GL_SGIS_texture_lod, GL_SGIX_depth_texture, GL_SGIX_shadow,
GL_SUN_slice_accum
### Detailed description
OpenCL-OpenGL context creation fails in [opengl.cpp](https://github.com/opencv/opencv/blob/8e5dbc03fe0c8264e667de5bbae4d0ab04dcab6b/modules/core/src/opengl.cpp#L1706) with error code -9999 (which seems to be NVIDIA's vendor-specific way of saying a buffer has been exceeded).
The last NVIDIA driver this worked with for me was 535.43.09 (proprietary; no open driver existed yet). I don't remember the exact kernel, and rolling back to that very old kernel is not an option. Trying out different kernel/driver combinations is also hindered by issues like https://github.com/NVIDIA/open-gpu-kernel-modules/issues/642, but I at least tried versions 555 and 550 of the NVIDIA proprietary driver and 560 of both the proprietary and open flavors.
There is no test for CL-GL interop yet, so I will write one. That NVIDIA has opened the graphics driver doesn't help much, since the OpenCL implementation is part of the CUDA toolkit. At the moment I use V4D to exercise CL-GL sharing, and it still works on Intel: https://github.com/kallaballa/V4D/blob/929c4f6540a1118f61f32e883c3005a91b3c57f0/modules/v4d/src/detail/framebuffercontext.cpp#L217
### Steps to reproduce
```bash
apt install clinfo libqt5opengl5-dev freeglut3-dev ocl-icd-opencl-dev libavcodec-dev libavdevice-dev libavfilter-dev libavformat-dev libavutil-dev libpostproc-dev libswresample-dev libswscale-dev libglfw3-dev libstb-dev libglew-dev cmake make git-core build-essential opencl-clhpp-headers pkg-config zlib1g-dev doxygen libxinerama-dev libxcursor-dev libxi-dev libva-dev yt-dlp wget intel-opencl-icd ca-certificates
git clone https://github.com/opencv/opencv.git
git clone --branch next_release https://github.com/kallaballa/V4D.git
mkdir opencv/build
cd opencv/build
cmake -DCMAKE_CXX_FLAGS="-DCL_TARGET_OPENCL_VERSION=120" -DINSTALL_BIN_EXAMPLES=OFF -DCMAKE_BUILD_TYPE=Release -DCV_TRACE=OFF -DBUILD_SHARED_LIBS=ON -DWITH_OPENGL=ON -DOPENCV_ENABLE_EGL=OFF -DOPENCV_ENABLE_GLX=ON -DOPENCV_FFMPEG_ENABLE_LIBAVDEVICE=ON -DWITH_QT=ON -DWITH_FFMPEG=ON -DOPENCV_FFMPEG_SKIP_BUILD_CHECK=ON -DWITH_VA=OFF -DWITH_VA_INTEL=OFF -DWITH_1394=OFF -DWITH_ADE=OFF -DWITH_VTK=OFF -DWITH_EIGEN=OFF -DWITH_GTK=OFF -DWITH_GTK_2_X=OFF -DWITH_IPP=OFF -DWITH_JASPER=OFF -DWITH_WEBP=OFF -DWITH_OPENEXR=OFF -DWITH_OPENVX=OFF -DWITH_OPENNI=OFF -DWITH_OPENNI2=OFF -DWITH_TBB=OFF -DWITH_TIFF=OFF -DWITH_OPENCL=ON -DWITH_OPENCL_SVM=OFF -DWITH_OPENCLAMDFFT=OFF -DWITH_OPENCLAMDBLAS=OFF -DWITH_GPHOTO2=OFF -DWITH_LAPACK=OFF -DWITH_ITT=OFF -DWITH_QUIRC=ON -DBUILD_ZLIB=OFF -DBUILD_opencv_apps=OFF -DBUILD_opencv_calib3d=ON -DBUILD_opencv_ccalib=OFF -DBUILD_opencv_dnn=ON -DBUILD_opencv_features2d=ON -DBUILD_opencv_flann=ON -DBUILD_opencv_gapi=OFF -DBUILD_opencv_ml=OFF -DBUILD_opencv_photo=ON -DBUILD_opencv_imgcodecs=ON -DBUILD_opencv_shape=OFF -DBUILD_opencv_videoio=ON -DBUILD_opencv_videostab=OFF -DBUILD_opencv_highgui=ON -DBUILD_opencv_superres=OFF -DBUILD_opencv_stitching=ON -DBUILD_opencv_java=OFF -DBUILD_opencv_js=OFF -DBUILD_opencv_python2=OFF -DBUILD_opencv_python3=OFF -DBUILD_opencv_alphamat=OFF -DBUILD_opencv_aruco=OFF -DBUILD_opencv_barcode=OFF -DBUILD_opencv_bgsegm=OFF -DBUILD_opencv_bioinspired=OFF -DBUILD_opencv_ccalib=ON -DBUILD_opencv_cnn_3dobj=OFF -DBUILD_opencv_cudaarithm=OFF -DBUILD_opencv_cudabgsegm=OFF -DBUILD_opencv_cudacodec=OFF -DBUILD_opencv_cudafeatures2d=OFF -DBUILD_opencv_cudafilters=OFF -DBUILD_opencv_cudaimgproc=OFF -DBUILD_opencv_cudalegacy=OFF -DBUILD_opencv_cudaobjdetect=OFF -DBUILD_opencv_cudaoptflow=OFF -DBUILD_opencv_cudastereo=OFF -DBUILD_opencv_cudawarping=OFF -DBUILD_opencv_cudev=OFF -DBUILD_opencv_cvv=OFF -DBUILD_opencv_datasets=OFF -DBUILD_opencv_dnn_objdetect=OFF -DBUILD_opencv_dnns_easily_fooled=OFF -DBUILD_opencv_dnn_superres=OFF \
-DBUILD_opencv_dpm=OFF -DBUILD_opencv_face=ON -DBUILD_opencv_freetype=OFF -DBUILD_opencv_fuzzy=OFF -DBUILD_opencv_hdf=OFF -DBUILD_opencv_hfs=OFF -DBUILD_opencv_img_hash=OFF -DBUILD_opencv_intensity_transform=OFF -DBUILD_opencv_julia=OFF -DBUILD_opencv_line_descriptor=OFF -DBUILD_opencv_matlab=OFF -DBUILD_opencv_mcc=OFF -DBUILD_opencv_optflow=ON -DBUILD_opencv_ovis=OFF -DBUILD_opencv_phase_unwrapping=OFF -DBUILD_opencv_plot=ON -DBUILD_opencv_quality=OFF -DBUILD_opencv_rapid=OFF -DBUILD_opencv_README.md=OFF -DBUILD_opencv_reg=OFF -DBUILD_opencv_rgbd=OFF -DBUILD_opencv_saliency=OFF -DBUILD_opencv_sfm=OFF -DBUILD_opencv_shape=OFF -DBUILD_opencv_stereo=OFF -DBUILD_opencv_structured_light=OFF -DBUILD_opencv_superres=OFF -DBUILD_opencv_surface_matching=OFF -DBUILD_opencv_text=OFF -DBUILD_opencv_tracking=ON -DBUILD_opencv_videostab=OFF -DBUILD_opencv_viz=OFF -DBUILD_opencv_wechat_qrcode=OFF -DBUILD_opencv_xfeatures2d=OFF -DBUILD_opencv_ximgproc=ON -DBUILD_opencv_xobjdetect=OFF -DBUILD_opencv_xphoto=OFF -DBUILD_opencv_world=OFF -DBUILD_EXAMPLES=ON -DBUILD_PACKAGE=OFF -DBUILD_TESTS=ON -DBUILD_PERF_TESTS=ON -DBUILD_DOCS=OFF -DWITH_PTHREADS_PF=ON -DCV_ENABLE_INTRINSICS=ON -DBUILD_opencv_video=ON -DBUILD_opencv_v4d=ON -DGBFX_CONFIG_MULTITHREADED=OFF -DBGFX_CONFIG_PASSIVE=ON -DOPENCV_EXTRA_MODULES_PATH="../../V4D/modules/" ..
make -j`nproc` example_v4d_display_image_fb
bin/example_v4d_display_image_fb
```
```
[ WARN:0@0.163] global framebuffercontext.cpp:287 init CL-GL sharing failed: %sOpenCV(4.10.0-dev) /home/elchaschab/tmp/opencv/modules/core/src/opengl.cpp:1719: error: (-222:Unknown error code -222) OpenCL: Can't create context for OpenGL interop in function 'initializeContextFromGL'
[ WARN:0@0.164] global samples.cpp:61 findFile cv::samples::findFile('lena.jpg') => '/home/elchaschab/tmp/opencv/build/..//samples/data/lena.jpg'
[ INFO:0@0.165] V4D v4d.hpp:1963 run Starting with 1 workers
[ WARN:0@0.166] V4D v4d.hpp:1976 run Temporary setting log level to warning.
OpenGL:
3.2.0 NVIDIA 560.35.03
NVIDIA GeForce RTX 4070 Ti/PCIe/SSE2
OpenCL Platforms:
* OpenCL 3.0 CUDA 12.6.65 = NVIDIA CUDA
GL sharing: true
VAAPI media sharing: false
[ INFO:0@0.166] V4D v4d.hpp:332 run Display thread started.
[ WARN:1@0.224] global framebuffercontext.cpp:287 init CL-GL sharing failed: %sOpenCV(4.10.0-dev) /home/elchaschab/tmp/opencv/modules/core/src/opengl.cpp:1719: error: (-222:Unknown error code -222) OpenCL: Can't create context for OpenGL interop in function 'initializeContextFromGL'
```
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [ ] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: core,category: ocl | medium | Critical |
2,586,428,434 | tensorflow | tf.math.special.bessel_* has inconsistent result with scipy | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.17.0
### Custom code
Yes
### OS platform and distribution
_No response_
### Mobile device
_No response_
### Python version
_No response_
### Bazel version
_No response_
### GCC/compiler version
_No response_
### CUDA/cuDNN version
_No response_
### GPU model and memory
_No response_
### Current behavior?
Based on the documentation, special functions such as `bessel_y0` should produce results consistent with scipy. However, when receiving `-inf`, they produce results inconsistent with scipy. Please check the reproducer below for details.
### Standalone code to reproduce the issue
```python
import scipy
import numpy as np
import tensorflow as tf
x = tf.constant(-np.inf, dtype='float64')
print("TF:", tf.math.special.bessel_y1(x))
print("Scipy: ", scipy.special.y1(x))
print("TF:", tf.math.special.bessel_y0(x))
print("Scipy: ", scipy.special.y0(x))
print("TF:", tf.math.special.bessel_k0(x))
print("Scipy: ", scipy.special.k0(x))
print("TF:", tf.math.special.bessel_k1(x))
print("Scipy: ", scipy.special.k1(x))
```
### Relevant log output
```shell
TF: tf.Tensor(-inf, shape=(), dtype=float64)
Scipy: nan
TF: tf.Tensor(-inf, shape=(), dtype=float64)
Scipy: nan
TF: tf.Tensor(inf, shape=(), dtype=float64)
Scipy: nan
TF: tf.Tensor(inf, shape=(), dtype=float64)
Scipy: nan
```
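For context, the scipy side of the discrepancy is a domain question: the real-valued Bessel functions of the second kind (`y0`, `y1`) and the modified Bessel functions (`k0`, `k1`) are only defined for `x > 0`, so scipy reports `nan` outside that domain, while the TF kernels appear to propagate the infinity instead of performing a domain check. A quick check of the scipy behavior:

```python
import numpy as np
from scipy import special

# All four functions are undefined for x <= 0, so scipy returns nan for
# -inf (and other negative inputs) rather than propagating the infinity.
for f in (special.y0, special.y1, special.k0, special.k1):
    print(f.__name__, f(-np.inf))
```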
| stat:awaiting tensorflower,type:bug,comp:ops,2.17 | medium | Critical |
2,586,480,401 | pytorch | Any plan to support flash attention 3 for hopper GPUs? | ### 🚀 The feature, motivation and pitch
Flash Attention 3 (https://github.com/Dao-AILab/flash-attention) has been in beta for some time. I tested it on H100 GPUs with CUDA 12.3 and also attempted a simple merge with PyTorch. I understand that it will take some time before the stable release is ready, but I believe some structural refactoring can be started first, e.g., FA3 relies heavily on the Hopper architecture, so for other architectures the `enable_flash` option of the SDP kernel still needs to fall back to the FA2 backend.
I wonder if anyone has already planned or been working on this. As far as I can see, @drisspg is still working on adding 2.6.2. I am willing to contribute but don't want to duplicate any effort.
### Alternatives
_No response_
### Additional context
_No response_
cc @jbschlosser @bhosmer @cpuhrsch @erichan1 @drisspg @mikaylagawarecki | triaged,module: sdpa | low | Major |
2,586,511,036 | go | proposal: net/http: configurable shutdown idle timeout for Server | ### Proposal Details
Currently, when gracefully terminating an http1 server using `http.Server#Shutdown()`, it will:
1. Close all listeners
2. Call user-supplied shutdown hooks in goroutines (w/o waiting for them to finish - https://github.com/golang/go/issues/32116)
3. Treat active connections that haven't received request headers within 5 seconds as idle (https://github.com/golang/go/issues/22682, https://github.com/golang/go/issues/59037)
4. Close idle connections
5. Loop from 2 until all connections are idle and closed
I'd like to hear the community's thoughts about minor changes to the points above,
which I believe could help implement a safer graceful termination flow:
### 1. User-supplied hooks
Would it make more sense to wait for the hooks before exiting `Shutdown()`?
Otherwise we don't guarantee the hooks finish executing before `Shutdown()` returns, and the process is potentially terminated while they are still running.
#### Proposal
- Manage hooks in waitgroup in `http.Server#Shutdown()`
- Wait for waitgroup to finish before returning
### 2. Hard-coded 5 seconds for treating active connections as idle
The code mentions:
```
// Issue 22682: treat StateNew connections as if
// they're idle if we haven't read the first request's
// header in over 5 seconds.
```
#### Proposal
- Same logic as today, but utilize `ReadHeaderTimeout` and fall-back to the default of 5 seconds if it isn't set
### 3. Closing idle connections
For http1, when connections are reused, they are marked as idle as soon as a response is served;
this means that reused connections can toggle rapidly between active/idle while serving requests, and get shut down if caught idle in the loop above.
I think we should take a small nuance into account for graceful termination: a client might be just about to re-use an idle connection while the server closes it, leading to an EOF on the client side;
for non-idempotent requests (say, POST), this wouldn't be retried automatically by clients - the request is usually dropped, even if a new server is already accepting requests (say, during a version roll of a server, gracefully terminating the former).
This is the nature of http1, of course.
I would like to propose a minor change that would help servers better control this scenario, which, as I've found, can appear in critical applications (e.g. security-enforcing k8s admission controllers)
#### Proposal
Allow controlling the window of - when an idle connection is treated as idle - by introducing a `time.Duration`, where only connections that had been idle for at least that long will be closed (similar to `IdleTimeout`, but for closing)
- Introduce a new field to `http.Server` - `CloseIdleMinDuration`
- In `closeIdleConns`, get the connection state change time from `conn#getState`
- Close the connection only if `CloseIdleMinDuration` had passed since the state's unixSec
- The loop continues until it's done or the ctx is cancelled
With this, the server can configure that, say, it would only close connections that were idle for 30 seconds - increasing the odds that the connection is truly idle, and is not about to be active in a brief moment.
This has similar, but different semantics to `IdleTimeout`, which would close connections at runtime - as it would only scope it to the termination period;
this could make more sense for http1, as we can never fully guarantee a connection is not just about to be re-used.
Appreciate your time!
---
golang-nuts discussion: https://groups.google.com/g/golang-nuts/c/G6Ct8kzZiRU | Proposal | low | Minor |
2,586,609,251 | deno | truncate inlay hint by setting | I would like to request a feature to truncate inlay hints via a setting, similar to how VTSLS handles it. This would allow for cleaner, more concise hints in the editor, improving readability.
Could you add a setting to control inlay hint truncation?
https://github.com/yioneko/vtsls/commit/95b51bde14b098ffb4760630821027c6a2fe84da | upstream,suggestion,lsp | low | Minor |
2,586,609,679 | storybook | [Bug]: Storybook ArgsTypes not extracting component imported types props | ### Describe the bug
I have a simple Vue button component, and the props type is imported from another file.
example component:

But why does the storybook props table not extract the ButtonVariant & ButtonSize types?

Any help is appreciated - or did I set it up wrong?
Thank you
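As far as I know, Storybook's default Vue docgen (`vue-docgen-api`) often cannot resolve prop types imported from other modules. A hypothetical workaround sketch (the type names mirror the screenshot; whether inlining is acceptable depends on your codebase):

```vue
<!-- Inlining the unions in the same SFC usually makes them show up
     in the args table, since no cross-file type resolution is needed. -->
<script setup lang="ts">
type ButtonVariant = 'primary' | 'secondary'
type ButtonSize = 'sm' | 'md' | 'lg'

defineProps<{
  variant?: ButtonVariant
  size?: ButtonSize
}>()
</script>
```

Alternatively, I believe `@storybook/vue3-vite` can be switched to the `vue-component-meta` docgen in `.storybook/main`, which does follow imported types - worth confirming against the current docs.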
### Reproduction link
https://github.com/ardiansah47/storybook-issue
### Reproduction steps
1. Clone the repository
2. npm install
3. npm run storybook
### System
Storybook Environment Info:
System:
OS: macOS 15.0.1
CPU: (8) arm64 Apple M1
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.17.0 - ~/.nvm/versions/node/v20.17.0/bin/node
npm: 10.8.2 - ~/.nvm/versions/node/v20.17.0/bin/npm <----- active
pnpm: 9.1.4 - /opt/homebrew/bin/pnpm
Browsers:
Chrome: 129.0.6668.91
Edge: 117.0.2045.55
Safari: 18.0.1
npmPackages:
@storybook/addon-essentials: ^8.3.5 => 8.3.5
@storybook/addon-interactions: ^8.3.5 => 8.3.5
@storybook/addon-links: ^8.3.5 => 8.3.5
@storybook/addon-onboarding: ^8.3.5 => 8.3.5
@storybook/blocks: ^8.3.5 => 8.3.5
@storybook/test: ^8.3.5 => 8.3.5
@storybook/vue3: ^8.3.5 => 8.3.5
@storybook/vue3-vite: ^8.3.5 => 8.3.5
storybook: ^8.3.5 => 8.3.5
### Additional context
_No response_ | bug,vue,argtypes,docgen | low | Critical |
2,586,620,981 | go | internal/coverage/cfile: TestCoverageApis/emitToDir failures | ```
#!watchflakes
default <- pkg == "internal/coverage/cfile" && test == "TestCoverageApis/emitToDir"
```
Issue created automatically to collect these failures.
Example ([log](https://ci.chromium.org/b/8734316695817360561)):
=== RUN TestCoverageApis/emitToDir
=== PAUSE TestCoverageApis/emitToDir
=== CONT TestCoverageApis/emitToDir
emitdata_test.go:166: running: /home/swarming/.swarming/w/ir/x/t/TestCoverageApis1302527043/001/build1/harness.exe -tp emitToDir -o /home/swarming/.swarming/w/ir/x/t/TestCoverageApis1302527043/001/emitToDir-edir-y with rdir=/home/swarming/.swarming/w/ir/x/t/TestCoverageApis1302527043/001/emitToDir-rdir-y and GOCOVERDIR=false
emitdata_test.go:166: running: /home/swarming/.swarming/w/ir/x/t/TestCoverageApis1302527043/001/build1/harness.exe -tp emitToDir -o /home/swarming/.swarming/w/ir/x/t/TestCoverageApis1302527043/001/emitToDir-edir-x with rdir=/home/swarming/.swarming/w/ir/x/t/TestCoverageApis1302527043/001/emitToDir-rdir-x and GOCOVERDIR=true
emitdata_test.go:232:
internal error in coverage meta-data tracking:
encountered bad pkgID: 0 at slot: 3432 fnID: 6 numCtrs: 1
list of hard-coded runtime package IDs needs revising.
[see the comment on the 'rtPkgs' var in
...
panic: runtime error: slice bounds out of range [:4294975528] with capacity 27058
goroutine 1 gp=0xc000002380 m=0 mp=0x6f9da0 [running]:
panic({0x5a08e0?, 0xc00001a198?})
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/panic.go:806 +0x2c5 fp=0xc000076968 sp=0xc0000768b8 pc=0x4edea5
runtime.goPanicSliceAcap(0x100002028, 0x69b2)
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/panic.go:141 +0xb4 fp=0xc0000769a8 sp=0xc000076968 pc=0x46e214
internal/coverage/cfile.(*emitState).VisitFuncs(0xc0000de000, 0xc00000e078)
/home/swarming/.swarming/w/ir/x/w/goroot/src/internal/coverage/cfile/emit.go:478 +0x1154 fp=0xc000076c00 sp=0xc0000769a8 pc=0x5759b4
internal/coverage/encodecounter.(*CoverageDataWriter).writeCounters(0xc0000120f0, {0x5dc960, 0xc0000de000}, 0xc00009a040)
...
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/proc.go:435 +0x24a fp=0xc000063e20 sp=0xc000063e00 pc=0x4ee5aa
runtime.runfinq()
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/mfinal.go:193 +0x3ce fp=0xc000063fe0 sp=0xc000063e20 pc=0x42bf6e
runtime.goexit({})
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/asm_amd64.s:1700 +0x1 fp=0xc000063fe8 sp=0xc000063fe0 pc=0x4f7fe1
created by runtime.createfing in goroutine 1
/home/swarming/.swarming/w/ir/x/w/goroot/src/runtime/mfinal.go:163 +0x86
emitdata_test.go:233: running 'harness -tp emitDir': exit status 2
--- FAIL: TestCoverageApis/emitToDir (0.04s)
— [watchflakes](https://go.dev/wiki/Watchflakes)
| NeedsInvestigation | low | Critical |
2,586,621,295 | flutter | [Impeller] remove filter graph and replace with saveLayer/restore operations | Impeller's filter graph is analogous to the saveLayer/restore system that is present in the canvas dispatcher. I believe that we could replace the filter graph with this system for a smaller/simpler renderer. Essentially:
Today we have a system that conditionally wraps entities with FilterEntities based on paint state:
```
Paint paint;
Entity render_entity;
if (paint.image_filter) {
render_entity = paint.image_filter.wrap(render_entity);
}
render(render_entity);
```
I am proposing replacing that with:
```
Paint paint;
Entity render_entity;
if (paint.image_filter) {
SaveLayer(...);
}
render(render_entity);
if (paint.image_filter) {
Restore();
}
```
Along with https://github.com/flutter/engine/pull/55843, this would allow us to remove more Impeller code by delegating to display list. Essentially the filter entities would only need to handle other texture inputs, as a saveLayer with single contents is exactly analogous to entity.getSnapshot();
The only exception here is filter entities applied to textures, which should avoid writing the texture contents to a new save layer. I believe that can be special cased.
Questions:
## Depth/Clipping?
Should work fine: all filter entities are unclipped; only the final result is clipped.
## Order of operations?
I believe saveLayers apply color/image filters in a different order. We just need to make that configurable.
2,586,661,195 | ui | [bug]: Playground Model Combobox HoverCard bugs | ### Describe the bug
https://ui.shadcn.com/examples/playground
Two bugs I found for this playground example.
1. The HoverCard of the Model will not update after "No Models found." shows

2. When clicking on the HoverContent, the HoverContent will flash!

### Affected component/components
Popover, Combobox, HoverCard
### How to reproduce
1. Go to: https://ui.shadcn.com/examples/playground
2. Click on Model
3. Type something until "No Models found." shows and then hover on the CommandItem again, you'll find that the HoverCard will not update anymore
4. Also another bug is the flashing issue, when clicking on the HoverContent, the HoverContent will flash!
### Codesandbox/StackBlitz link
https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/examples/playground/components/model-selector.tsx
### Logs
_No response_
### System Info
```bash
Chrome Version 129.0.6668.101 (Official Build) (64-bit)
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,586,664,344 | next.js | [turbopack]: Can't specify issuer with @svgr/webpack loader, causes invalid url transformation in css | ### Link to the code that reproduces this issue
https://github.com/MaciejWiatr/svgr-nextjs-css-url-repro
### To Reproduce
1. Create new nextjs app with turbopack enabled for dev server (`next dev --turbo`)
2. Add `@svgr/webpack` as a dependency and as a loader to the Next.js config like the following
```js
/** @type {import('next').NextConfig} */
const nextConfig = {
reactStrictMode: true,
experimental: {
turbo: {
rules: {
"*.svg": {
loaders: ["@svgr/webpack"],
as: "*.js",
},
},
},
},
};
```
3. Install an external library that utilizes CSS svg url imports, e.g. flag-icons: `npm install flag-icons` (I will use it as the example for the next steps)
4. Add html element: `<span className="fi fi-gr"></span>`
5. Start development server `npm run dev`
6. Inspect generated css:

7. Notice the invalid svg -> js transformation and missing flag icon in the page itself
### Current vs. Expected behavior
Current behavior: svgr transforms all svg imports to js files, breaking the ones referenced from CSS.
Expected: I should be able to specify which files are transformed, and when, via e.g. `issuer`. This is possible in webpack-land, and svgr has a documented way of avoiding this bug:
https://react-svgr.com/docs/webpack/#use-svg-in-css-files
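For reference, the webpack-land workaround referred to above looks roughly like this (a sketch adapted from the linked svgr docs, not a confirmed Turbopack API - the point of this issue is that Turbopack's `rules` offer no equivalent of `issuer`/`resourceQuery`):

```js
// next.config.js (webpack only)
module.exports = {
  webpack(config) {
    // Find the existing rule that handles .svg as an asset.
    const fileLoaderRule = config.module.rules.find((rule) =>
      rule.test?.test?.('.svg'),
    );

    config.module.rules.push(
      // Keep *.svg?url imports (and CSS url() references) as plain assets.
      {
        ...fileLoaderRule,
        test: /\.svg$/i,
        resourceQuery: /url/,
      },
      // Convert other *.svg imports from JS/TS into React components.
      {
        test: /\.svg$/i,
        issuer: fileLoaderRule.issuer,
        resourceQuery: { not: [...fileLoaderRule.resourceQuery.not, /url/] },
        use: ['@svgr/webpack'],
      },
    );

    fileLoaderRule.exclude = /\.svg$/i;
    return config;
  },
};
```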
### Provide environment information
```bash
Operating System:
Platform: win32
Arch: x64
Version: Windows 10 Home
Available memory (MB): 32713
Available CPU cores: 12
Binaries:
Node: 22.2.0
npm: N/A
Yarn: N/A
pnpm: N/A
Relevant Packages:
next: 14.2.15 // Latest available version is detected (14.2.15).
eslint-config-next: N/A
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
Developer Experience, Turbopack, Webpack
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | bug,Webpack,Turbopack,linear: turbopack | low | Critical |
2,586,689,776 | ui | [bug]: The component at https://ui.shadcn.com/r/colors/violet.json was not found. | ### Describe the bug
I followed instructions to add the _select_ component https://ui.shadcn.com/docs/components/select
When I install : `pnpm dlx shadcn@latest add select`, I have directly an error:
```
pnpm dlx shadcn@latest add select ─╯
.../Library/pnpm/store/v3/tmp/dlx-63619 | +180 ++++++++++++++++++
.../Library/pnpm/store/v3/tmp/dlx-63619 | Progress: resolved 180, reused 180, downloaded 0, added 180, done
✔ Checking registry.
✔ Installing dependencies.
⠋ Updating files.
Something went wrong. Please check the error below for more details.
If the problem persists, please open an issue on GitHub.
The component at https://ui.shadcn.com/r/colors/violet.json was not found.
It may not exist at the registry. Please make sure it is a valid component.
ERROR Command failed with exit code 1: shadcn add select
pnpm: Command failed with exit code 1: shadcn add select
at makeError (/snapshot/dist/pnpm.cjs)
at handlePromise (/snapshot/dist/pnpm.cjs)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:95:5)
at async Object.handler [as dlx] (/snapshot/dist/pnpm.cjs)
at async /snapshot/dist/pnpm.cjs
at async main (/snapshot/dist/pnpm.cjs)
at async runPnpm (/snapshot/dist/pnpm.cjs)
at async /snapshot/dist/pnpm.cjs
```
### Affected component/components
Select
### How to reproduce
```sh
pnpm dlx shadcn@latest add select
```
### Codesandbox/StackBlitz link
_No response_
### Logs
_No response_
### System Info
```bash
Mac M1, iOS 18
"react": "^18.3.1",
"react-dom": "^18.3.1",
```
### Before submitting
- [X] I've made research efforts and searched the documentation
- [X] I've searched for existing issues | bug | low | Critical |
2,586,708,486 | go | proposal: cmd/go: make build cache trimming cutoff configurable | ### Proposal Details
The current policy assumes that caching is being used by a single developer. However, there are scenarios where the cache is shared across multiple developers and the cache size accumulation can have meaningful impact.
For example, CI where there can be many developers contributing code in a single day and leveraging a shared cache (e.g. GitHub Actions Cache where there is a cache size limit and also time spent loading and saving the cache). In these cases, 5 days of cache accumulation can have meaningful impacts.
Thank you for your consideration! | Proposal | low | Major |
2,586,709,059 | flutter | [Impeller] Work around compiler errors on amd64 bots. | We are running [into errors on presub on amd64 bots](https://ci.chromium.org/ui/p/flutter/builders/try/Mac%20Engine%20Drone/1217442/infra) we don't know how to debug. A workaround is to switch to Apple Silicon bots instead (and maybe file a radar). | P1,team-engine,triaged-engine | medium | Critical |
2,586,725,920 | terminal | Consolidate Tray Icons OR (alternatively) Distinguish between Elevated and Unelevated Instances | <!--
🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
I ACKNOWLEDGE THE FOLLOWING BEFORE PROCEEDING:
1. If I delete this entire template and go my own path, the core team may close my issue without further explanation or engagement.
2. If I list multiple bugs/concerns in this one issue, the core team may close my issue without further explanation or engagement.
3. If I write an issue that has many duplicates, the core team may close my issue without further explanation or engagement (and without necessarily spending time to find the exact duplicate ID number).
4. If I leave the title incomplete when filing the issue, the core team may close my issue without further explanation or engagement.
5. If I file something completely blank in the body, the core team may close my issue without further explanation or engagement.
All good? Then proceed!
-->
# Description of the new feature/enhancement
Currently, if we have two instances of Terminal running (one elevated and one not), we get two _identical_ icons in the task bar. This can be a slight inconvenience when Terminal is set to minimize to tray, as often, it requires that we activate both windows to determine which is the one with which we actually want to interact:

An improvement could be as simple as a change of color, or a small shield badge to differentiate the two icons. Or even better, (if it's even possible) consolidate the icons into one, and allow the user to select the desired window from a context menu.
# Proposed technical implementation details (optional)
| Issue-Feature,Help Wanted,Area-UserInterface,Product-Terminal | low | Critical |
2,586,734,159 | deno | Support `workspace:` dependencies with an alias in a package.json | Version: Deno 2.0.0
I thought I'd try out the new Deno 2.0 in my project, where I have configured the workspace with local package aliasing like the following, which I have been using with pnpm.
```json
"eslint-config-standard-kf": "workspace:eslint-config-standard-kf@latest",
```
But it did not work as expected, throwing the following error in the console:
```
error: Failed to install from package.json
Caused by:
0: Invalid version requirement
1: Unexpected character.
eslint-config-standard-kf@latest
~
```
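For comparison, a sketch of the two specifier forms in `package.json` (as I understand the current behavior - the plain form resolves, while the pnpm-style aliased form is the one that errors):

```jsonc
{
  "dependencies": {
    // plain workspace specifier - resolves in Deno 2.0:
    "eslint-config-standard-kf": "workspace:*",
    // pnpm-style aliased specifier - the form that currently fails:
    "some-alias": "workspace:eslint-config-standard-kf@latest"
  }
}
```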
| bug,workspaces | low | Critical |
2,586,761,790 | godot | Concurent threaded Instantiating of nodes from same PackedScene causes crash: "Trying to unreference a SafeRefCount" | ### Tested versions
- 4.3 https://github.com/godotengine/godot/commit/d40fc50f086d14583c7cc979ed4e5363ac223717
-
### System information
Win 10
### Issue description
I have 2 threads using one PackedScene resource that try to instantiate scenes at the same time.
Upd: the PackedScene doesn't have to be the same object; it can be different object instances, but they have to be loaded from the same scene path.
That causes:
```
CRASH_COND_MSG(count.get() == 0,
"Trying to unreference a SafeRefCount which is already zero is wrong and a symptom of it being misused.\n"
"Upon a SafeRefCount reaching zero any object whose lifetime is tied to it, as well as the ref count itself, must be destroyed.\n"
"Moreover, to guarantee that, no multiple threads should be racing to do the final unreferencing to zero.");
```
Callstack
```
godot.windows.editor.dev.x86_64.exe!SafeRefCount::_check_unref_safety() Line 187 (..\Godot\core\templates\safe_refcount.h:187)
godot.windows.editor.dev.x86_64.exe!SafeRefCount::unrefval() Line 214 (..\Godot\core\templates\safe_refcount.h:214)
godot.windows.editor.dev.x86_64.exe!RefCounted::unreference() Line 77 (..\Godot\core\object\ref_counted.cpp:77)
godot.windows.editor.dev.x86_64.exe!Ref<Resource>::unref() Line 209 (..\Godot\core\object\ref_counted.h:209)
godot.windows.editor.dev.x86_64.exe!Ref<Resource>::~Ref<Resource>() Line 223 (..\Godot\core\object\ref_counted.h:223)
godot.windows.editor.dev.x86_64.exe!KeyValue<Ref<Resource>,Ref<Resource>>::~KeyValue<Ref<Resource>,Ref<Resource>>() (Unknown Source:0)
godot.windows.editor.dev.x86_64.exe!HashMapElement<Ref<Resource>,Ref<Resource>>::~HashMapElement<Ref<Resource>,Ref<Resource>>() (Unknown Source:0)
godot.windows.editor.dev.x86_64.exe!HashMapElement<Ref<Resource>,Ref<Resource>>::`scalar deleting destructor'(unsigned int) (Unknown Source:0)
godot.windows.editor.dev.x86_64.exe!memdelete<HashMapElement<Ref<Resource>,Ref<Resource>>>(HashMapElement<Ref<Resource>,Ref<Resource>> * p_class) Line 116 (..\Godot\core\os\memory.h:116)
godot.windows.editor.dev.x86_64.exe!DefaultTypedAllocator<HashMapElement<Ref<Resource>,Ref<Resource>>>::delete_allocation(HashMapElement<Ref<Resource>,Ref<Resource>> * p_allocation) Line 221 (..\Godot\core\os\memory.h:221)
godot.windows.editor.dev.x86_64.exe!HashMap<Ref<Resource>,Ref<Resource>,HashMapHasherDefault,HashMapComparatorDefault<Ref<Resource>>,DefaultTypedAllocator<HashMapElement<Ref<Resource>,Ref<Resource>>>>::clear() Line 266 (..\Godot\core\templates\hash_map.h:266)
godot.windows.editor.dev.x86_64.exe!HashMap<Ref<Resource>,Ref<Resource>,HashMapHasherDefault,HashMapComparatorDefault<Ref<Resource>>,DefaultTypedAllocator<HashMapElement<Ref<Resource>,Ref<Resource>>>>::~HashMap<Ref<Resource>,Ref<Resource>,HashMapHasherDefault,HashMapComparatorDefault<Ref<Resource>>,DefaultTypedAllocator<HashMapElement<Ref<Resource>,Ref<Resource>>>>() Line 616 (..\Godot\core\templates\hash_map.h:616)
godot.windows.editor.dev.x86_64.exe!SceneState::instantiate(SceneState::GenEditState p_edit_state) Line 607 (..\Godot\scene\resources\packed_scene.cpp:607)
godot.windows.editor.dev.x86_64.exe!PackedScene::instantiate(PackedScene::GenEditState p_edit_state) Line 2093 (..\Godot\scene\resources\packed_scene.cpp:2093)
godot.windows.editor.dev.x86_64.exe!SceneState::instantiate(SceneState::GenEditState p_edit_state) Line 232 (..\Godot\scene\resources\packed_scene.cpp:232)
godot.windows.editor.dev.x86_64.exe!PackedScene::instantiate(PackedScene::GenEditState p_edit_state) Line 2093 (..\Godot\scene\resources\packed_scene.cpp:2093)
godot.windows.editor.dev.x86_64.exe!SceneState::instantiate(SceneState::GenEditState p_edit_state) Line 199 (..\Godot\scene\resources\packed_scene.cpp:199)
godot.windows.editor.dev.x86_64.exe!PackedScene::instantiate(PackedScene::GenEditState p_edit_state) Line 2093 (..\Godot\scene\resources\packed_scene.cpp:2093)
godot.windows.editor.dev.x86_64.exe!call_with_ptr_args_retc_helper<PackedScene,Node *,enum PackedScene::GenEditState,0>(PackedScene * p_instance, Node *(const PackedScene::*)(PackedScene::GenEditState) p_method, const void * * p_args, void * r_ret, IndexSequence<0> __formal) Line 340 (..\Godot\core\variant\binder_common.h:340)
godot.windows.editor.dev.x86_64.exe!call_with_ptr_args_retc<PackedScene,Node *,enum PackedScene::GenEditState>(PackedScene * p_instance, Node *(const PackedScene::*)(PackedScene::GenEditState) p_method, const void * * p_args, void * r_ret) Line 588 (..\Godot\core\variant\binder_common.h:588)
godot.windows.editor.dev.x86_64.exe!MethodBindTRC<PackedScene,Node *,enum PackedScene::GenEditState>::ptrcall(Object * p_object, const void * * p_args, void * r_ret) Line 641 (..\Godot\core\object\method_bind.h:641)
godot.windows.editor.dev.x86_64.exe!gdextension_object_method_bind_ptrcall(const void * p_method_bind, void * p_instance, const void * const * p_args, void * p_ret) Line 1222 (..\Godot\core\extension\gdextension_interface.cpp:1222)
```
### Steps to reproduce
Have one PackedScene shared between threads. Instantiate it until a crash.
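A hypothetical minimal reproduction (untested sketch; the scene path is illustrative):

```gdscript
extends Node

# Two threads instantiating from PackedScenes loaded from the same path,
# looping until the SafeRefCount assertion fires.
var scene_a := load("res://some_scene.tscn") as PackedScene
var scene_b := load("res://some_scene.tscn") as PackedScene
var _threads: Array[Thread] = []

func _worker(scene: PackedScene) -> void:
	while true:
		scene.instantiate().free()

func _ready() -> void:
	for s in [scene_a, scene_b]:
		var t := Thread.new()
		t.start(_worker.bind(s))
		_threads.append(t)  # keep the Thread objects referenced
```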
### Minimal reproduction project (MRP)
N/A | bug,topic:core | low | Critical |
2,586,781,066 | ollama | Hugging Face Idefics3 | There is a new Idefics3 model on Hugginf Face, based on Llama3:
https://huggingface.co/docs/transformers/main/en/model_doc/idefics3#idefics3
Any chance you can add this to Ollama? | model request | low | Minor |
2,586,814,848 | godot | Lightmap bake crash at the beginning of the baking | ### Tested versions
v4.4.dev.custom_build [708acdf1d]
### System information
Ubuntu 22.04
Edit: Nvidia 2060 6gb VRAM
### Issue description
Trying to bake lighting in the TPS demo produces this error at the beginning of the bake. Previous builds had no problem baking.
```
handle_crash: Program crashed with signal 4
Engine version: Godot Engine v4.4.dev.custom_build (708acdf1d440d9dcc7daa1fc5a457f1a2e125181)
Dumping the backtrace. Please include this when reporting the bug to the project developer.
[1] /lib/x86_64-linux-gnu/libc.so.6(+0x42990) [0x73ae1d042990] (??:0)
[2] RenderingDeviceDriverVulkan::command_queue_execute_and_present(RenderingDeviceDriver::CommandQueueID, VectorView<RenderingDeviceDriver::SemaphoreID>, VectorView<RenderingDeviceDriver::CommandBufferID>, VectorView<RenderingDeviceDriver::SemaphoreID>, RenderingDeviceDriver::FenceID, VectorView<RenderingDeviceDriver::SwapChainID>) (/media/juan/NTFS1/dev/godot/drivers/vulkan/rendering_device_driver_vulkan.cpp:2456 (discriminator 2))
[3] RenderingDevice::_execute_frame(bool) (/media/juan/NTFS1/dev/godot/servers/rendering/rendering_device.cpp:5868 (discriminator 3))
[4] RenderingDevice::submit() (/media/juan/NTFS1/dev/godot/servers/rendering/rendering_device.cpp:5646)
[5] LightmapperRD::bake(Lightmapper::BakeQuality, bool, float, int, int, float, float, int, bool, bool, Lightmapper::GenerateProbes, Ref<Image> const&, Basis const&, bool (*)(float, String const&, void*, bool), void*, float) (/media/juan/NTFS1/dev/godot/modules/lightmapper_rd/lightmapper_rd.cpp:1734)
[6] LightmapGI::bake(Node*, String, bool (*)(float, String const&, void*, bool), void*) (/media/juan/NTFS1/dev/godot/scene/3d/lightmap_gi.cpp:1118 (discriminator 1))
[7] LightmapGIEditorPlugin::_bake_select_file(String const&) (/media/juan/NTFS1/dev/godot/editor/plugins/lightmap_gi_editor_plugin.cpp:71 (discriminator 2))
[8] LightmapGIEditorPlugin::_bake() (/media/juan/NTFS1/dev/godot/editor/plugins/lightmap_gi_editor_plugin.cpp:123 (discriminator 2))
[9] void call_with_variant_args_helper<__UnexistingClass>(__UnexistingClass*, void (__UnexistingClass::*)(), Variant const**, Callable::CallError&, IndexSequence<>) (/media/juan/NTFS1/dev/godot/./core/variant/binder_common.h:309)
[10] void call_with_variant_args_dv<__UnexistingClass>(__UnexistingClass*, void (__UnexistingClass::*)(), Variant const**, int, Callable::CallError&, Vector<Variant> const&) (/media/juan/NTFS1/dev/godot/./core/variant/binder_common.h:452)
[11] MethodBindT<>::call(Object*, Variant const**, int, Callable::CallError&) const (/media/juan/NTFS1/dev/godot/./core/object/method_bind.h:345 (discriminator 1))
[12] Object::callp(StringName const&, Variant const**, int, Callable::CallError&) (/media/juan/NTFS1/dev/godot/core/object/object.cpp:813 (discriminator 1))
[13] Callable::callp(Variant const**, int, Variant&, Callable::CallError&) const (/media/juan/NTFS1/dev/godot/core/variant/callable.cpp:69 (discriminator 1))
[14] Object::emit_signalp(StringName const&, Variant const**, int) (/media/juan/NTFS1/dev/godot/core/object/object.cpp:1201)
[15] Node::emit_signalp(StringName const&, Variant const**, int) (/media/juan/NTFS1/dev/godot/scene/main/node.cpp:3974)
[16] Error Object::emit_signal<>(StringName const&) (/media/juan/NTFS1/dev/godot/./core/object/object.h:920)
[17] BaseButton::_pressed() (/media/juan/NTFS1/dev/godot/scene/gui/base_button.cpp:139)
[18] BaseButton::on_action_event(Ref<InputEvent>) (/media/juan/NTFS1/dev/godot/scene/gui/base_button.cpp:179)
[19] BaseButton::gui_input(Ref<InputEvent> const&) (/media/juan/NTFS1/dev/godot/scene/gui/base_button.cpp:69 (discriminator 2))
[20] Control::_call_gui_input(Ref<InputEvent> const&) (/media/juan/NTFS1/dev/godot/scene/gui/control.cpp:1823)
[21] Viewport::_gui_call_input(Control*, Ref<InputEvent> const&) (/media/juan/NTFS1/dev/godot/scene/main/viewport.cpp:1576)
[22] Viewport::_gui_input_event(Ref<InputEvent>) (/media/juan/NTFS1/dev/godot/scene/main/viewport.cpp:1837 (discriminator 2))
[23] Viewport::push_input(Ref<InputEvent> const&, bool) (/media/juan/NTFS1/dev/godot/scene/main/viewport.cpp:3176 (discriminator 2))
[24] Window::_window_input(Ref<InputEvent> const&) (/media/juan/NTFS1/dev/godot/scene/main/window.cpp:1680)
[25] void call_with_variant_args_helper<Window, Ref<InputEvent> const&, 0ul>(Window*, void (Window::*)(Ref<InputEvent> const&), Variant const**, Callable::CallError&, IndexSequence<0ul>) (/media/juan/NTFS1/dev/godot/./core/variant/binder_common.h:304 (discriminator 2))
[26] void call_with_variant_args<Window, Ref<InputEvent> const&>(Window*, void (Window::*)(Ref<InputEvent> const&), Variant const**, int, Callable::CallError&) (/media/juan/NTFS1/dev/godot/./core/variant/binder_common.h:419)
[27] CallableCustomMethodPointer<Window, void, Ref<InputEvent> const&>::call(Variant const**, int, Variant&, Callable::CallError&) const (/media/juan/NTFS1/dev/godot/./core/object/callable_method_pointer.h:111)
[28] Callable::callp(Variant const**, int, Variant&, Callable::CallError&) const (/media/juan/NTFS1/dev/godot/core/variant/callable.cpp:57)
[29] Variant Callable::call<Ref<InputEvent> >(Ref<InputEvent>) const (/media/juan/NTFS1/dev/godot/./core/variant/variant.h:893)
[30] DisplayServerX11::_dispatch_input_event(Ref<InputEvent> const&) (/media/juan/NTFS1/dev/godot/platform/linuxbsd/x11/display_server_x11.cpp:4063 (discriminator 2))
[31] DisplayServerX11::_dispatch_input_events(Ref<InputEvent> const&) (/media/juan/NTFS1/dev/godot/platform/linuxbsd/x11/display_server_x11.cpp:4040)
[32] Input::_parse_input_event_impl(Ref<InputEvent> const&, bool) (/media/juan/NTFS1/dev/godot/core/input/input.cpp:803)
[33] Input::flush_buffered_events() (/media/juan/NTFS1/dev/godot/core/input/input.cpp:1084)
[34] DisplayServerX11::process_events() (/media/juan/NTFS1/dev/godot/platform/linuxbsd/x11/display_server_x11.cpp:5200)
[35] OS_LinuxBSD::run() (/media/juan/NTFS1/dev/godot/platform/linuxbsd/os_linuxbsd.cpp:960)
[36] /media/juan/NTFS1/dev/godot/bin/godot.linuxbsd.editor.dev.x86_64(main+0x190) [0x60232bcce1e9] (/media/juan/NTFS1/dev/godot/platform/linuxbsd/godot_linuxbsd.cpp:85)
[37] /lib/x86_64-linux-gnu/libc.so.6(+0x28150) [0x73ae1d028150] (??:0)
[38] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x89) [0x73ae1d028209] (??:0)
[39] /media/juan/NTFS1/dev/godot/bin/godot.linuxbsd.editor.dev.x86_64(_start+0x25) [0x60232bccdf95] (??:?)
-- END OF BACKTRACE --
================================================================
```
### Steps to reproduce
Bake Lightmaps with 5 bounces, low quality, 1.5 texel density
### Minimal reproduction project (MRP)
TPSDemo | bug,topic:rendering,needs testing,crash,regression,topic:3d | low | Critical |
2,586,816,244 | Python | Feature Request: Add LSTM Algorithm to Neural Network Algorithms | ### Feature description
Add LSTM Algorithm to Neural Network Algorithms
**Feature Description:**
I would like to propose adding an LSTM (Long Short-Term Memory) algorithm to the existing neural network algorithms in the repository. LSTMs are a type of recurrent neural network (RNN) that excel in handling sequential and time-series data, making them particularly valuable for tasks such as language modeling, text generation, and time-series forecasting.
**Proposed Improvements:**
1. **Implementation of LSTM**: Develop a comprehensive LSTM class that includes essential functionalities such as:
- Forward propagation through LSTM layers.
- Backpropagation through time (BPTT) for training.
- Methods for saving and loading the model.
- Support for various activation functions (sigmoid, tanh, softmax).
2. **Example Usage**: Include example usage code demonstrating how to train the LSTM on a dataset, such as predicting the next character in Shakespeare's text.
3. **Documentation**: Provide detailed documentation on the LSTM algorithm's implementation, explaining its structure, hyperparameters, and training process.
4. **Unit Tests**: Implement unit tests to ensure the correctness and robustness of the LSTM functionality.
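As a minimal, illustrative sketch (pure Python, all names invented here, not a proposed final API), the forward step from item 1 could look like:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class LSTMCell:
    """One LSTM cell over plain Python lists; illustrative, not optimized."""

    def __init__(self, input_size, hidden_size, seed=0):
        rng = random.Random(seed)
        n = input_size + hidden_size

        def mat():
            return [[rng.uniform(-0.1, 0.1) for _ in range(n)]
                    for _ in range(hidden_size)]

        # One weight matrix and bias vector per gate: forget, input, output,
        # and the tanh candidate ("g").
        self.W = {gate: mat() for gate in "fiog"}
        self.b = {gate: [0.0] * hidden_size for gate in "fiog"}

    def step(self, x, h, c):
        z = h + x  # concatenated [h_prev, x_t]

        def gate(name, act):
            return [act(sum(w * v for w, v in zip(row, z)) + bias)
                    for row, bias in zip(self.W[name], self.b[name])]

        f = gate("f", sigmoid)    # forget gate
        i = gate("i", sigmoid)    # input gate
        o = gate("o", sigmoid)    # output gate
        g = gate("g", math.tanh)  # candidate cell update
        c_new = [fj * cj + ij * gj for fj, ij, gj, cj in zip(f, i, g, c)]
        h_new = [oj * math.tanh(cj) for oj, cj in zip(o, c_new)]
        return h_new, c_new

cell = LSTMCell(input_size=3, hidden_size=4)
h, c = [0.0] * 4, [0.0] * 4
for x in ([0.5, -0.1, 0.3], [0.2, 0.4, -0.6]):
    h, c = cell.step(x, h, c)
```

A full contribution would add BPTT, batching, save/load, and tests on top of this forward pass.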
**Rationale:**
Adding LSTM capabilities will enhance the versatility of the neural network algorithms available in this repository, allowing users to tackle a wider range of problems involving sequential data. Given the growing importance of time-series analysis and natural language processing, this addition would significantly benefit the community.
| enhancement | low | Minor |
2,586,822,857 | godot | HTC XR Elite does not load the Android editor in XR mode | ### Tested versions
- Reproducible https://github.com/godotengine/godot/commit/db66bd35af704fe0d83ba9348b8c50a48e51b2ba
### System information
HTC XR Elite
### Issue description
The HTC XR Elite does not load the editor in XR. The editor works in "flat" editor mode.
See also Bastian's commentary on how to implement this in the manifests: https://github.com/godotengine/godot/issues/97907
Split from https://github.com/godotengine/godot/issues/97907
### Steps to reproduce
1. Download the Android editor build from GitHub Actions
2. Notice that there is no HTC Vive XR Elite build
3. The XR Elite can run the Android flat build
4. The XR Elite fails in Android OpenXR
### Minimal reproduction project (MRP)
https://github.com/user-attachments/files/17271320/demo-bug.zip | bug,platform:android,topic:editor,topic:porting,topic:xr | low | Critical |
2,586,849,313 | opencv | Cannot use cv2.VideoCapture on macOS Sequoia 15.0.1 | ### System Information
OpenCV version: 4.10.0
Operating System: macOS 15.0.1
Python Version: 3.11.10
### Detailed description
If you try to call cv2.VideoCapture on a Mac running Sequoia 15.0.1, you are presented with the following error and are not able to access the camera:
AVCaptureDeviceTypeExternal is deprecated for Continuity Cameras. Please use AVCaptureDeviceTypeContinuityCamera and add NSCameraUseContinuityCameraDeviceType to your Info.plist.
### Steps to reproduce
webcam_video_stream = cv2.VideoCapture(0)
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,platform: ios/osx | low | Critical |
2,586,850,780 | flutter | Windows arm64 bots in devicelab are restarting during tests | For example: https://ci.chromium.org/ui/p/dart/builders/ci.sandbox/vm-win-debug-arm64/1115/infra
per @aam the system logs say:
```
The process C:\WINDOWS\SYSTEM32\shutdown.exe (FLUTTER-WIN-19) has initiated the restart of computer FLUTTER-WIN-19 on behalf of user flutter-win-19\swarming for the following reason: No title for this reason could be found
Reason Code: 0x800000ff
Shutdown Type: restart
Comment:
```
And the reboots seem to coincide with
```
Successfully scheduled Software Protection service for re-start... Reason: RulesEngine
```
in Windows logs. | team-infra,P1,triaged-infra | medium | Critical |
2,586,888,420 | godot | Crash calling function on null pointer of already freed GDExtension | ### Tested versions
4.2.2, 4.3-stable, 4.3 6699ae7897658e44efc3cfb2cba91c11a8f5aa6a
Master untested but the problem is visible in the code.
godot-cpp 4.2-cherrypicks-7
### System information
Windows 11/64, RTX 3070, Vulkan
### Issue description
I believe there are two bugs here:
1. Somewhere in the GDExtension `Dictionary[String] -> TypedArray<GDExtensionResource>` handling, the GDExtension resource isn't properly freed when the dictionary is reassigned to a new pointer with `dict = Dictionary();`. That triggers the engine to go down an unexpected code path on cleanup, triggering the second bug:
2. In the engine, the ObjectDB::cleanup() code and the Resource::GDCLASS macro assume `_extension` is a null pointer. However, under the above circumstances (and maybe others) this assumption is wrong, and the engine attempts to call `is_class("Node")` on a GDExtension resource whose extension has already been freed. The call through this dangling pointer causes a crash.
The second problem is here in this sequence and call stack:
```
register_core_types.cpp::unregister_core_types() {
...
memdelete(gdextension_manager);
...
ObjectDB::cleanup(); // crash
}
ObjectDB::cleanup() {
...
if (OS::get_singleton()->is_stdout_verbose()) {
...
if (obj->is_class("Node")) // crash
}
Resource::GDClass macro extended inline
virtual bool is_class(const String &p_class) const override {
if (_get_extension() &&
_get_extension()->is_class(p_class)) { // crash
return true;
}
return (p_class == (#m_class)) ? true : m_inherits::is_class(p_class);
}
```
In the Resource macro, _get_extension() returns a non-null pointer to already freed memory. It was freed in unregister_core_types(). Since it's freed, it's very difficult to figure out what object was being queried, but with other debugging I've determined this situation is a Terrain3DResource. So, Godot freed the library, _extension is non-null and invalid, and then the Resource macro attempts to call a function through the dangling pointer, resulting in the crash.
Deleting gdextensions before everything is shut down has caused more than one issue. Related: https://github.com/godotengine/godot/issues/95310. The fix for that didn't solve the fundamental problem.


-------
Here's more information about this. It may be a gdextension bug.
Using godot-cpp 4.2-cherrypicks-7
* The object it's crashing on querying is a Terrain3DRegion, which is just a basic custom resource with only native data types. A region stores our data and image maps for a geographical section of the terrain. https://github.com/TokisanGames/Terrain3D/blob/main/src/terrain_3d_region.h
* The engine crashes in this code block https://github.com/godotengine/godot/blob/4.3/core/object/object.cpp#L2287-L2305
* If --verbose is specified (but also when I run it in my msvc debugger) this code runs. Terrain3D has already been unloaded at this point. The Terrain3DRegion object still has a pointer to it in _extension. is_class() runs the Resource::GDCLASS macro, which calls a function through the dangling pointer.
* Of note, whichever objects that are triggered in these conditions, `object_slots[i].validator` is true. The instances haven't been cleared yet. Not all of my gdextension resources or even Terrain3DRegions fall under these conditions. I can load up regions from disk and not have an issue. The conditions are set only once we start editing data, which means region duplicates, EditorUndoRedoManager and so on.
* I have cleared the undo history on exit before the offending code runs but it still crashes.
* Testing each component, I've determined the cause for the conditions comes down to this:
```c++
void Terrain3DEditor::start_operation(const Vector3 &p_global_position) {
//_undo_data = Dictionary(); // ** causes latent crash **
_undo_data.clear(); // workaround
```
We've been using the first line to get a new pointer and make our backup work properly. This alone sets up the conditions for a crash. If I replace the first with the second line it no longer crashes.
We also have this other bit of code where `_undo_data` is used that can also be adjusted to prevent the crash:
```c++
void Terrain3DEditor::_store_undo() {
...
Dictionary redo_data;
_undo_data["edited_regions"] = _original_regions; // ** causes latent crash **
redo_data["edited_regions"] = _edited_regions;
```
I can `return` right before the _undo_data line or comment it out and it won't crash. If I return right after or leave it uncommented, it will crash.
`_original_regions` and `_edited_regions` are both `TypedArray<Terrain3DRegion>`.
`_undo_data` and `redo_data` are both Dictionaries. They store essentially the same type of data. The only fundamental differences between them are the first one is a class member, and it gets reset by assignment.
* I was able to fix the problem on our end by using `_undo_data.clear()` in the first block. And use `_undo_data.duplicate()` in the second block where we pass the data to the undo/redo manager.
### Steps to reproduce
* Download the artifact below to a new project.
* Open and restart twice.
* Run the demo in a debug engine build or in a debugger.
* Click Terrain3D, then the foliage brush and start painting. Cover a few 32x32 areas. Keep it loaded at least 5 seconds.
On quit it should crash in Resource, on the GDClass macro.
### Minimal reproduction project (MRP)
Artifact https://github.com/TokisanGames/Terrain3D/actions/runs/11317731831 | bug,topic:gdextension,crash | low | Critical |
2,586,889,229 | langchain | ChatPromptTemplate doesn't accept PDF data as bytes | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
pdf_data = base64.b64encode(httpx.get("https://dagrs.berkeley.edu/sites/default/files/2020-01/sample.pdf").content).decode("utf-8")
prompt = ChatPromptTemplate([
("system", "You are a helpful assistant "),
("human", [
{"type": "media", "mime_type": "application/pdf", "data": pdf_data},
{"type": "text", "text": "{user_input}"}
])
])
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[4], [line 12](vscode-notebook-cell:?execution_count=4&line=12)
[1](vscode-notebook-cell:?execution_count=4&line=1) model = ChatGoogleGenerativeAI(
[2](vscode-notebook-cell:?execution_count=4&line=2) model="gemini-1.5-flash",
[3](vscode-notebook-cell:?execution_count=4&line=3) temperature=0.1,
(...)
[6](vscode-notebook-cell:?execution_count=4&line=6) max_retries=2
[7](vscode-notebook-cell:?execution_count=4&line=7) )
[9](vscode-notebook-cell:?execution_count=4&line=9) pdf_data = base64.b64encode(
[10](vscode-notebook-cell:?execution_count=4&line=10) httpx.get("http://www.dagrs.berkley.edu/sites/default/files/2020-01/sample.pdf").content).decode("utf-8")
---> [12](vscode-notebook-cell:?execution_count=4&line=12) prompt = ChatPromptTemplate([
[13](vscode-notebook-cell:?execution_count=4&line=13) ("system", "You are a helpful assistant "),
[14](vscode-notebook-cell:?execution_count=4&line=14) ("human", [
[15](vscode-notebook-cell:?execution_count=4&line=15) {"type": "media", "mime_type": "application/pdf", "data": pdf_data},
[16](vscode-notebook-cell:?execution_count=4&line=16) {"type": "text", "text": "{user_input}"}
[17](vscode-notebook-cell:?execution_count=4&line=17) ])
[18](vscode-notebook-cell:?execution_count=4&line=18) ])
File c:\Users\alan-\miniconda3\Lib\site-packages\langchain_core\prompts\chat.py:992, in ChatPromptTemplate.__init__(self, messages, template_format, **kwargs)
[938](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:938) def __init__(
[939](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:939) self,
[940](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:940) messages: Sequence[MessageLikeRepresentation],
(...)
[943](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:943) **kwargs: Any,
[944](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:944) ) -> None:
[945](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:945) """Create a chat prompt template from a variety of message formats.
[946](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:946)
[947](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:947) Args:
(...)
[990](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:990)
[991](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:991) """
--> [992](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:992) _messages = [
[993](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:993) _convert_to_message(message, template_format) for message in messages
[994](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:994) ]
[996](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:996) # Automatically infer input variables from messages
[997](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:997) input_vars: set[str] = set()
File c:\Users\alan-\miniconda3\Lib\site-packages\langchain_core\prompts\chat.py:993, in <listcomp>(.0)
[938](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:938) def __init__(
[939](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:939) self,
[940](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:940) messages: Sequence[MessageLikeRepresentation],
(...)
[943](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:943) **kwargs: Any,
[944](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:944) ) -> None:
[945](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:945) """Create a chat prompt template from a variety of message formats.
[946](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:946)
[947](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:947) Args:
(...)
[990](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:990)
[991](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:991) """
[992](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:992) _messages = [
--> [993](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:993) _convert_to_message(message, template_format) for message in messages
[994](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:994) ]
[996](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:996) # Automatically infer input variables from messages
[997](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:997) input_vars: set[str] = set()
File c:\Users\alan-\miniconda3\Lib\site-packages\langchain_core\prompts\chat.py:1454, in _convert_to_message(message, template_format)
[1452](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1452) message_type_str, template = message
[1453](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1453) if isinstance(message_type_str, str):
-> [1454](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1454) _message = _create_template_from_message_type(
[1455](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1455) message_type_str, template, template_format=template_format
[1456](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1456) )
[1457](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1457) else:
[1458](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1458) _message = message_type_str(
[1459](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1459) prompt=PromptTemplate.from_template(
[1460](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1460) cast(str, template), template_format=template_format
[1461](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1461) )
[1462](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1462) )
File c:\Users\alan-\miniconda3\Lib\site-packages\langchain_core\prompts\chat.py:1365, in _create_template_from_message_type(message_type, template, template_format)
[1351](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1351) """Create a message prompt template from a message type and template string.
[1352](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1352)
[1353](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1353) Args:
(...)
[1362](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1362) ValueError: If unexpected message type.
[1363](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1363) """
[1364](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1364) if message_type in ("human", "user"):
-> [1365](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1365) message: BaseMessagePromptTemplate = HumanMessagePromptTemplate.from_template(
[1366](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1366) template, template_format=template_format
[1367](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1367) )
[1368](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1368) elif message_type in ("ai", "assistant"):
[1369](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1369) message = AIMessagePromptTemplate.from_template(
[1370](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1370) cast(str, template), template_format=template_format
[1371](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:1371) )
File c:\Users\alan-\miniconda3\Lib\site-packages\langchain_core\prompts\chat.py:565, in _StringImageMessagePromptTemplate.from_template(cls, template, template_format, partial_variables, **kwargs)
[563](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:563) prompt.append(img_template_obj)
[564](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:564) else:
--> [565](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:565) raise ValueError(f"Invalid template: {tmpl}")
[566](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:566) return cls(prompt=prompt, **kwargs)
[567](file:///C:/Users/alan-/miniconda3/Lib/site-packages/langchain_core/prompts/chat.py:567) else:
ValueError: Invalid template: {'type': 'media', 'mime_type': 'application/pdf', 'data':
```
### Description
I'm trying to send a PDF file as bytes to Gemini using the ChatPromptTemplate method.
Sending the PDF to Gemini seems to work fine as per [#4589](https://github.com/BerriAI/litellm/issues/4589) and [#215](https://github.com/langchain-ai/langchain-google/issues/215)
But when using the ChatPromptTemplate, it appears that the code [here](https://github.com/langchain-ai/langchain/blob/8dc4bec9477e25df5149e68124eb19c3ec2494d0/libs/core/langchain_core/prompts/chat.py#L532) is not ready to deal with PDF-type content in the prompt template.
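For illustration, a stripped-down, hypothetical re-creation of that dispatch shows why a `media` block falls through to the catch-all `ValueError` (this is not the actual LangChain source):

```python
def build_prompt_parts(template):
    # Hypothetical simplification of the template dispatch: only plain
    # strings, "text" dicts, and "image_url" dicts are recognized, so a
    # {"type": "media", ...} PDF block hits the catch-all ValueError.
    parts = []
    for tmpl in template:
        if isinstance(tmpl, str) or (isinstance(tmpl, dict) and tmpl.get("type") == "text"):
            parts.append(("text", tmpl))
        elif isinstance(tmpl, dict) and tmpl.get("type") == "image_url":
            parts.append(("image", tmpl))
        else:
            raise ValueError(f"Invalid template: {tmpl}")
    return parts

ok = build_prompt_parts([{"type": "text", "text": "{user_input}"}])
try:
    build_prompt_parts([{"type": "media", "mime_type": "application/pdf", "data": "..."}])
    media_accepted = True
except ValueError:
    media_accepted = False
```

Supporting PDFs would presumably mean teaching this dispatch about an additional content type rather than changing callers.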
### System Info
langchain==0.3.2
langchain-community==0.3.1
langchain-core==0.3.9
langchain-google-genai==2.0.0
langchain-text-splitters==0.3.0
langgraph==0.2.34
langgraph-checkpoint==2.0.0
langsmith==0.1.131 | 🤖:bug | low | Critical |
2,586,910,930 | pytorch | TD skips relevant tests | ### 🐛 Describe the bug
I expect commit https://github.com/pytorch/pytorch/pull/137899/commits/03015dd53503a7acdc8ea7d4690a9d7e4e982ac3 of https://github.com/pytorch/pytorch/pull/137899 to fail while running `test_dtypes_special_i1e_cpu`, but the CI signal is green, as the target determinator decided this test is irrelevant
### Versions
CI
cc @seemethere @pytorch/pytorch-dev-infra | module: ci,triaged,module: correctness (silent) | low | Critical |
2,586,944,194 | godot | WebXR: Objects rendered transparent when in immersive-ar on Android | ### Tested versions
- Reproducible in Godot 4.3 (stable) and Godot 4.4.dev3
### System information
Sony Xperia 10 III - Android 13 - Compatibility - Google Chrome Version 129.0.6668.100 (latest as of time of writing)
### Issue description
I try to create a minimal WebXR AR application that shows 3D models to the user, blended with the real world environment. To do so, I followed the instructions on the [tutorial in the docs](https://docs.godotengine.org/en/stable/tutorials/xr/index.html#basic-tutorial) especially the [AR / Passthrough](https://docs.godotengine.org/en/stable/tutorials/xr/ar_passthrough.html), as well as David Snopke's tutorials.
However, when I run the app and enter AR, objects are rendered transparent, the closer their color is to black. (Reading the WebXR standard and checking the environment blend modes, it looks like the rendering behaves as "additive", used for see-through devices like the HoloLens. However, it should behave like the environment blend mode "alpha-blend", i.e. use the alpha channel to determine transparency.) The blend mode reported by WebXR, however, is in fact "alpha-blend", which to my knowledge is correct.
Actual outcome:

Expected outcome (hacked together in Gimp):

Since I'm fairly new to WebXR in Godot, it may also be me doing something wrong :upside_down_face: However, I tried many different things, settings, renderers and configurations suggested in different places and always get the same result. I also talked to @dsnopek at GodotCon 24 (shamelessly tagging you here, hope you have had / will have a safe trip back home :) ) about this and he mentioned it _could_ be a regression. I searched the closed issues and this sounds kinda related to me: https://github.com/godotengine/godot/issues/75581
### Steps to reproduce
I have hosted the MRP at github pages on https://www.simonkerler.de/godot-webxr-mrp/dist/ :
- Open the link with Chrome on your smartphone
- Hit the "Enter AR" button
- Wait for the scene being blended on the camera pass-through
- See that the boxes are not opaque but transparent
If you run the MRP from within the editor, don't forget to expose the web server to 0.0.0.0 and turn on HTTPS to make it accessible from your smartphone.
### Minimal reproduction project (MRP)
I created an MRP here: https://github.com/namelessvoid/godot-webxr-mrp
Just open it in the editor. There is a single scene, containing four boxes of different color, world environment, lighting and WebXROrigin with an WebXRCamera. | bug,platform:web,topic:xr | low | Major |
2,586,960,086 | godot | "Run instances" uses default main scene instead of main scenes defined by feature tags | ### Tested versions
- v4.4.dev3.mono
- v4.4.dev3
### System information
Godot v4.4.dev3.mono - Windows 10.0.19045 - Single-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce RTX 4060 Ti (NVIDIA; 32.0.15.6094) - AMD Ryzen 7 7800X3D 8-Core Processor (16 threads)
### Issue description
Instances defined in "Run instances" use the default main scene instead of the main scenes defined by feature tags, even when those feature tags are applied to the instances.
### Steps to reproduce
* Define export targets with custom feature tags
* Add project settings overrides for `application/run/main_scene` for each feature tag
* Add respective feature tags to each instance in `Debug > Customize Run Instances...`
* Run in editor
* See both instances using the default main scene
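For reference, the per-feature-tag overrides from step 2 look roughly like this in `project.godot` (the `client`/`server` tag names here are illustrative):

```ini
[application]

run/main_scene="res://default_main.tscn"
; Feature-tag overrides, used by exports/instances tagged "client" or "server"
run/main_scene.client="res://client_main.tscn"
run/main_scene.server="res://server_main.tscn"
```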
### Minimal reproduction project (MRP)
[testfeaturescenes.zip](https://github.com/user-attachments/files/17369118/testfeaturescenes.zip)
| bug,discussion,topic:editor | low | Critical |
2,586,994,132 | rust | contextless autocompletions for rustdoc search | recently i've come up with a "stateful prefix search" algorithm that allows fast prefix-based autocompletions from a pool of identifiers.
this algorithm should be fast enough to be responsive even on old devices.
this would provide an autocompletion popover with ~6 possibilities for identifiers.
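as a rough illustration (not the stateful prefix-search algorithm itself), prefix completion over a sorted identifier pool can be sketched with a binary search:

```python
import bisect

def complete(pool, prefix, limit=6):
    # pool must be sorted; find the contiguous run of entries sharing prefix,
    # then cap the result at `limit` suggestions for the popover.
    lo = bisect.bisect_left(pool, prefix)
    hi = bisect.bisect_right(pool, prefix + "\uffff")
    return pool[lo:min(hi, lo + limit)]

# illustrative pool: crate names, search-index identifiers, kind filters
pool = sorted(["std", "struct", "string", "str", "static", "syn", "serde", "fn"])
suggestions = complete(pool, "st")
```

a stateful version would keep the (lo, hi) cursor between keystrokes so each typed character narrows the previous range instead of re-searching the whole pool.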
these identifiers would be built from all crate names in the documentation workspace, all identifiers in the search index, and all rustdoc kind filters. | C-enhancement,A-rustdoc-search,T-rustdoc-frontend | low | Minor |
2,587,000,095 | tauri | [bug] Scaling Tauri webview window on windows 11 | ### Describe the bug
On Windows 11, the Tauri webview window size is not the same as defined in the config.
Size in tauri config: 715x505
When I run the app on my client's Windows 11 PC, the Tauri window has a size of around 590x400. It looks like a problem with scaling on that machine. When running the app on Windows 10 (a different PC) it works fine. On macOS it works fine.
On Windows I'm running an application built on GH Actions (target windows-latest).
I'm not sure if that's the reason, but I found that there was a bug in Windows WebView2 -> https://github.com/MicrosoftEdge/WebView2Feedback/issues/1700
Currently my workaround is checking the size of the React container and resizing the window if the size is less than defined in the config.
I tried:
- changing the scale on Windows 11 - still does not work
- changing the resolution - does not work
- disconnecting the external monitor and running on the laptop's screen - does not work
### Reproduction
Run application with window size 715x505 on windows 11.
### Expected behavior
The Tauri window should be the same size as defined in tauri.conf.json
### Full `tauri info` output
```text
[⚠] Environment
- OS: Mac OS 14.3.0 X64
✔ Xcode Command Line Tools: installed
✔ rustc: 1.80.1 (3f5fd8dd4 2024-08-06) (Homebrew)
✔ cargo: 1.80.1
⚠ rustup: not installed!
If you have rust installed some other way, we recommend uninstalling it
then use rustup instead. Visit https://rustup.rs/
⚠ Rust toolchain: couldn't be detected!
Maybe you don't have rustup installed? if so, Visit https://rustup.rs/
- node: 20.10.0
- pnpm: 9.1.1
- yarn: 1.22.22
- npm: 10.2.3
[-] Packages
- tauri [RUST]: 1.6.6
- tauri-build [RUST]: 1.5.2
- wry [RUST]: 0.24.10
- tao [RUST]: 0.16.9
- @tauri-apps/api [NPM]: 1.5.6 (outdated, latest: 2.0.2)
- @tauri-apps/cli [NPM]: 1.5.14 (outdated, latest: 1.6.3)
[-] App
- build-type: bundle
- CSP: unset
- distDir: ../dist
- devPath: http://localhost:1420/
- framework: React
- bundler: Vite
```
### Stack trace
_No response_
### Additional context
Discord support post: https://discord.com/channels/616186924390023171/1294718226109956160 | type: bug,platform: Windows,status: needs triage | low | Critical |
2,587,033,306 | godot | `Control::set_offsets_preset` ignores custom minimum size of the control | ### Tested versions
v4.3.stable.mono.official [77dcf97d8]
### System information
archlinux
### Issue description
[Relevant code](https://github.com/godotengine/godot/blob/af77100e394dcaca609b15bef815ed17475e51ed/scene/gui/control.cpp#L1191)
In the method linked above, the `get_minimum_size` method is used instead of the combined minimum size, resulting in an erroneous offset state after the method call. Other methods that call this method with a preset mode other than `PRESET_MODE_KEEP_SIZE`, notably `set_anchors_and_offsets_preset`, may also be affected by this issue.
### Steps to reproduce
Try to call the method on control with custom minimum size.
### Minimal reproduction project (MRP)
[offset-thing.zip](https://github.com/user-attachments/files/17369146/offset-thing.zip)
Simple side-by-side setup showing two rectangles being programmatically aligned with a `set_anchors_preset` and `set_offsets_preset` call on `_ready`. (This is essentially identical to calling `set_anchors_and_offsets_preset`.) The white areas show the intended positions of the rectangles being aligned.
The red rectangle has a `custom_minimum_size` of (100, 100) and a default minimum size; the blue rectangle has no custom minimum size, but has a minimum size of (100, 100) defined in code. The blue one produces the desired result, while the red one behaves oddly.

| bug,topic:gui | low | Minor |
2,587,068,877 | bitcoin | wallet: rpc: `settxfee` sets the wallet feerate not fee | The wallet RPC `settxfee` sets the fee rate for a wallet.
Current help text:
```
Set the transaction fee rate in BTC/kvB for this wallet. Overrides the global -paytxfee command line parameter.
Can be deactivated by passing 0 as the fee. In that case automatic fee selection will be used by default.
Arguments:
1. amount (numeric or string, required) The transaction fee rate in BTC/kvB
Result:
true|false (boolean) Returns true if successful
```
This is a misnomer, as stated in https://github.com/bitcoin/bitcoin/pull/29278#discussion_r1526664705, so it should instead be `setfeerate`.
@jonatack suggested a safer approach to avoid breaking things
(see: https://github.com/bitcoin/bitcoin/pull/20484#issuecomment-734786305). I think this is a better approach than simply renaming the `settxfee` RPC to `setfeerate`:
- Add `setfeerate` RPC which is a mirror of `settxfee` but in `sat/vB`.
- Keep `settxfee` hidden, but prefer the `setfeerate` RPC in future use.
- Eventually deprecate `settxfee`.
This issue is limited to fixing the ambiguity in `settxfee`.
| Wallet,RPC/REST/ZMQ | low | Minor |
2,587,114,883 | flutter | [ios][engine]add integration tests for platform view touches | The platform view touch gesture has pretty complex logic but it has no integration test.
A few possible ideas:
- able to enable/disable some/all gestures on platform view
- GestureDetector able to get the touch callbacks
- if flutter side consumes all the touches, platform view shouldn't receive any touches
- if flutter side consumes some gestures (e.g. swipe up/down), platform view should receive remaining touches (pinch to scale)
- if flutter side doesn't consume touches, then platform view should handle them
See context: https://github.com/flutter/engine/pull/55724
| platform-ios,a: platform-views,P2,team-ios,triaged-ios | low | Minor |
2,587,117,924 | terminal | [1.23] PatternTree crash after closing two panes in one tab | ### Windows Terminal version
Canary 2851
### Windows build number
_No response_
### Other Software
_No response_
### Steps to reproduce
Dunno - closed two shells inside panes in the same tab with `^D`
### Expected Behavior
_No response_
### Actual Behavior
```
Microsoft.Terminal.Control.dll!Microsoft::Console::Render::Renderer::TriggerRedraw(const Microsoft::Console::Types::Viewport & region) Line 262 C++
Microsoft.Terminal.Control.dll!Microsoft::Terminal::Core::Terminal::_InvalidateFromCoords(const til::point start, const til::point end) Line 808 C++
[Inline Frame] Microsoft.Terminal.Control.dll!Microsoft::Terminal::Core::Terminal::_InvalidatePatternTree::__l2::<lambda_1>::operator()(const interval_tree::Interval<til::point,unsigned __int64> &) Line 774 C++
> [Inline Frame] Microsoft.Terminal.Control.dll!std::for_each(std::_Vector_const_iterator<std::_Vector_val<std::_Simple_types<interval_tree::Interval<til::point,unsigned __int64>>>>) Line 435 C++
Microsoft.Terminal.Control.dll!interval_tree::IntervalTree<til::point,unsigned __int64>::visit_all<`Microsoft::Terminal::Core::Terminal::_InvalidatePatternTree'::`2'::<lambda_1>>(Microsoft::Terminal::Core::Terminal::_InvalidatePatternTree::__l2::<lambda_1> f) Line 308 C++
[Inline Frame] Microsoft.Terminal.Control.dll!Microsoft::Terminal::Core::Terminal::_InvalidatePatternTree() Line 771 C++
Microsoft.Terminal.Control.dll!Microsoft::Terminal::Core::Terminal::UpdatePatternsUnderLock() Line 1192 C++
Microsoft.Terminal.Control.dll!winrt::Microsoft::Terminal::Control::implementation::ControlCore::_setupDispatcherAndCallbacks::__l2::<lambda_1>::operator()() Line 200 C++
[Inline Frame] Microsoft.Terminal.Control.dll!std::_Func_class<void>::operator()() Line 920 C++
[Inline Frame] Microsoft.Terminal.Control.dll!std::invoke(std::function<void __cdecl(void)> &) Line 1704 C++
[Inline Frame] Microsoft.Terminal.Control.dll!std::_Apply_impl(std::function<void __cdecl(void)> &) Line 1076 C++
[Inline Frame] Microsoft.Terminal.Control.dll!std::apply(std::function<void __cdecl(void)> &) Line 1087 C++
[Inline Frame] Microsoft.Terminal.Control.dll!til::throttled_func<1,0>::_trailing_edge() Line 196 C++
Microsoft.Terminal.Control.dll!til::throttled_func<1,0>::_timer_callback(_TP_CALLBACK_INSTANCE * __formal, void * context, _TP_TIMER * __formal) Line 184 C++
``` | Area-Output,Issue-Bug,Severity-Crash,Product-Terminal | low | Critical |
2,587,148,584 | flutter | Change alignment of last item in CarouselView.weighted | ### Use case
b/372578986
For example, if `flexWeights` is [3, 2, 1] and `consumeMaxWeight` is false, the last child can never reach its full size, which would consume weight 3.
### Proposal
When the last item is in "preview" (with weight 1) and we continue to scroll, we should support a reversed layout so the last item can show its "full size" (with weight 3).
https://github.com/user-attachments/assets/2ef0ff31-122e-483e-a2e3-1686ed5871a7 | f: material design,P2,team-design,triaged-design | low | Minor |
2,587,157,684 | vscode | Configurable fast scroll to keys other than Alt |
Currently, fast scrolling is only possible with the Alt key. There should be an option to use other keys, such as Ctrl, for fast scrolling, ideally as a configurable keybinding.
[Reference to the hardcoded use of Alt](https://github.com/microsoft/vscode/blob/e13f1fdf43892980a5b463a2cdc12befa5e4b70c/src/vs/base/browser/ui/scrollbar/scrollableElement.ts#L434)
```typescript
if (e.browserEvent && e.browserEvent.altKey) {
// fastScrolling
deltaX = deltaX * this._options.fastScrollSensitivity;
deltaY = deltaY * this._options.fastScrollSensitivity;
}
``` | feature-request,editor-scrollbar | low | Major |
2,587,188,050 | deno | Renaming repo root folder makes installed npm packages unfindable | Version: Deno 2.0.0
`deno init` a repo, then `deno install` at least one npm package into the repo that is used in `main.ts`.
`deno run main.ts` works.
`cd` out of the folder, rename the folder, then `cd` into the folder again.
`deno run main.ts` now yields
error: Could not find "zod" in a node_modules folder. Deno expects the node_modules/ directory to be up to date. Did you forget to run `deno install`?
Executed `deno install`. No feedback is given in the CLI.
Rerunning main gives the same error.
Current workaround: delete folder `node_modules` and then run `deno install`
| needs investigation | low | Critical |
2,587,192,203 | godot | CSG collision is broken | ### Tested versions
-tested in 4.3 stable, 4.2.2 stable, partially tested in 4.4 dev 3
### System information
Windows 10
### Issue description
CSG collision gives all sorts of inconsistent results, or no results at all: collision tests starting inside CSG shapes return no result with either a ray cast or a shape cast, whether using the nodes or doing it from code. Collision tests starting from outside CSG shapes work with the RayCast and ShapeCast nodes, but not from code. Replace the CSG with a static mesh box and everything works fine.
BTW I don't know what you guys did to Godot, but I blue screened 5 times while trying to test this in 4.3 and 4.4, so I'm not able to fully test all combinations of CSG shapes and physics tests any further due to the blue screens. I'm still on 4.2.2 because of regressions in 4.3 and onward.
### Steps to reproduce
Open the minimal reproduction project and hit run, see the printed results. Move the CSG shape in the -z direction by 2 meters and hit run, see the results. Replace the CSG with a staticmesh box at the origin and hit run, see the results.
### Minimal reproduction project (MRP)
[csgcollisiontest.zip](https://github.com/user-attachments/files/17369944/csgcollisiontest.zip)
| topic:core,topic:physics | low | Critical |
2,587,204,841 | vscode | Copy Relative Path With Line Number | Requesting to revisit - https://github.com/microsoft/vscode/pull/118509 | feature-request,editor-clipboard | low | Minor |
2,587,219,739 | rust | Tracking Issue for `const_sockaddr_setters` | Feature gate: `#![feature(const_sockaddr_setters)]`
This is a tracking issue for using the `set_ip` and `set_port` methods on `SocketAddr` types in `const` contexts.
### Public API
```rust
// core::net
impl SocketAddr {
pub const fn set_ip(&mut self, new_ip: IpAddr);
pub const fn set_port(&mut self, new_port: u16);
}
impl SocketAddrV4 {
pub const fn set_ip(&mut self, new_ip: Ipv4Addr);
pub const fn set_port(&mut self, new_port: u16);
}
impl SocketAddrV6 {
pub const fn set_ip(&mut self, new_ip: Ipv6Addr);
pub const fn set_port(&mut self, new_port: u16);
}
```
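For context, these setters are already stable at runtime; the feature only tracks making the same calls legal in `const` contexts. A quick runtime sketch of the semantics:

```rust
use std::net::{IpAddr, Ipv4Addr, SocketAddr};

fn main() {
    // These calls are stable at runtime today; this feature tracks making
    // them callable from `const fn` and `const` items as well.
    let mut addr = SocketAddr::new(IpAddr::V4(Ipv4Addr::LOCALHOST), 8080);
    addr.set_port(9000);
    addr.set_ip(IpAddr::V4(Ipv4Addr::new(192, 168, 0, 1)));
    assert_eq!(addr.to_string(), "192.168.0.1:9000");
}
```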
### Steps / History
- [x] Implementation: https://github.com/rust-lang/rust/pull/131715
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
### Unresolved Questions
- None yet.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,C-tracking-issue | low | Minor |
2,587,250,550 | kubernetes | [FG:InPlacePodVerticalScaling] Add UpdatePodSandboxResources CRI method | See https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/1287-in-place-update-pod-resources/README.md#cri-changes for more detail
/kind feature
/sig node
/milestone v1.32
/priority important-longterm
/triage accepted | sig/node,kind/feature,priority/important-longterm,triage/accepted | low | Major |
2,587,253,597 | kubernetes | [FG:InPlacePodVerticalScaling] Implement resize for sidecar containers | Resize of sidecar containers should work the same as resize of regular containers. Resize of non-restartable init containers is still not allowed.
/kind feature
/sig node
/priority important-longterm
/milestone v1.32
/triage accepted | sig/node,kind/feature,priority/important-longterm,triage/accepted | low | Major |
2,587,265,656 | kubernetes | [FG:InPlacePodVerticalScaling] Add kubelet_resize_requests_total metric | See https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/1287-in-place-update-pod-resources/README.md#instrumentation for details.
The KEP uses the name `kubelet_container_resize_requests_total`, but I think we should drop the `container` part, since it's measured at the pod level. KEP should be updated to match this.
/kind feature
/sig node instrumentation
/priority important-longterm
/milestone v1.32
/triage accepted | sig/node,kind/feature,sig/instrumentation,priority/important-longterm,triage/accepted | low | Minor |
2,587,270,753 | pytorch | Bazel builds intermittently fail while trying to download mkl from anaconda %) | ### 🐛 Describe the bug
See this [log](https://github.com/pytorch/pytorch/actions/runs/11334452938/job/31520699526) for example:
```
2024-10-14T20:31:02.4058502Z /var/lib/jenkins/.cache/bazel/_bazel_jenkins/fdf6d09bf4b4f04a71e2a7dfceb40620/external/bazel_tools/tools/build_defs/repo/http.bzl:372:31: in <toplevel>
2024-10-14T20:31:02.4653140Z [35mWARNING: [0m/var/lib/jenkins/.cache/bazel/_bazel_jenkins/fdf6d09bf4b4f04a71e2a7dfceb40620/external/cpp-httplib/BUILD.bazel:3:11: in includes attribute of cc_library rule @cpp-httplib//:cpp-httplib: ignoring invalid absolute path '/'. Since this rule was created by the macro 'cc_library', the error might have been caused by the macro implementation
2024-10-14T20:31:15.8188119Z [32mAnalyzing:[0m 31 targets (84 packages loaded, 11640 targets configured)
2024-10-14T20:31:15.8260044Z [32mINFO: [0mRepository mkl instantiated at:
2024-10-14T20:31:15.8260663Z /var/lib/jenkins/workspace/WORKSPACE:183:13: in <toplevel>
2024-10-14T20:31:15.8261241Z Repository rule http_archive defined at:
2024-10-14T20:31:15.8262300Z /var/lib/jenkins/.cache/bazel/_bazel_jenkins/fdf6d09bf4b4f04a71e2a7dfceb40620/external/bazel_tools/tools/build_defs/repo/http.bzl:372:31: in <toplevel>
2024-10-14T20:31:15.8285868Z [35mWARNING: [0mDownload from https://anaconda.org/anaconda/mkl/2020.0/download/linux-64/mkl-2020.0-166.tar.bz2 failed: class java.io.IOException GET returned 503 Service Unavailable
2024-10-14T20:31:15.8333905Z [31m[1mERROR: [0mAn error occurred during the fetch of repository 'mkl':
2024-10-14T20:31:15.8334558Z Traceback (most recent call last):
2024-10-14T20:31:15.8335790Z File "/var/lib/jenkins/.cache/bazel/_bazel_jenkins/fdf6d09bf4b4f04a71e2a7dfceb40620/external/bazel_tools/tools/build_defs/repo/http.bzl", line 132, column 45, in _http_archive_impl
2024-10-14T20:31:15.8337024Z download_info = ctx.download_and_extract(
```
Which comes from those lines
https://github.com/pytorch/pytorch/blob/aef3591998f4e46f7ade3914e6ad758619954672/WORKSPACE#L183-L191
### Versions
CI
cc @seemethere @pytorch/pytorch-dev-infra | module: ci,triaged | low | Critical |
2,587,280,330 | pytorch | associative scan is incorrect for certain shapes/kwargs | I got this repro from noticing some test failures locally (details here: https://github.com/pytorch/pytorch/pull/136670#issuecomment-2412515964)
```
import torch
from torch._higher_order_ops.associative_scan import associative_scan
def f(a):
return associative_scan(lambda x, y: x + y, a, dim=1, reverse=True, combine_mode="generic")
a = torch.arange(18, dtype=torch.float32, device='cuda').reshape(2, 9)
# I got this output from running with `_fake_associative_scan` defined here:
# https://github.com/pytorch/pytorch/blob/main/test/functorch/test_control_flow.py#L1579
expected_out = torch.tensor([
[ 36., 36., 35., 33., 30., 26., 21., 15., 8.],
[117., 108., 98., 87., 75., 62., 48., 33., 17.],
], dtype=torch.float32, device='cuda')
out = f(a)
print(torch.allclose(out, expected_out))
# output is:
# tensor([[ 36., 15., 35., 0., 30., 0., 21., 0., 8.],
#         [117., 33., 98., 36., 75., 33., 48., 26., 17.]],
#        device='cuda:0')
```
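For reference, with `reverse=True` along `dim=1` an add-scan is just a reversed inclusive cumulative sum, which is exactly what `expected_out` above contains. A plain-Python sketch of that reference semantics:

```python
def reverse_cumsum_rows(rows):
    """Reversed inclusive scan (addition) along each row, i.e. dim=1."""
    out = []
    for row in rows:
        acc = 0.0
        scanned = []
        for x in reversed(row):
            acc += x
            scanned.append(acc)
        out.append(scanned[::-1])
    return out

a = [list(map(float, range(0, 9))), list(map(float, range(9, 18)))]
assert reverse_cumsum_rows(a) == [
    [36.0, 36.0, 35.0, 33.0, 30.0, 26.0, 21.0, 15.0, 8.0],
    [117.0, 108.0, 98.0, 87.0, 75.0, 62.0, 48.0, 33.0, 17.0],
]
```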
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @chauhang @penguinwu @ydwu4 @yf225 | high priority,triaged,oncall: pt2,module: higher order operators | low | Critical |
2,587,326,300 | node | `parallel/test-runner-output` is flaky | ### Test
`parallel/test-runner-output`
### Platform
Windows
### Console output
```console
not ok 2761 parallel/test-runner-output
---
duration_ms: 5831.75900
severity: fail
exitcode: 1
stack: |-
▶ test runner output
✔ test-runner/output/abort.js (5044.243602ms)
✔ test-runner/output/abort-runs-after-hook.js (5024.467414ms)
✔ test-runner/output/abort_suite.js (5000.590488ms)
✔ test-runner/output/abort_hooks.js (4979.5575ms)
✔ test-runner/output/describe_it.js (4977.140656ms)
✔ test-runner/output/describe_nested.js (4925.394205ms)
✔ test-runner/output/eval_dot.js (4901.098336ms)
✔ test-runner/output/eval_spec.js (4837.353ms)
✔ test-runner/output/eval_tap.js (4710.94876ms)
✔ test-runner/output/filtered-suite-delayed-build.js (4681.152445ms)
✔ test-runner/output/filtered-suite-order.mjs (4642.805511ms)
✔ test-runner/output/filtered-suite-throws.js (4612.421303ms)
✔ test-runner/output/hooks.js (4622.774072ms)
✔ test-runner/output/hooks_spec_reporter.js (4589.467662ms)
✔ test-runner/output/skip-each-hooks.js (4526.797983ms)
✔ test-runner/output/suite-skip-hooks.js (4502.179664ms)
✔ test-runner/output/timeout_in_before_each_should_not_affect_further_tests.js (4422.895785ms)
✔ test-runner/output/hooks-with-no-global-test.js (4306.540227ms)
✔ test-runner/output/global-hooks-with-no-tests.js (4279.560221ms)
✔ test-runner/output/before-and-after-each-too-many-listeners.js (4201.921211ms)
✔ test-runner/output/before-and-after-each-with-timeout-too-many-listeners.js (4158.995317ms)
✔ test-runner/output/force_exit.js (4130.514523ms)
✔ test-runner/output/global_after_should_fail_the_test.js (4101.715736ms)
✔ test-runner/output/no_refs.js (4050.275822ms)
✔ test-runner/output/no_tests.js (3976.924663ms)
✔ test-runner/output/only_tests.js (3872.183748ms)
✔ test-runner/output/dot_reporter.js (3810.210886ms)
✔ test-runner/output/junit_reporter.js (3816.323072ms)
✔ test-runner/output/spec_reporter_successful.js (3710.15109ms)
✔ test-runner/output/spec_reporter.js (3722.466172ms)
✔ test-runner/output/spec_reporter_cli.js (3607.488084ms)
✔ test-runner/output/source_mapped_locations.mjs (3511.73783ms)
✔ test-runner/output/lcov_reporter.js (3515.362787ms)
✔ test-runner/output/output.js (3432.210248ms)
✔ test-runner/output/output_cli.js (3385.594322ms)
✔ test-runner/output/name_and_skip_patterns.js (3116.385549ms)
✔ test-runner/output/name_pattern.js (3012.778989ms)
✔ test-runner/output/name_pattern_with_only.js (2944.807975ms)
✔ test-runner/output/skip_pattern.js (2479.866823ms)
✔ test-runner/output/unfinished-suite-async-error.js (2393.255322ms)
✔ test-runner/output/unresolved_promise.js (2214.530445ms)
✖ test-runner/output/default_output.js (2081.418465ms)
AssertionError [ERR_ASSERTION]: Expected values to be strictly equal:
+ actual - expected ... Lines skipped
'[32m✔ should pass [90m(*ms)[39m[39m\n' +
'[31m✖ should fail [90m(*ms)[39m[39m\n' +
...
' *[39m\n' +
' *[39m\n' +
+ ' [90m at async startSubtestAfterBootstrap (node:internal/test_runn'
- ' *[39m\n' +
- '\n' +
- '*\n' +
- '[31m✖ should fail [90m(*ms)[39m[39m\n' +
- ' Error: fail\n' +
- ' *[39m\n' +
- ' *[39m\n' +
- ' *[39m\n' +
- ' *[39m\n' +
- ' *[39m\n' +
- ' *[39m\n' +
- ' *[39m\n' +
- ' *[39m\n' +
- ' *[39m\n' +
- ' *[39m\n' +
- '\n' +
- '*\n' +
- '[31m✖ should pass but parent fail [90m(*ms)[39m[39m\n' +
- " [32m'test did not finish before its parent and was cancelled'[39m\n"
at assertSnapshot (/home/iojs/build/workspace/node-test-commit-aix/nodes/aix72-ppc64/test/common/assertSnapshot.js:56:12)
at async Module.spawnAndAssert (/home/iojs/build/workspace/node-test-commit-aix/nodes/aix72-ppc64/test/common/assertSnapshot.js:91:3)
at async TestContext.<anonymous> (file:///home/iojs/build/workspace/node-test-commit-aix/nodes/aix72-ppc64/test/parallel/test-runner-output.mjs:286:5)
at async Test.run (node:internal/test_runner/test:935:9)
at async Promise.all (index 41)
at async Suite.run (node:internal/test_runner/test:1320:7)
at async startSubtestAfterBootstrap (node:internal/test_runner/harness:297:3) {
generatedMessage: true,
code: 'ERR_ASSERTION',
actual: '[32m✔ should pass [90m(*ms)[39m[39m\n[31m✖ should fail [90m(*ms)[39m[39m\n Error: fail\n *[39m\n *[39m\n *[39m\n *[39m\n *[39m\n *[39m\n *[39m\n...',
expected: '[32m✔ should pass [90m(*ms)[39m[39m\n[31m✖ should fail [90m(*ms)[39m[39m\n Error: fail\n *[39m\n *[39m\n *[39m\n *[39m\n *[39m\n *[39m\n *[39m\n...',
operator: 'strictEqual'
}
✔ test-runner/output/arbitrary-output.js (1878.087741ms)
✔ test-runner/output/async-test-scheduling.mjs (1841.915158ms)
✔ test-runner/output/arbitrary-output-colored.js (2090.36852ms)
✔ test-runner/output/dot_output_custom_columns.js (1552.294798ms)
✔ test-runner/output/tap_escape.js (1461.577271ms)
✔ test-runner/output/test-runner-plan.js (1421.101897ms)
✔ test-runner/output/coverage_failure.js (1200.580922ms)
✔ test-runner/output/test-diagnostic-warning-without-test-only-flag.js (1083.959424ms)
✔ test-runner/output/coverage-width-40.mjs (1122.226143ms)
✔ test-runner/output/coverage-width-80.mjs (888.268393ms)
✔ test-runner/output/coverage-width-100.mjs (849.100504ms)
✔ test-runner/output/coverage-width-150.mjs (881.08359ms)
✔ test-runner/output/coverage-width-infinity.mjs (722.609533ms)
✔ test-runner/output/coverage-width-80-uncovered-lines.mjs (758.820816ms)
✔ test-runner/output/coverage-width-100-uncovered-lines.mjs (611.19314ms)
✔ test-runner/output/coverage-width-150-uncovered-lines.mjs (692.291311ms)
✔ test-runner/output/coverage-width-infinity-uncovered-lines.mjs (617.418721ms)
✖ test runner output (5372.954414ms)
ℹ tests 59
ℹ suites 1
ℹ pass 58
ℹ fail 1
ℹ cancelled 0
ℹ skipped 0
ℹ todo 0
ℹ duration_ms 5392.306818
✖ failing tests:
test at test/parallel/test-runner-output.mjs:295:5
✖ test-runner/output/default_output.js (2081.418465ms)
AssertionError [ERR_ASSERTION]: Expected values to be strictly equal:
+ actual - expected ... Lines skipped
'[32m✔ should pass [90m(*ms)[39m[39m\n' +
'[31m✖ should fail [90m(*ms)[39m[39m\n' +
...
' *[39m\n' +
' *[39m\n' +
+ ' [90m at async startSubtestAfterBootstrap (node:internal/test_runn'
- ' *[39m\n' +
- '\n' +
- '*\n' +
- '[31m✖ should fail [90m(*ms)[39m[39m\n' +
- ' Error: fail\n' +
- ' *[39m\n' +
- ' *[39m\n' +
- ' *[39m\n' +
- ' *[39m\n' +
- ' *[39m\n' +
- ' *[39m\n' +
- ' *[39m\n' +
- ' *[39m\n' +
- ' *[39m\n' +
- ' *[39m\n' +
- '\n' +
- '*\n' +
- '[31m✖ should pass but parent fail [90m(*ms)[39m[39m\n' +
- " [32m'test did not finish before its parent and was cancelled'[39m\n"
at assertSnapshot (/home/iojs/build/workspace/node-test-commit-aix/nodes/aix72-ppc64/test/common/assertSnapshot.js:56:12)
at async Module.spawnAndAssert (/home/iojs/build/workspace/node-test-commit-aix/nodes/aix72-ppc64/test/common/assertSnapshot.js:91:3)
at async TestContext.<anonymous> (file:///home/iojs/build/workspace/node-test-commit-aix/nodes/aix72-ppc64/test/parallel/test-runner-output.mjs:286:5)
at async Test.run (node:internal/test_runner/test:935:9)
at async Promise.all (index 41)
at async Suite.run (node:internal/test_runner/test:1320:7)
at async startSubtestAfterBootstrap (node:internal/test_runner/harness:297:3) {
generatedMessage: true,
code: 'ERR_ASSERTION',
actual: '[32m✔ should pass [90m(*ms)[39m[39m\n[31m✖ should fail [90m(*ms)[39m[39m\n Error: fail\n *[39m\n *[39m\n *[39m\n *[39m\n *[39m\n *[39m\n *[39m\n...',
expected: '[32m✔ should pass [90m(*ms)[39m[39m\n[31m✖ should fail [90m(*ms)[39m[39m\n Error: fail\n *[39m\n *[39m\n *[39m\n *[39m\n *[39m\n *[39m\n *[39m\n...',
operator: 'strictEqual'
}
...
```
### Build links
- https://ci.nodejs.org/job/node-test-binary-windows-js-suites/30580/RUN_SUBSET=0,nodes=win11-arm64-COMPILED_BY-vs2022-arm64/testReport/junit/(root)/parallel/test_runner_coverage/
### Additional information
I feel this is likely an issue with `replaceStackTrace` in `assertSnapshot`, since `startSubtestAfterBootstrap` is logged... | windows,flaky-test,test_runner | low | Critical |
2,587,332,337 | TypeScript | Debug Failure. No error for last overload signature | ### 🔎 Search Terms
Debug Failure. No error for last overload signature
possibly related to
* https://github.com/microsoft/TypeScript/issues/60202
* https://github.com/microsoft/TypeScript/issues/55217
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, from 4.5.5 on through 5.7.0-dev.20241014. In 4.4 there's a different error, `RangeError: Maximum call stack size exceeded`.
### ⏯ Playground Link
[Playground link](https://www.typescriptlang.org/play/?ts=5.7.0-beta#code/PTAEDEEsCcGcBdQGMD2AHAnqFAzUBDUAG0gCNp9oMAoeDNAU1AGEUBbNFAOwa-gBEGOSF0jxI3AKIAPTtHiwAPABUAfKAC81UKGWgG0+LwAmsFu048+g4aPHdFInA2igZcheoD8b2SnlmAFygPABuLgDc1NRIRPiwZgCSXAj4REQMxqwc3LzwijYiYhJc+oYmZtmWeYV2JYr4XBiq6gDe2qAA5gyIBh6wABQAlMFVudZCRfZc7v4KBZN13G0dOtA9AK7QpVwb6QRmjRhROgC+1Oe09ExjVgKLxQ4A+rMB6hqg7TobsAwLto9SgYjFxTOYcndaoCGk0WgNVqBjA9psEodMOiNQMlUulMrc8v8piVVFFTlFqDgNlwkNNEYsGPi+MNRhZxvcAdMYc1Ph11vAtjs9kQDgQmiLGeyiQ4jiSLtFUClEPhOp11p18EZNHTbAzWXdhuSQFi2MqRExGih4AALFzIdBYXCga1MEjkSg0K6McHVCYckqvBQAJhU7w6emBFW9bLRJWDTltAdg3l8-VAwTCkXlcQSWMVaQyWT1eWDMe4ZRBYIlpa4wZlKx03V6fgCzKjkOR-ubQcJSy49Z0oD5ApCQpFRxOoHOlzoXqrHe4wZeXaTWq+oB+f2r5cjc79C65cIRSL3XFR864GOC2Pg+bxRb4JfPsrJ0T6cyd11AAEE0Ghq4HVw6Dce0BbdQUqe9JV7WtYVUeEB21KVT1AatL1zHECwlR8T2fck33kUBKWpWljzNH80EDVtyP-Hk1k2bYR32eJRWOOUYm4BACF-LVSJ4cjKKGKJ8F-AA6DcBmVVUGHVIxBOoIA)
### 💻 Code
```ts
// First copy of a library
type ComponentDefinitionExports<T> =
T extends ComponentDefinition<infer Exports> ? Exports : never;
class InstalledComponent<Definition extends ComponentDefinition<any>> {
get exports(): ComponentDefinitionExports<Definition> {
return null as any;
}
}
type ComponentDefinition<_Exports> = {
use<Definition extends ComponentDefinition<any>>(
definition: Definition
): InstalledComponent<Definition>;
};
function defineComponent(): ComponentDefinition<any> {
return null as any as ComponentDefinition<any>;
}
const aggregate = defineComponent();
// Imagine another copy of the library
type ComponentDefinitionExports2<T> =
T extends ComponentDefinition2<infer Exports> ? Exports : never;
class InstalledComponent2<Definition extends ComponentDefinition2<any>> {
get exports(): ComponentDefinitionExports2<Definition> {
return null as any;
}
}
type ComponentDefinition2<_Exports> = {
use<Definition extends ComponentDefinition2<any>>(
definition: Definition
): InstalledComponent2<Definition>;
};
export type AppDefinition2 = {
use<Definition extends ComponentDefinition2<any>>(
definition: Definition
): InstalledComponent2<Definition>;
};
export function defineApp2(): AppDefinition2 {
return null as any;
}
const app = defineApp2();
app.use(aggregate);
```
### 🙁 Actual behavior
tsc exits with error code 1 and writes `Debug Failure. No error for last overload signature` to stderr:
```
/Users/tomb/aggregate/node_modules/typescript/lib/tsc.js:113984
throw e;
^
Error: Debug Failure. No error for last overload signature
at resolveCall (/Users/tomb/aggregate/node_modules/typescript/lib/tsc.js:69834:19)
at resolveCallExpression (/Users/tomb/aggregate/node_modules/typescript/lib/tsc.js:70216:12)
at resolveSignature (/Users/tomb/aggregate/node_modules/typescript/lib/tsc.js:70599:16)
at getResolvedSignature (/Users/tomb/aggregate/node_modules/typescript/lib/tsc.js:70619:20)
at checkCallExpression (/Users/tomb/aggregate/node_modules/typescript/lib/tsc.js:70728:23)
at checkExpressionWorker (/Users/tomb/aggregate/node_modules/typescript/lib/tsc.js:73819:16)
at checkExpression (/Users/tomb/aggregate/node_modules/typescript/lib/tsc.js:73729:32)
at checkExpressionStatement (/Users/tomb/aggregate/node_modules/typescript/lib/tsc.js:76310:5)
at checkSourceElementWorker (/Users/tomb/aggregate/node_modules/typescript/lib/tsc.js:79237:16)
at checkSourceElement (/Users/tomb/aggregate/node_modules/typescript/lib/tsc.js:79097:7)
```
In the playground,
```
Uncaught (in promise) Error: Debug Failure. No error for last overload signature
at wR (tsWorker.js:341:318135)
...
```
### 🙂 Expected behavior
I'm not sure. I'd like this to pass typechecking.
### Additional information about the issue
I tried to minimize this but I imagine this is not the minimal repro. Please give this a better title :) | Bug | low | Critical |
2,587,373,766 | pytorch | DISABLED test_fsdp_unsupported_module_cls (__main__.TestFSDPMiscMultiThread) | Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_fsdp_unsupported_module_cls&suite=TestFSDPMiscMultiThread&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/31522119419).
Over the past 3 hours, it has been determined flaky in 5 workflow(s) with 5 failures and 5 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_fsdp_unsupported_module_cls`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1004, in wrapper
self._join_threads(self.threads, fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1135, in _join_threads
cls._check_return_codes(failed_ranks, timeout, fn)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1172, in _check_return_codes
raise RuntimeError(error_msg)
RuntimeError: Thread 1 exited with exception:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1098, in run_test_with_threaded_pg
getattr(self, test_name)()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 1006, in wrapper
fn()
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 2983, in wrapper
method(*args, **kwargs)
File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_distributed.py", line 184, in wrapper
return func(*args, **kwargs)
File "/var/lib/jenkins/workspace/test/distributed/fsdp/test_fsdp_misc.py", line 978, in test_fsdp_unsupported_module_cls
with self.assertWarnsRegex(UserWarning, regex):
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 295, in __exit__
self._raiseFailure("{} not triggered".format(exc_name))
File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure
raise self.test_case.failureException(msg)
AssertionError: UserWarning not triggered
To execute this test, run the following from the base repo dir:
python test/distributed/fsdp/test_fsdp_misc.py TestFSDPMiscMultiThread.test_fsdp_unsupported_module_cls
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
```
</details>
Test file path: `distributed/fsdp/test_fsdp_misc.py`
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @clee2000 | oncall: distributed,triaged,module: flaky-tests,skipped | low | Critical |
2,587,377,374 | rust | Tracking Issue for `const_eq_ignore_ascii_case` | Feature gate: `#![feature(const_eq_ignore_ascii_case)]`
This is a tracking issue for const `eq_ignore_ascii_case` on `[u8]` and `str`.
### Public API
```rust
impl [u8] {
pub const fn eq_ignore_ascii_case(&self, other: &[u8]) -> bool;
}
impl str {
pub const fn eq_ignore_ascii_case(&self, other: &str) -> bool;
}
```
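For reference, the runtime behavior that the const versions mirror is already stable and can be exercised like this:

```rust
fn main() {
    // ASCII-case-insensitive comparison on `str` and `[u8]`; this feature
    // only tracks making the same calls usable in `const` contexts.
    assert!("Ferris".eq_ignore_ascii_case("FERRIS"));
    assert!(b"Ferris".eq_ignore_ascii_case(b"FERRIS"));
    assert!(!"Ferris".eq_ignore_ascii_case("Ferrus"));
}
```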
### Steps / History
- [x] Implementation: #131721
- [ ] Final comment period (FCP)[^1]
- [ ] Stabilization PR
### Unresolved Questions
- None yet.
[^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
| T-libs-api,C-tracking-issue | low | Minor |
2,587,384,433 | rust | `--extern mycrate=path/to/my/crate/with/random.suffix` fails with "file name should be lib*.rlib or lib*.so" | macro.rs:
```rust
extern crate proc_macro;
use proc_macro::TokenStream;
#[proc_macro_attribute]
pub fn your_macro(_attr: TokenStream, item: TokenStream) -> TokenStream {
// Macro implementation
item
}
```
main.rs:
```rust
extern crate macros;
#[macros::your_macro]
fn main() {
println!("Hello, world!");
}
```
This works:
```shell
rustc --crate-type proc-macro macros.rs -o libmacros.dylib
rustc main.rs --extern macros=libmacros.dylib
```
This doesn't:
```shell
rustc --crate-type proc-macro macros.rs -o libmacros.so
rustc main.rs --extern macros=libmacros.so
```
```
error: extern location for macros is of an unknown type: libmacros.so
--> main.rs:4:1
|
4 | extern crate macros;
| ^^^^^^^^^^^^^^^^^^^^
error: file name should be lib*.rlib or lib*.dylib
--> main.rs:4:1
|
4 | extern crate macros;
| ^^^^^^^^^^^^^^^^^^^^
error[E0463]: can't find crate for `macros`
--> main.rs:4:1
|
4 | extern crate macros;
| ^^^^^^^^^^^^^^^^^^^^ can't find crate
error: aborting due to 3 previous errors
For more information about this error, try `rustc --explain E0463`.
```
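Until the loader accepts arbitrary suffixes, one hypothetical workaround is to normalize the artifact's file name to the per-OS suffix rustc expects before passing it to `--extern` (the helper names below are mine, purely illustrative):

```shell
# Hypothetical workaround: hard-link the artifact under a name with the
# suffix rustc expects on this OS, and pass that name to --extern.
dylib_suffix() {
  case "$(uname -s)" in
    Darwin) echo dylib ;;
    *)      echo so ;;
  esac
}

normalize_proc_macro() {
  src="$1"                          # e.g. libmacros.so
  dst="${src%.*}.$(dylib_suffix)"   # becomes libmacros.dylib on macOS
  [ "$src" = "$dst" ] || ln -f "$src" "$dst"
  echo "$dst"
}
```

Usage would then look like `rustc main.rs --extern macros="$(normalize_proc_macro libmacros.so)"`.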
This has come up in Rust-For-Linux where we'd like to always use the .so suffix, but this causes the build to fail on macOS. If we change the suffix to .dylib, it works on macOS, but not Linux. | T-compiler,C-bug,A-crates,A-rust-for-linux,E-needs-design | low | Critical |
2,587,395,853 | go | cmd/compile: inlining of range funcs should be more aggressive | Consider the following program:
```go
package pkg
import (
"iter"
)
func trivialIterator() iter.Seq[int] {
return func(yield func(int) bool) {
yield(0)
}
}
func consumer() {
for range trivialIterator() {
foo()
foo()
}
}
//go:noinline
func foo() {}
```
Building it with `GOEXPERIMENT=newinliner` with `go1.24-cbdb3545ad` reports:
```
./foo.go:8:9: can inline trivialIterator.func1 with cost 60 as: func(func(int) bool) { yield(0) }
./foo.go:7:6: can inline trivialIterator with cost 17 as: func() iter.Seq[int] { return func literal }
./foo.go:21:6: cannot inline foo: marked go:noinline
./foo.go:13:6: cannot inline consumer: function too complex: cost 175 exceeds budget 160
./foo.go:14:2: cannot inline consumer-range1: function too complex: cost 190 exceeds budget 160
./foo.go:14:27: inlining call to trivialIterator with score -23
./foo.go:8:9: can inline consumer.trivialIterator.func1 with cost 60 as: func(func(int) bool) { yield(0) }
./foo.go:14:2: inlining call to consumer.trivialIterator.func1 with score 60
./foo.go:8:14: yield does not escape
./foo.go:8:9: func literal escapes to heap:
./foo.go:8:9: flow: ~r0 = &{storage for func literal}:
./foo.go:8:9: from func literal (spill) at ./foo.go:8:9
./foo.go:8:9: from return func literal (return) at ./foo.go:8:2
./foo.go:8:9: func literal escapes to heap
./foo.go:8:14: yield does not escape
./foo.go:14:2: consumer capturing by ref: #state1 (addr=false assign=true width=8)
./foo.go:14:2: func literal does not escape
./foo.go:14:27: func literal does not escape
```
Note that despite the utter trivialness of the loop body (`consumer-range1`), it cannot be inlined.
I think that inlining of range bodies deserves a special case and that the cost of the synthetic function is unimportant. If this were a "normal" loop, the body would inherently be part of the `consumer` function, no matter how complex it'd be. If we know that `yield` is only called in a single place syntactically, then we should be able to unconditionally inline the range func and produce code no more complex than for a normal loop (excluding the state tracking for range funcs, but that's fixed overhead).
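For illustration, a hand-desugared sketch of what such unconditional inlining could produce for the program above (ignoring the `#state` bookkeeping the real lowering inserts, and with a counter added so the result is observable):

```go
package main

import "fmt"

var calls int

//go:noinline
func foo() { calls++ }

// consumerInlined sketches consumer() after inlining both
// trivialIterator and the range body: since yield is called at exactly
// one syntactic site, the loop body can be substituted there directly,
// leaving code no more complex than a normal loop.
func consumerInlined() {
	// inlined body of trivialIterator's closure, with yield(0)
	// replaced by the loop body:
	{
		_ = 0 // the yielded value, unused by `for range`
		foo()
		foo()
	}
}

func main() {
	consumerInlined()
	fmt.Println("calls:", calls)
}
```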
I'm sure I'm missing something.
(The bot will find many related issues, but I don't think any of them directly mention this general case.)
| Performance,NeedsInvestigation,compiler/runtime | low | Major |
2,587,397,068 | flutter | Flutter Windows native implementation should provide a way to draw the system's buttons while customizing the title bar, to keep features like "snap layouts" | ### Use case
The Flutter Windows native implementation should provide a way to draw the system's buttons while customizing the title bar, to keep features like "snap layouts".
There are some packages on pub.dev that help customize the title bar by "disabling" the system one, but this disables native features like [Windows 11 snap layouts](https://learn.microsoft.com/en-us/windows/apps/desktop/modernize/ui/apply-snap-layout-menu), and there is no way to render them.
### Proposal
Create an API that lets the system draw the native buttons when using "frameless" mode from the window_manager package (or the bitsdojo_window_windows native hack in the runner's main.cpp), keeping features like "snap layouts". | c: new feature,platform-windows,c: proposal,a: desktop,P3,team-windows,triaged-windows | low | Minor |
2,587,399,896 | pytorch | Better document that non-blocking GPU -> CPU memory requires device sync | ### 🐛 Describe the bug
Here is a minimal working example of the issue.
```python
import torch
torch.manual_seed(0)
acc_blocking = None
acc_non_blocking = None
for j in range(3):
print(f"\n{j=}")
t_blocking = torch.randn(10_000, 1_000, dtype=torch.float16).cuda()
t_non_blocking = t_blocking + 5
print(f"{t_blocking[0, 0].item() = }")
print(f"minima of original tensors:\n\t{t_blocking.min().item()=}, {t_non_blocking.min().item()=}")
if j == 0:
acc_blocking = t_blocking.to(device="cpu", non_blocking=False)
acc_non_blocking = t_non_blocking.to(device="cpu", non_blocking=True)
else:
acc_blocking = torch.cat([acc_blocking, t_blocking.to(device="cpu", non_blocking=False)], dim=0)
acc_non_blocking = torch.cat([acc_non_blocking, t_non_blocking.to(device="cpu", non_blocking=True)], dim=0)
print(f"minima of accumulators:\n\t{acc_blocking.min().item()=}, {acc_non_blocking.min().item()=}")
```
Here is the terminal output.
```
j=0
t_blocking[0, 0].item() = -0.9013671875
minima of original tensors:
t_blocking.min().item()=-3.904296875, t_non_blocking.min().item()=1.095703125
minima of accumulators:
acc_blocking.min().item()=-3.904296875, acc_non_blocking.min().item()=1.095703125
j=1
t_blocking[0, 0].item() = 0.07061767578125
minima of original tensors:
t_blocking.min().item()=-3.904296875, t_non_blocking.min().item()=1.095703125
minima of accumulators:
acc_blocking.min().item()=-3.904296875, acc_non_blocking.min().item()=0.0
j=2
t_blocking[0, 0].item() = 0.662109375
minima of original tensors:
t_blocking.min().item()=-3.904296875, t_non_blocking.min().item()=1.095703125
minima of accumulators:
acc_blocking.min().item()=-3.904296875, acc_non_blocking.min().item()=0.0
```
As you can see, starting at j=1, `acc_non_blocking` clearly has the wrong minimum value. (It should always be the minimum of the minima of all previous instances of `t_non_blocking`.)
Here are a few further comments.
* This also occurs with bfloat16, but seemingly not with float32.
* This occurs with `10 ** 7` entries in the tensors (as above), but seemingly not with `10 ** 6` or fewer entries.
* This seems not to occur if the summand 5 is replaced by a smaller number that keeps `t_non_blocking.min().item()` negative. (For instance, it occurs when the summand is 3.91, but not when it's 3.90.)
* The fact that in each iteration the tensors' minima are all the same seems likely due to the imprecision of (b)float16. The top-left entry is printed just to confirm that the tensor is indeed changing at each iteration.
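Per the issue title, the missing step is a device synchronization before the non-blocking copy's destination is read. A minimal sketch of the fix (the helper name `to_cpu_safely` is mine, not an existing API):

```python
import torch

def to_cpu_safely(t: torch.Tensor) -> torch.Tensor:
    """Device-to-host copy that is safe to read immediately.

    `non_blocking=True` only enqueues the D2H copy; reading the result
    (e.g. via torch.cat or .min()) before the copy finishes can observe
    uninitialized pinned memory, which matches the zeros seen above.
    """
    out = t.to(device="cpu", non_blocking=True)
    if t.is_cuda:
        # Wait for the pending copy before anyone touches `out`.
        torch.cuda.synchronize(t.device)
    return out
```

With this helper in the accumulation loop (or a single `torch.cuda.synchronize()` before the `torch.cat` calls), the blocking and non-blocking accumulators should agree.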
P.S. This is my first bug report, please let me know if anything is missing or could be improved.
CC: @dhruvbpai
### Versions
Collecting environment information...
PyTorch version: 2.1.2+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Amazon Linux 2 (x86_64)
GCC version: (GCC) 7.3.1 20180712 (Red Hat 7.3.1-17)
Clang version: Could not collect
CMake version: version 2.8.12.2
Libc version: glibc-2.26
Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-5.10.223-212.873.amzn2.x86_64-x86_64-with-glibc2.26
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
Stepping: 7
CPU MHz: 3599.918
BogoMIPS: 5999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 36608K
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Versions of relevant libraries:
[pip3] galore-torch==1.0
[pip3] numpy==1.26.4
[pip3] torch==2.1.2
[pip3] torchtyping==0.1.5
[pip3] torchvision==0.16.2
[pip3] triton==2.1.0
[conda] No relevant packages
cc @svekars @brycebortree @sekyondaMeta @ptrblck @msaroufim | module: docs,module: cuda,triaged,module: correctness (silent) | low | Critical |
2,587,427,211 | godot | AnimationNodeTransition transition_to_self doesn't fade when transitioning to self. | ### Tested versions
- reproducible in godot v4.3 (stable)
### System information
Windows 10 - Godot 4.3 (Stable) - Vulkan Forward+
### Issue description
When you make a transition node in an AnimationTree and set it to transition to self, it does not fade; instead it simply restarts the animation entirely, unless you create a duplicate animation input.
This could be how it's intended, but it feels wrong that when I want to fade an animation to itself on an AnimationNodeTransition I have to make a copy of that animation.
### Steps to reproduce
Create a new AnimationTree, create an AnimationNodeTransition, and set it to transition to self. Give it an xfade time. Have it transition to self while the game is running. Instead of fading to itself, it will simply restart the animation.
### Minimal reproduction project (MRP)
[animationtransitionnode.zip](https://github.com/user-attachments/files/17371131/animationtransitionnode.zip)
| bug,topic:animation | low | Minor |
2,587,428,204 | svelte | False positive `ownership_invalid_binding`? | ### Describe the bug
I'm getting the following message, telling me to consider `bind:` between the two components. However, I am, in fact, binding between the two.

### Reproduction
https://svelte.dev/playground/c27c1016f1f2423489568d9fb2271921?version=5.2.5
### Logs
```shell
[svelte] ownership_invalid_bindingsrc/lib/calendar-with-select.svelte passed a value to node_modules/.pnpm/bits-ui@1.0.0-next.17_svelte@5.0.0-next.265/node_modules/bits-ui/dist/bits/calendar/components/calendar.svelte with `bind:`, but the value is owned by node_modules/.pnpm/bits-ui@1.0.0-next.17_svelte@5.0.0-next.265/node_modules/bits-ui/dist/bits/calendar/components/calendar.svelte. Consider creating a binding between node_modules/.pnpm/bits-ui@1.0.0-next.17_svelte@5.0.0-next.265/node_modules/bits-ui/dist/bits/calendar/components/calendar.svelte and src/lib/calendar-with-select.svelte
```
### System Info
```shell
System:
OS: macOS 15.0
CPU: (12) arm64 Apple M2 Max
Memory: 92.23 MB / 32.00 GB
Shell: 5.9 - /bin/zsh
Binaries:
Node: 20.15.1 - ~/.nvm/versions/node/v20.15.1/bin/node
npm: 10.7.0 - ~/.nvm/versions/node/v20.15.1/bin/npm
pnpm: 9.6.0 - ~/Library/pnpm/pnpm
bun: 1.0.25 - ~/.bun/bin/bun
Browsers:
Edge: 129.0.2792.89
Safari: 18.0
npmPackages:
svelte: ^5.0.0-next.1 => 5.0.0-next.265
```
### Severity
annoyance | bug,awaiting submitter | medium | Critical |
2,587,452,500 | ant-design | Tree: after showing icons, items are not aligned, and multi-line text and the icon are not on the same line | ### Reproduction link
[https://ant.design/components/tree-cn#tree-demo-line](https://ant.design/components/tree-cn#tree-demo-line)
### Steps to reproduce
In short, it just looks messy.


### What is expected?
Aligned
### What is actually happening?
Not aligned
| Environment | Info |
| --- | --- |
| antd | 5.21.4 |
| React | 18.3.1 |
| System | windows 11 |
| Browser | chrome latest |
<!-- generated by ant-design-issue-helper. DO NOT REMOVE --> | Inactive,improvement | low | Minor |
2,587,462,517 | flutter | StateError: Bad state: No running isolate (inspector is not set). | First appeared in 3.23.0
Appears to be exclusive to `flutter run --machine --start-paused -d chrome`.
Affects ~1.3% of clients that have reported a crash on 3.24.3 (which has a crash rate of ~1%)
Stack trace as of 3.24.3:
```
StateError: Bad state: No running isolate (inspector is not set).
at ChromeProxyService.inspector(chrome_proxy_service.dart:78)
at _waitForResumeEventToRunMain.<anonymous closure>(dwds_vm_client.dart:308)
at _rootRunUnary(zone.dart:1415)
at _CustomZone.runUnary(zone.dart:1308)
at _CustomZone.runUnaryGuarded(zone.dart:1217)
at _BufferingStreamSubscription._sendData(stream_impl.dart:365)
at _DelayedData.perform(stream_impl.dart:541)
at _PendingEvents.handleNext(stream_impl.dart:646)
at _PendingEvents.schedule.<anonymous closure>(stream_impl.dart:617)
at StackZoneSpecification._run(stack_zone_specification.dart:207)
at StackZoneSpecification._registerCallback.<anonymous closure>(stack_zone_specification.dart:114)
at _rootRun(zone.dart:1391)
at _CustomZone.run(zone.dart:1301)
at _CustomZone.runGuarded(zone.dart:1209)
at _CustomZone.bindCallbackGuarded.<anonymous closure>(zone.dart:1249)
at StackZoneSpecification._run(stack_zone_specification.dart:207)
at StackZoneSpecification._registerCallback.<anonymous closure>(stack_zone_specification.dart:114)
at _rootRun(zone.dart:1399)
at _CustomZone.run(zone.dart:1301)
at _CustomZone.runGuarded(zone.dart:1209)
at _CustomZone.bindCallbackGuarded.<anonymous closure>(zone.dart:1249)
at _microtaskLoop(schedule_microtask.dart:40)
at _startMicrotaskLoop(schedule_microtask.dart:49)
at _runPendingImmediateCallback(isolate_patch.dart:118)
at _RawReceivePort._handleMessage(isolate_patch.dart:185)
```
Potentially relevant files:
* https://github.com/dart-lang/webdev/blob/main/dwds/lib/src/dwds_vm_client.dart
* https://github.com/dart-lang/webdev/blob/main/dwds/lib/src/services/chrome_proxy_service.dart | c: crash,dependency: dart,P2,team-tool,triaged-tool,dependency:dart-triaged | low | Critical |
2,587,466,230 | deno | [Bug] fs.exists, Deno.readTextFile, Deno.writeTextFile Fail to Work on Memory Disk | Version: Deno 2.0.0
OS: Windows 10 x86_64
```txt
deno 2.0.0 (stable, release, x86_64-pc-windows-msvc)
v8 12.9.202.13-rusty
typescript 5.6.2
```
## Intro
I use a memory disk created by [ImDisk](https://sourceforge.net/projects/imdisk-toolkit/). Its setup is as follows. However, some Deno file system APIs fail to work on it.

## Reproduce the Issue
Here are the steps.
### Create a test file
```sh
cd /d y:\
echo test > a.txt
```
### Create a test typescript file
```typescript
import { exists } from "https://deno.land/std/fs/mod.ts";
Deno.readTextFile("y:\\a.txt")
.then((x) => console.log(x))
.catch((e) => console.error(e));
Deno.writeTextFile("y:\\test.txt", "test")
.then((x) => console.log(x))
.catch((e) => console.error(e));
exists("y:\\a.txt")
.then((x) => console.log(x))
.catch((e) => console.error(e));
```
### Execute the typescript file
The error message is as follows.
```txt
Error: Incorrect function. (os error 1): readfile 'y:\a.txt'
at Object.readTextFile (ext:deno_fs/30_fs.js:777:24)
at file:///C:/test.ts:3:6 {
code: "EISDIR"
}
Error: Incorrect function. (os error 1): writefile 'y:\test.txt'
at writeFile (ext:deno_fs/30_fs.js:835:13)
at Object.writeTextFile (ext:deno_fs/30_fs.js:877:12)
at file:///C:/test.ts:6:6 {
code: "EISDIR"
}
Error: Incorrect function. (os error 1): stat 'y:\a.txt'
at async Object.stat (ext:deno_fs/30_fs.js:407:15)
at async exists (https://deno.land/std@0.224.0/fs/exists.ts:112:18) {
code: "EISDIR"
}
```
## Summary
The impacted file system APIs are not restricted to the ones in the test. There might be more of them.
Actually, not all file system APIs fail. E.g. `Deno.copyFile`, `Deno.remove` work well. I guess because they leverage OS native API. | bug,windows,ext/fs | low | Critical |
2,587,500,612 | TypeScript | Using types from private class properties results in `any` types in `.d.ts` files | ### 🔎 Search Terms
class private any index access
### 🕗 Version & Regression Information
- This is the behavior in every version I tried, and I reviewed the FAQ for entries
### ⏯ Playground Link
https://www.typescriptlang.org/play/?#code/KYDwDg9gTgLgBAYwDYEMDOa4DEITgbwCg4S4woBLANxRmDgH1yIxhYBPALjjRkoDsA5nAC8cAOTiA3MVIII-XlACuCGNAAUKKIO44IAbXFMoLNjHbiAugEoCskgF9CzoA
### 💻 Code
```ts
export class Foo {
private _property: string = '';
constructor(arg: Foo['_property']) {
}
}
```
### 🙁 Actual behavior
Generated `.d.ts` is:
```ts
export declare class Foo {
private _property;
constructor(arg: Foo['_property']);
}
```
### 🙂 Expected behavior
Generated `.d.ts` is:
```ts
export declare class Foo {
private _property: string;
constructor(arg: Foo['_property']);
}
// or
export declare class Foo {
private _property;
constructor(arg: string);
}
// or
/* Typescript Errors on the code to tell you it's going to generate an `any` */
```
### Additional information about the issue
This behaviour is problematic because it creates a desync between consumers of `.ts` files and consumers of `.d.ts` files for the same code.
For example:
```ts
new Foo(1);
```
If the `Foo` type comes from the `.ts` file, then TS will error on this code as it can see that the argument type is `string`.
OTOH if the `Foo` type comes from the `.d.ts` file, then TS will **_NOT_** error on this code as it sees the argument type as `any`.
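One workaround (my own sketch, not an officially recommended pattern) is to give the property's type a name at module scope, so neither the constructor signature nor the emitted `.d.ts` ever needs to index into the private member:

```typescript
// Naming the type once avoids Foo['_property'] in public positions,
// so the declaration emitter has a non-private type to print.
type FooProperty = string;

export class Foo {
  private _property: FooProperty = '';
  constructor(arg: FooProperty) {
    this._property = arg;
  }
}
```

The emitted declaration should then carry `constructor(arg: FooProperty)` (or the inlined alias) rather than silently degrading to `any`.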
---
We have just uncovered this at Canva.
A user reported an error showing up in their IDE against our `master` branch (i.e. code that has passed CI as typechecked).
The code is structured such that the file with the error (A) is in a separate project to the file declaring the class (B).
This means that we have the exact scenario above where (A) consumes (B)'s `.d.ts` during our CLI builds, but (A) consumes (B)'s `.ts` within the IDE.
This pattern of declaring a type based on a private property's type is quite pervasive across our codebase and it's surprising that this is the first problem that's been actively revealed. | Bug | low | Critical |
2,587,509,036 | rust | Elided lifetime changes in `rust_2018_idioms` lint is very noisy and results in dramatically degraded APIs for Bevy | # Problem
With the upcoming release of Rust 2024 edition, we're concerned that `rust_2018_idioms` will be deny by default.
*[Editorial comment (TC): This is not an edition item for Rust 2024, and the edition is not accepting any new items, so we can say definitely that this will not be tied to the release of Rust 2024. See [here](https://github.com/rust-lang/rust/issues/131725#issuecomment-2416993025).]*
We investigated what these changes will entail for Bevy in https://github.com/bevyengine/bevy/pull/15916, and the impact is quite severe. Our primary user-facing system and query APIs are littered with meaningless lifetimes.
This is a much worse experience with no upside for us, and Bevy and its entire ecosystem will have to manually allow this lint on every project.
## Proposed solution
We would appreciate it if the elided-lifetimes lint could be split out from the rest of the `rust_2018_idioms` lint group, which we generally liked the effect of.
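For projects that want to keep the rest of the group, a per-workspace opt-out is already expressible in `Cargo.toml` (sketch; assumes the lint keeps its current `elided_lifetimes_in_paths` name, and uses Cargo's `priority` field so the specific `allow` overrides the group):

```toml
[lints.rust]
# Keep rust_2018_idioms, but at lower priority so the allow below wins.
rust_2018_idioms = { level = "warn", priority = -1 }
elided_lifetimes_in_paths = "allow"
```

This still has to be repeated in every project, which is part of why splitting the lint out would help.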
Ideally this would be off by default as well, to avoid needing to teach new users to turn it off as a critical part of project setup. | A-lints,T-lang,C-discussion,L-elided_lifetimes_in_paths,A-edition-2018,I-lang-radar | medium | Critical |
2,587,593,657 | rust | Tracking Issue for bootstrap spurious rebuilds | This is a tracking issue for collecting spurious rebuild issues in bootstrap. Spurious rebuilds refer to rebuilds that seem unnecessary, e.g. if running `./x test run-make` twice in a row without modifying any sources rebuilds cargo. Note that this is not *always* the case, e.g. at the time when this issue was created, `mir-opt` tests build a special std with `mir-opt` `RUSTFLAGS`, which will need to be rebuilt if trying to go to stage2 because a stage2 rustc will expect a stage1 "standard" std build without `mir-opt` `RUSTFLAGS` (there are other solutions to that, not in the scope of this issue).
## Categories
NOTE: There is only currently one category of spurious rebuilds that I am acutely aware of, it's entirely possible to have other classes of causes for spurious rebuilds.
### Differing `RUSTFLAGS`
While some of these issues are closed, the fixes are usually stopgap solutions.
- [ ] #131636
- [x] #131437
- [x] #130108
- Remark: this is `RUSTFLAGS` too via `cargo.configure_linker`.
- [x] #126464
- [x] #123177
### Preventing spurious rebuilds due to differing `RUSTFLAGS`
To solve the class of spurious rebuilds due to differing `RUSTFLAGS`, we will need to properly handle them. Specifically, we need to (https://github.com/rust-lang/rust/issues/131636#issuecomment-2412844566):
- Centralize `RUSTFLAGS` handling to forbid naive "conditional" `RUSTFLAGS`.
- Conditional `RUSTFLAGS` should be propagated to bootstrap shims using environment variable (something like `IMPLICIT_RUSTFLAGS`) so they can't invalidate the build cache.
- Ensure bootstrap never sets any conditional rustflag explicitly on cargo.
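As a sketch of the proposal (names like `IMPLICIT_RUSTFLAGS` come from the proposal itself and the helper is hypothetical): a rustc shim could splice conditional flags in from the environment, so they never appear on the command line that cargo fingerprints for rebuild decisions.

```shell
# Hypothetical rustc-shim fragment: conditional flags travel via an
# env var instead of cargo's command line, so changing them does not
# change the arguments cargo hashes for rebuild decisions.
compose_rustc_args() {
  if [ -n "${IMPLICIT_RUSTFLAGS:-}" ]; then
    printf '%s %s' "$*" "$IMPLICIT_RUSTFLAGS"
  else
    printf '%s' "$*"
  fi
}
# A real shim would then do something like:
#   exec "$RUSTC_REAL" $(compose_rustc_args "$@")
```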
### Discussion
https://rust-lang.zulipchat.com/#narrow/stream/326414-t-infra.2Fbootstrap/topic/Mechanism.20for.20better.20RUSTFLAGS.20handling | E-hard,T-bootstrap,C-bug,C-tracking-issue,E-needs-investigation | medium | Major |
2,587,593,657 | PowerToys | KeyboardManagerEngine can't run. | ### Microsoft PowerToys version
0.85.1
### Installation method
Microsoft Store
### Running as admin
Yes
### Area(s) with issue?
Keyboard Manager
### Steps to reproduce
When running for long periods of time, the keyboardManagerEngine will be killed
### ✔️ Expected Behavior
_No response_
### ❌ Actual Behavior
_No response_
### Other Software
_No response_ | Issue-Bug,Needs-Triage | low | Minor |
2,587,679,133 | flutter | Gboard Keyboard language change is not working when `enableSuggestions` is false | ### Steps to reproduce
Samsung Galaxy, Xiaomi, etc. GBoard keyboard language change not working.
1. Put focus on text field.
2. Click on language change button on bottom GBoard keyboard.
3. Not working.
- The keyboard language change does not work when the `enableSuggestions` option is set to `false`.
- We have verified that third-party keyboards work even when the `enableSuggestions` value is `false`.
### Expected results
The Keyboard language should be changed normally.
### Actual results
The Keyboard language change not working.
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return const MaterialApp(
home: MyHomePage(),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key});
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text("GBoard Change Language"),
),
body: const Center(
child: TextField(
enableSuggestions: false,
),
),
);
}
}
```
</details>
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/53511451-313c-4b23-8d01-51e4428434d3
</details>
### Logs
<details open><summary>Logs</summary>
```console
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.3, on macOS 13.6.1 22G313 darwin-x64, locale ko-KR)
• Flutter version 3.24.3 on channel stable at /Users/josephnk/Library/flutter
• Upstream repository
https://github.com/flutter/flutter.git
• Framework revision 2663184aa7 (5 weeks ago), 2024-09-11 16:27:48 -0500
• Engine revision 36335019a8
• Dart version 3.5.3
• DevTools version 2.37.3
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/josephnk/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• ANDROID_HOME = /Users/josephnk/Library/Android/sdk
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.9+0-17.0.9b1087.7-11185874)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.0.1)
• Xcode at /Applications/Xcode.app/Contents/Developer
• Build 15A507
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2023.2)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨
https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨
https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.9+0-17.0.9b1087.7-11185874)
[✓] IntelliJ IDEA Ultimate Edition (version 2024.2.0.2)
• IntelliJ at /Applications/IntelliJ IDEA.app
• Flutter plugin version 82.0.3
• Dart plugin version 242.20629
[✓] VS Code (version 1.94.2)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension can be installed from:
🔨
https://marketplace.visualstudio.com/items?itemName=Dart-Code.flutter
[✓] Connected device (3 available)
• SM G977N (mobile) • R3CM40BAQ0Y • android-arm64 • Android 12 (API 31)
• macOS (desktop) • macos • darwin-x64 • macOS 13.6.1 22G313 darwin-x64
• Chrome (web) • chrome • web-javascript • Google Chrome 129.0.6668.100
! Error: Browsing on the local area network for iPhone11. Ensure the device is unlocked and attached with a cable or
associated with the same local area network as this Mac.
The device must be opted into Developer Mode to connect wirelessly. (code -27)
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| a: text input,platform-android,a: internationalization,has reproducible steps,P1,team-text-input,triaged-text-input,found in release: 3.24,found in release: 3.27 | medium | Critical |
2,587,720,138 | next.js | NextJs requests custom route when using `navigator.clipboard.writeText` | ### Link to the code that reproduces this issue
https://github.com/cuongle-hdwebsoft/nextjs-bug
### To Reproduce
1. Start the application in development mode
2. Go to homepage http://localhost:8080/
3. Click the button `Click me`, and wait until it alerts `Copy clipboard successfully`
4. Open server log in the terminal, it will appear `==> This route is called`
### Current vs. Expected behavior
Current behavior:
- When I use `navigator.clipboard.writeText` to copy the text, a request is always sent to my custom route
Expected behavior:
- It should not send a request to the custom route when using `navigator.clipboard.writeText`
### Provide environment information
```bash
Operating System:
Platform: darwin
Arch: arm64
Version: Darwin Kernel Version 24.0.0: Mon Aug 12 20:51:54 PDT 2024; root:xnu-11215.1.10~2/RELEASE_ARM64_T6000
Available memory (MB): 16384
Available CPU cores: 10
Binaries:
Node: 20.11.1
npm: 10.2.4
Yarn: 1.22.22
pnpm: 9.12.0
Relevant Packages:
next: 14.2.15 // Latest available version is detected (14.2.15).
eslint-config-next: 14.2.15
react: 18.3.1
react-dom: 18.3.1
typescript: 5.6.3
Next.js Config:
output: N/A
```
### Which area(s) are affected? (Select all that apply)
create-next-app
### Which stage(s) are affected? (Select all that apply)
next dev (local)
### Additional context
_No response_ | create-next-app,bug | low | Critical |
2,587,740,010 | pytorch | Network training silently fails with MPS backend | ### 🐛 Describe the bug
I'm working with a small conv1D network. The network trains as expected when running the script with a CPU (or CUDA) backend. But if the backend is switched to MPS, the network training seems to fail silently (no errors are raised, but the network doesn't learn anything as it did with CPU or CUDA backend). The difference between the network in the script trained using MPS vs. CPU are substantial, and even more so if the only ReLu layer in the network is removed (see code and output snippets below).
I tried to whittle down the code to isolate what is causing the issue, but wasn't really able to do that too well. The code (and the data required to train it) are on [this](https://gist.github.com/UjasShah/c7785e79ae1049f56ce5c39643498a00) public gist (you might have to scroll down quite a bit for the code). The code really just is 1) manipulating the data a bit to make it as the network expects it, 2) the network itself, 3) a masked cross entropy loss function and 4) some optional code to produce outputs from a trained network.
The data can also be gotten from [here](https://github.com/karpathy/makemore/blob/master/names.txt)
I've pasted the network training below to show the difference between MPS and CPU. Shown below are differences I've seen consistently over many runs.
CPU:
```
Using cpu device
step: 0, train loss: 3.22050, val loss: 3.23820
step: 1000, train loss: 2.15283, val loss: 2.39201
```
MPS:
```
Using mps device
step: 0, train loss: 3.23463, val loss: 3.25037
step: 1000, train loss: 2.80399, val loss: 2.92336
```
Another example of networks training differently is when I remove the only ReLu layer in the network
CPU:
```
Using cpu device
step: 0, train loss: 3.10652, val loss: 3.13682
step: 1000, train loss: 2.25550, val loss: 2.44400
```
MPS:
```
Using mps device
step: 0, train loss: 3.24397, val loss: 3.26047
step: 1000, train loss: 1952947712.00000, val loss: 2297497600.00000
```
Sorry for the long code! As I said, I tried to get to the exact operation causing this bug but wasn't able to. This is also my first GitHub issue so any pointers on making this issue better or triaging the bug would be welcome!
Also wanted to link [another](https://github.com/pytorch/pytorch/issues/137001) similar issue I saw (might not be a lot of help though).
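As a debugging aid for this kind of silent divergence (my own sketch, not a fix), one can train twin copies of the model on CPU and MPS from identical initial weights and measure where their parameters drift apart:

```python
import torch

def max_param_divergence(model_a, model_b):
    """Largest absolute elementwise difference between corresponding
    parameters of two models, compared on CPU. If an MPS twin silently
    diverges from a CPU twin after identical steps, this grows fast."""
    return max(
        (pa.detach().cpu() - pb.detach().cpu()).abs().max().item()
        for pa, pb in zip(model_a.parameters(), model_b.parameters())
    )
```

Checking this after each optimizer step (and doing the same per-layer on activations) can help localize which op first misbehaves on MPS.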
### Versions
```
Collecting environment information...
PyTorch version: 2.4.1
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 15.0.1 (arm64)
GCC version: Could not collect
Clang version: 16.0.0 (clang-1600.0.26.3)
CMake version: Could not collect
Libc version: N/A
Python version: 3.9.6 (default, Aug 9 2024, 14:24:13) [Clang 16.0.0 (clang-1600.0.26.3)] (64-bit runtime)
Python platform: macOS-15.0.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3
Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] torch==2.4.1
[pip3] torchtext==0.18.0
[pip3] torchvision==0.19.1
[conda] Could not collect
```
cc @kulinseth @albanD @malfet @DenisVieriu97 @jhavukainen | needs reproduction,triaged,module: correctness (silent),module: mps | low | Critical |
2,587,762,022 | PowerToys | Adding the mouse highlighter mode command to the quick access menu | ### Description of the new feature / enhancement
Could we add the ability to activate or deactivate the mouse highlighter mode from the quick access menu?
This could look like a checkbox.
This would behave identically to the activation keyboard shortcut.
If the highlighter is only used from time to time, the keyboard shortcut is easily forgotten; an action in the quick access menu would solve this problem.

### Scenario when this would be used?
If we use the highlighter from time to time, the keyboard shortcut is forgotten and the action on the quick access menu solves this problem.
### Supporting information
_No response_ | Needs-Triage | low | Minor |
2,587,795,739 | vscode | No activated agent with id in remote window |
Does this issue occur when all extensions are disabled?: Yes/No
Version: 1.94.2
Commit: 384ff7382de624fb94dbaf6da11977bba1ecd427
Date: 2024-10-09T16:08:44.566Z
Electron: 30.5.1
ElectronBuildId: 10262041
Chromium: 124.0.6367.243
Node.js: 20.16.0
V8: 12.4.254.20-electron.0
OS: Darwin arm64 23.6.0
Steps to Reproduce:
1. I have a Copilot running on the remote extension host process, in remote window.
2. The chat participant is contributed by the local extension.
| bug,remote,confirmation-pending,chat | low | Critical |
2,587,799,122 | pytorch | inconsistency in ```torch.special.entr``` on CPU and GPU | ### 🐛 Describe the bug
getting different results on CPU and GPU when computing ```torch.special.entr```
```cpp
#include <iostream>
#include <torch/torch.h>

int main() {
    torch::Tensor tensor = torch::tensor({
        {
            {
                {{-1.4375, -1.1172}, {-0.3379, -0.2910}},
                {{-0.7227, -0.6094}, {0.1611, 0.1992}}
            },
            {
                {{0.0464, -1.6797}, {1.5156, -0.5039}},
                {{-0.9531, -0.0569}, {-0.0757, 2.7969}}
            }
        },
        {
            {
                {{-0.0444, -0.2441}, {0.8164, -2.0625}},
                {{0.0046, -0.5547}, {-1.8750, 1.7422}}
            },
            {
                {{1.9375, 0.2930}, {0.2480, -0.9180}},
                {{-0.7305, 0.1426}, {0.0605, 0.2578}}
            }
        }
    }, torch::kBFloat16);

    auto tensor_cuda = tensor.cuda();
    std::cout << "initialized tensor (CPU):\n" << tensor << std::endl;

    auto result_cpu = torch::special::entr(tensor);
    auto result_gpu = torch::special::entr(tensor_cuda);
    std::cout << "CPU result: \n" << result_cpu << std::endl;
    std::cout << "GPU result: \n" << result_gpu << std::endl;

    bool inconsistent = !torch::allclose(result_cpu, result_gpu.cpu(), 1e-03, 1e-02);
    std::cout << "inconsistency with atol=1e-02 and rtol=1e-03: " << std::boolalpha << inconsistent << std::endl;
}
```
The program outputs (take a look at matrix (2,1,2,.,.) in the CPU and GPU results):
```
initialized tensor (CPU):
(1,1,1,.,.) =
-1.4375 -1.1172
-0.3379 -0.2910
(2,1,1,.,.) =
-0.0444 -0.2441
0.8164 -2.0625
(1,2,1,.,.) =
0.0464 -1.6797
1.5156 -0.5039
(2,2,1,.,.) =
1.9375 0.2930
0.2480 -0.9180
(1,1,2,.,.) =
-0.7227 -0.6094
0.1611 0.1992
(2,1,2,.,.) =
0.0046 -0.5547
-1.8750 1.7422
(1,2,2,.,.) =
-0.9531 -0.0569
-0.0757 2.7969
(2,2,2,.,.) =
-0.7305 0.1426
0.0605 0.2578
[ CPUBFloat16Type{2,2,2,2,2} ]
CPU result:
(1,1,1,.,.) =
-inf -inf
-inf -inf
(2,1,1,.,.) =
-inf -inf
0.1660 -inf
(1,2,1,.,.) =
0.1426 -inf
-0.6289 -inf
(2,2,1,.,.) =
-1.2812 0.3594
0.3457 -inf
(1,1,2,.,.) =
-inf -inf
0.2949 0.3223
(2,1,2,.,.) =
0.01 *
2.4780 -inf
-inf -96.4844
(1,2,2,.,.) =
-inf -inf
-inf -2.8906
(2,2,2,.,.) =
-inf 0.2773
0.1689 0.3496
[ CPUBFloat16Type{2,2,2,2,2} ]
GPU result:
(1,1,1,.,.) =
-inf -inf
-inf -inf
(2,1,1,.,.) =
-inf -inf
0.1660 -inf
(1,2,1,.,.) =
0.1426 -inf
-0.6289 -inf
(2,2,1,.,.) =
-1.2812 0.3594
0.3457 -inf
(1,1,2,.,.) =
-inf -inf
0.2949 0.3223
(2,1,2,.,.) =
0.01 *
2.4780 -inf
-inf -96.8750
(1,2,2,.,.) =
-inf -inf
-inf -2.8750
(2,2,2,.,.) =
-inf 0.2773
0.1699 0.3496
[ CUDABFloat16Type{2,2,2,2,2} ]
inconsistency with atol=1e-02 and rtol=1e-03: true
```
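For reference when reading the dumps above: `entr(x)` is defined elementwise as `-x*log(x)` for `x > 0`, `0` at `x = 0`, and `-inf` for `x < 0`, so the `-inf` entries simply mirror the negative inputs; only the finite entries (e.g. the `(2,1,2,.,.)` block) can actually disagree between CPU and GPU. A pure-Python sketch of the definition (stdlib only, not the PyTorch kernel):

```python
import math

def entr(x: float) -> float:
    # Elementwise definition matching torch.special.entr
    if x > 0:
        return -x * math.log(x)
    if x == 0.0:
        return 0.0
    return float("-inf")

# Negative inputs map to -inf; small positive inputs stay finite.
print(entr(-1.4375))  # -inf
print(entr(0.0046))   # ~0.0248, the finite entry that differs above
```

The remaining discrepancy is therefore confined to bfloat16 rounding of finite values, not to the sign handling.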
### Versions
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 16.0.4 (https://github.com/llvm/llvm-project ae42196bc493ffe877a7e3dff8be32035dea4d07)
CMake version: version 3.26.4
Libc version: glibc-2.35
Python version: 3.9.19 (main, May 6 2024, 19:43:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 550.78
cuDNN version: Probably one of the following:
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.2
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Gold 6248R CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
CPU max MHz: 4000.0000
CPU min MHz: 1200.0000
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] optree==0.11.0
[pip3] torch==2.2.0a0+git9fa3350
[conda] magma-cuda121 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-include 2023.1.0 h06a4308_46344
[conda] numpy 1.26.4 pypi_0 pypi
[conda] optree 0.11.0 pypi_0 pypi
[conda] torch 2.2.0a0+git9fa3350 dev_0
cc @mruberry @kshitij12345 | triaged,module: correctness (silent),module: special | low | Critical |
2,587,821,400 | excalidraw | Feature - Preview map | Hi,
As the size of whiteboards increases, updating the viewport by dragging becomes inefficient for users.
A `preview map` of the whole whiteboard would be really helpful, and I would like to help implement this feature.
**Functional Requirements:**
1. **Toggle Visibility**
   - Open/Close Preview Map: users can click a designated button to open or close the preview map.
   - Provide a keyboard shortcut to toggle the preview map for quick access.
2. **Display of Whiteboard Content**
   - Complete Outline View: the preview map displays outlines of all shapes and objects present on the whiteboard, maintaining aspect ratio to prevent distortion of the whiteboard representation.
   - Real-time Updates: the preview map updates in real time to reflect additions, deletions, or modifications on the whiteboard.
3. **Viewport Representation**
   - Current Viewport Indicator: the preview map includes a highlighted rectangle indicating the area currently visible in the main viewport.
   - Dynamic Updating: as users navigate the whiteboard (e.g., pan or zoom), the viewport indicator updates correspondingly in the preview map.
4. **Navigation via Preview Map**
   - Click to Navigate: users can click on any area within the preview map to instantly move the main viewport to that location.
   - Drag to Pan: users can drag the viewport indicator within the preview map to pan the main viewport smoothly.
5. **Performance Optimization**
   - Efficient Rendering: the preview map should render efficiently without causing lag, even on large or complex whiteboards.
   - Resource Management: implement optimizations to minimize CPU and memory usage.
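The click-to-navigate and viewport-indicator requirements boil down to one linear rescale between preview-map and board coordinates; a small language-agnostic sketch (Python here, all function and parameter names are illustrative, not Excalidraw APIs):

```python
def board_scale(minimap_w, minimap_h, board_w, board_h):
    # One uniform scale factor preserves aspect ratio.
    return min(minimap_w / board_w, minimap_h / board_h)

def minimap_to_board(click_x, click_y, minimap_w, minimap_h, board_w, board_h):
    # Map a click on the preview map to board coordinates (click-to-navigate).
    s = board_scale(minimap_w, minimap_h, board_w, board_h)
    return click_x / s, click_y / s

def viewport_rect_on_minimap(vp_x, vp_y, vp_w, vp_h,
                             minimap_w, minimap_h, board_w, board_h):
    # Where the current-viewport indicator rectangle is drawn on the map.
    s = board_scale(minimap_w, minimap_h, board_w, board_h)
    return vp_x * s, vp_y * s, vp_w * s, vp_h * s

# e.g. a 200x100 map of a 2000x1000 board: a click at (100, 50)
print(minimap_to_board(100, 50, 200, 100, 2000, 1000))
```

Drag-to-pan is the same mapping applied continuously to the indicator's position while dragging.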
| enhancement | low | Major |
2,587,844,639 | TypeScript | `keyof Readonly<string[]>` different from `keyof Readonly<T>` where T is string[] | ### 🔎 Search Terms
keyof Readonly array
### 🕗 Version & Regression Information
Latest to date
### ⏯ Playground Link
https://www.typescriptlang.org/play/?#code/C4TwDgpgBAKgjFAvFAShAhgEwPYDsA2IAPAM7ABOAlrgOYDaAugHwDcUA9O1ORjgSFDJVajALAAoUJFQJkAawghsAM1hw2nKAqWqeWPIUEVq9BhIlToAaUUB5ZWn38iMJki2KVqXgeKuWFuDQKABM7jYg9o58hKTGIswaXIAy5B46RsKmEkA
### 💻 Code
```ts
type T1 = Readonly<string[]>; // readonly string[]
type R1 = keyof T1; // keyof readonly string[]
type KeyOfReadonly<T> = keyof Readonly<T>;
type R2 = KeyOfReadonly<string[]>; // ❌ keyof string[]
```
### 🙁 Actual behavior
`KeyOfReadonly<string[]>` resolves to `keyof string[]`; the deferred `Readonly<T>` is not evaluated to `readonly string[]` before `keyof` is taken.
### 🙂 Expected behavior
`KeyOfReadonly<string[]>` resolves to `keyof readonly string[]`, matching the non-generic `keyof Readonly<string[]>` case.
### Additional information about the issue
What can I say, I really hope this is not the intended behavior. | Needs Investigation | low | Minor |
2,587,872,583 | vscode | VSCode is taking too long to execute first python test |
Type: <b>Bug</b>
VS Code takes too long to execute the first Python test after refreshing the tests. This time increases quadratically with the number of tests in the working directory. A side effect is that, in repos with a large number of tests (especially parametrized tests), it drives the CPU consumption of the extension host process to 100% and causes the VS Code window to freeze. It only works again after reloading the window.
|Number of tests|Time taken to run first test (s)
|---|---|
|10000|8.6|
|20000|14.1|
|30000|46.1|
|40000|94.6|
|50000|194.5|
|60000|344.3|

Note that the time taken by the run_adapter script to discover tests is proportional to the number of tests, so this issue is in the VS Code UI itself.
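The growth rate in the table can be checked mechanically; a stdlib sketch computing the empirical exponent `k` (where time ~ n^k) between consecutive rows. An exponent of 2 would mean exactly quadratic; here the tail hovers around 3, so growth is at least quadratic:

```python
import math

n = [10000, 20000, 30000, 40000, 50000, 60000]
t = [8.6, 14.1, 46.1, 94.6, 194.5, 344.3]

# Empirical exponent k such that t ~ n**k, between consecutive rows.
exponents = [
    math.log(t[i + 1] / t[i]) / math.log(n[i + 1] / n[i])
    for i in range(len(n) - 1)
]
print([round(k, 2) for k in exponents])  # roughly [0.7, 2.9, 2.5, 3.2, 3.1]
```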
<b>Reproducer</b>
Workspace settings:
```
{
"python.testing.pytestEnabled": true,
"python.testing.cwd": "src/python",
}
```
Directory structure:
```
.
└── src
└── python
├── __pycache__
│ ├── test_example.cpython-311-pytest-7.4.3.pyc
│ └── test_sample.cpython-311-pytest-7.4.3.pyc
├── test_example.py
└── test_sample.py
```
`test_example.py` contains:
```
import pytest

count = 10000

@pytest.mark.parametrize("x", [i for i in range(count)])
def test_parametrize(x):
    pass
```
`test_sample.py` contains:
```
def test_a():
    pass
```
Steps to reproduce:
1. Click on refresh tests icon.
2. Once the test tree loads and `test_a` is present, click on the run test icon.
VS Code version: Code 1.93.0 (4849ca9bdf9666755eb463db297b69e5385090e3, 2024-09-04T13:02:38.431Z)
OS version: Windows_NT x64 10.0.19045
Modes:
Remote OS version: Linux x64 4.18.0-553.16.1.el8_10.x86_64
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|12th Gen Intel(R) Core(TM) i9-12900 (24 x 2419)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|31.70GB (17.77GB free)|
|Process Argv||
|Screen Reader|no|
|VM|0%|
|Item|Value|
|---|---|
|Remote|SSH: dev-hyd|
|OS|Linux x64 4.18.0-553.16.1.el8_10.x86_64|
|CPUs|Intel(R) Xeon(R) Gold 6142 CPU @ 2.60GHz (32 x 3388)|
|Memory (System)|376.07GB (135.51GB free)|
|VM|0%|
</details><details><summary>Extensions (7)</summary>
Extension|Author (truncated)|Version
---|---|---
remote-ssh|ms-|0.115.0
remote-ssh-edit|ms-|0.87.0
remote-explorer|ms-|0.4.3
black-formatter|ms-|2024.4.0
debugpy|ms-|2024.12.0
python|ms-|2024.14.1
vscode-pylance|ms-|2024.10.1
</details>
<!-- generated by issue reporter --> | bug,perf,testing,python | low | Critical |
2,587,877,440 | pytorch | Flight recorder cannot mark the state of the batch P2P OP entry as completed. | ### 🐛 Describe the bug
We run `torchrun --standalone --nproc-per-node=8 test.py` on a single node. After `batch_isend_irecv` completes, we dump the entries and find that the states of `nccl:send` and `nccl:recv` are still "scheduled". The reason may be that the following code in ProcessGroupNCCL.cpp does not set `ncclEndEvent_` for batch P2P ops.
https://github.com/pytorch/pytorch/blob/56cc22eb01639ebd1ca3bbe8ba0381cd8fdbcff8/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L3213-L3232
Code to reproduce the issue:
```Python
import os
import time
import pickle
import torch
import torch.distributed as dist
os.environ["TORCH_NCCL_TRACE_BUFFER_SIZE"] = "200"
def dump_nccl_trace(rank):
    trace = torch._C._distributed_c10d._dump_nccl_trace(
        includeStackTraces=False,
        onlyActive=False,
    )
    trace = pickle.loads(trace)
    print(f"Rank = {rank}, trace = {trace}")

def test_batch_p2p():
    rank = int(os.getenv("RANK", "0"))
    local_rank = int(os.getenv("LOCAL_RANK", "0"))
    world_size = int(os.getenv("WORLD_SIZE", "1"))

    send_tensor = torch.arange(2, dtype=torch.float32) + 2 * rank
    recv_tensor = torch.randn(2, dtype=torch.float32)
    send_tensor = send_tensor.to(local_rank)
    recv_tensor = recv_tensor.to(local_rank)

    send_op = dist.P2POp(dist.isend, send_tensor, (rank + 1) % world_size)
    recv_op = dist.P2POp(dist.irecv, recv_tensor, (rank - 1 + world_size) % world_size)
    reqs = dist.batch_isend_irecv([send_op, recv_op])
    for req in reqs:
        req.wait()
    torch.cuda.synchronize()

    print(f"Rank = {rank}, tensor = {recv_tensor}")
    dump_nccl_trace(rank)

if __name__ == "__main__":
    dist.init_process_group(backend="nccl")
    test_batch_p2p()
```
The flight recorder trace:
```
{
"version": "2.1",
"pg_config": {
"0": {
"name": "0",
"desc": "default_pg",
"ranks": "[0, 1, 2, 3, 4, 5, 6, 7]"
}
},
"entries": [
{
"record_id": 0,
"pg_id": 0,
"process_group": [
"0",
"default_pg"
],
"collective_seq_id": 1,
"p2p_seq_id": 0,
"op_id": 1,
"profiling_name": "nccl:send 0->1",
"time_created_ns": 1728976538249745700,
"input_sizes": [
[
2
]
],
"input_dtypes": [
"Float"
],
"output_sizes": [
[
2
]
],
"output_dtypes": [
"Float"
],
"state": "scheduled",
"time_discovered_started_ns": null,
"time_discovered_completed_ns": null,
"retired": false,
"is_p2p": true
},
{
"record_id": 1,
"pg_id": 0,
"process_group": [
"0",
"default_pg"
],
"collective_seq_id": 1,
"p2p_seq_id": 0,
"op_id": 2,
"profiling_name": "nccl:recv 0<-7",
"time_created_ns": 1728976538250029300,
"input_sizes": [
[
2
]
],
"input_dtypes": [
"Float"
],
"output_sizes": [
[
2
]
],
"output_dtypes": [
"Float"
],
"state": "scheduled",
"time_discovered_started_ns": null,
"time_discovered_completed_ns": null,
"retired": false,
"is_p2p": true
},
{
"record_id": 2,
"pg_id": 0,
"process_group": [
"0",
"default_pg"
],
"collective_seq_id": 1,
"p2p_seq_id": 0,
"op_id": 2,
"profiling_name": "nccl:coalesced",
"time_created_ns": 1728976538250107000,
"input_sizes": [],
"input_dtypes": [],
"output_sizes": [],
"output_dtypes": [],
"state": "completed",
"time_discovered_started_ns": 1728976541309922000,
"time_discovered_completed_ns": 1728976541309923600,
"retired": false,
"is_p2p": false
}
]
}
```
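For triage, the decoded dump can be scanned for entries that never reached `completed` (in the trace above, the two P2P ops would be flagged while the `nccl:coalesced` entry would not); a small sketch over the entry structure shown:

```python
def incomplete_entries(trace: dict) -> list[dict]:
    # Flag entries whose state is anything other than "completed".
    return [e for e in trace.get("entries", []) if e.get("state") != "completed"]

# Trimmed-down version of the dump above:
trace = {
    "entries": [
        {"profiling_name": "nccl:send 0->1", "state": "scheduled"},
        {"profiling_name": "nccl:recv 0<-7", "state": "scheduled"},
        {"profiling_name": "nccl:coalesced", "state": "completed"},
    ]
}
print([e["profiling_name"] for e in incomplete_entries(trace)])
# ['nccl:send 0->1', 'nccl:recv 0<-7']
```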
### Versions
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] onnx==1.16.2
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0
[pip3] torchlibrosa==0.1.0
[pip3] torchvision==0.19.0
[pip3] triton==3.0.0
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged | low | Critical |
2,587,881,173 | godot | Advanced audio editor for loops/beat setup broken/different for .ogg and .mp3s, ogg file freezes editor with offset change (not ignored as editor claims in tooltip) | ### Tested versions
- Reproducible in 4.3
### System information
Windows 10 - Godot 4.3
### Issue description
Strange behavior with Ogg and MP3 BPM looping and Offset together. The offset control's tooltip claims the offset is ignored when BPM is also set. However, the two do work together in the editor, in runtime tests, and in exported .exe builds. If the offset is set past the loop end with BPM checked, an MP3 tries to play and buzzes; it can simply be stopped or started again. Oggs freeze whatever .exe instantiated the playback.
I also just noticed while experimenting that the oggs sometimes don't hang whatever instantiates them, instead making playback impossible in the AudioStreamPlayer being used. However, this leads to instability in the "instantiator" that will eventually freeze it.
### Steps to reproduce
In the editor, when you double click an .ogg or .mp3 file, or click on the Advanced button in the Import tab, it opens the dialog for editing loops, bpm, beats, and bars.
If you hover over the offset control, its tooltip claims the offset is ignored if the BPM is set. This is not true. If you set it for MP3s, it works just fine in the dialog and at runtime: when the clip end or beat count (BPM loop end) is reached, and the offset is before that point, the loop restarts at that offset; otherwise it tries to play past the end of the loop but cannot, and just buzzes.
If you do the same with .ogg files, it works the same way unless you set up the latter case described above (the offset after the loop end); it then hangs the Godot editor, probably forever, until you end the task via Task Manager or Windows' "not responding" dialog. This behavior is the same for the editor dialog and for runtime testing in the editor, and it will also freeze the exported standalone .exe in the same way with oggs.
### Minimal reproduction project (MRP)
N/A | bug,topic:editor,topic:audio,topic:import | low | Critical |
2,587,915,021 | godot | Does a 1 tile region in the tilemap node span the entire region? | ### Tested versions
v3.6.stable.official [de2f0f147]
### System information
w10 64
### Issue description
Watch the video: when I set the region to a single tile, the wooden box graphic still draws completely, even though the region only has one tile selected. Where are the other tiles being obtained from to draw the entire box?
The correct behavior would be for the entire wooden box to be drawn only if the region spans the required tiles.
The bug:
If the atlas/region only has one tile, how is it possible that the entire box is drawn?
Expected behavior:
If the atlas/region only has one tile, only one tile of the box should be drawn; however, the box is currently drawn completely and correctly.
https://github.com/user-attachments/assets/ae3be475-f524-4d47-8945-4db016647799
### Steps to reproduce
See the video
### Minimal reproduction project (MRP)
... | topic:editor,topic:2d | low | Critical |
2,587,919,095 | flutter | LogicalKeySet Not Working on Linux Environment | ### Steps to reproduce
1. Create a new Flutter project.
2. Implement the Shortcuts widget with a specific key combination.
3. Test the functionality on a Windows machine (works as expected).
4. Test the same functionality on a Linux machine (does not work).
### Expected results
The shortcuts should trigger the associated actions on both Windows and Linux environments.
### Actual results
The shortcuts only trigger the actions on Windows. On Linux, there is no response when the key combination is pressed.
### Code sample
The official demo of shortcuts is enough to demonstrate this:
https://dartpad.dev/?embed=true&split=60&run=true&sample_id=widgets.Shortcuts.2&channel=stable
### Screenshots or Video
<details open>
<summary>Screenshots / Video demonstration</summary>
https://github.com/user-attachments/assets/df74011c-878d-4d09-a61a-f04a16cb286f
</details>
### Logs
<details open><summary>Logs</summary>
```console
❯ flutter run
Launching lib/main.dart on Linux in debug mode...
Building Linux application...
✓ Built build/linux/x64/debug/bundle/shortcut_demo
** (shortcut_demo:106985): CRITICAL **: 15:36:29.180: Failed to read XDG desktop portal settings: GDBus.Error:org.freedesktop.portal.Error.NotFound: Requested setting not found
** (shortcut_demo:106985): CRITICAL **: 15:36:29.180: Failed to read XDG desktop portal settings: GDBus.Error:org.freedesktop.portal.Error.NotFound: Requested setting not found
** (shortcut_demo:106985): CRITICAL **: 15:36:29.181: Failed to read XDG desktop portal settings: GDBus.Error:org.freedesktop.portal.Error.NotFound: Requested setting not found
** (shortcut_demo:106985): CRITICAL **: 15:36:29.181: Failed to read XDG desktop portal settings: GDBus.Error:org.freedesktop.portal.Error.NotFound: Requested setting not found
** (shortcut_demo:106985): CRITICAL **: 15:36:29.181: Failed to read XDG desktop portal settings: GDBus.Error:org.freedesktop.portal.Error.NotFound: Requested setting not found
(shortcut_demo:106985): Atk-CRITICAL **: 15:36:29.181: atk_socket_embed: assertion 'plug_id != NULL' failed
Syncing files to device Linux... 50ms
Flutter run key commands.
r Hot reload. 🔥🔥🔥
R Hot restart.
h List all available interactive commands.
d Detach (terminate "flutter run" but leave application running).
c Clear the screen
q Quit (terminate the application on the device).
A Dart VM Service on Linux is available at: http://127.0.0.1:40701/TtwvhbbgA4U=/
The Flutter DevTools debugger and profiler on Linux is available at: http://127.0.0.1:9102?uri=http://127.0.0.1:40701/TtwvhbbgA4U=/
Lost connection to device.
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
❯ flutter doctor -v
[✓] Flutter (Channel stable, 3.24.1, on NixOS 24.05 (Uakari) 6.11.2, locale en_US.UTF-8)
• Flutter version 3.24.1 on channel stable at /nix/store/cpgyn3rn3a117nv4s1g7j5ajgbdkplby-flutter-wrapped-3.24.1-sdk-links
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision nixpkgs000 (), 1970-01-01 00:00:00
• Engine revision c9b9d5780d
• Dart version 3.5.1
• DevTools version 2.37.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /nix/store/jlm9zzh56f609sxlnzz78n57vj5kk2h9-android-sdk-env/share/android-sdk
• Platform android-34, build-tools 34.0.0
• ANDROID_HOME = /nix/store/jlm9zzh56f609sxlnzz78n57vj5kk2h9-android-sdk-env/share/android-sdk
• ANDROID_SDK_ROOT = /nix/store/jlm9zzh56f609sxlnzz78n57vj5kk2h9-android-sdk-env/share/android-sdk
• Java binary at: /nix/store/xy53lk4001h814d7dwh8f52wcqxrn7rp-openjdk-17.0.11+9/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+9-nixos)
• All Android licenses accepted.
[✗] Chrome - develop for the web (Cannot find Chrome executable at google-chrome)
! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable.
[✓] Linux toolchain - develop for Linux desktop
• clang version 18.1.8
• cmake version 3.29.6
• ninja version 1.12.1
• pkg-config version 0.29.2
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/to/linux-android-setup for detailed instructions).
[✓] Connected device (1 available)
• Linux (desktop) • linux • linux-x64 • NixOS 24.05 (Uakari) 6.11.2
[✓] Network resources
• All expected network resources are available.
! Doctor found issues in 2 categories.
```
</details>
In addition, I'm using Fcitx 5.
| d: api docs,a: desktop,has reproducible steps,P3,team-linux,found in release: 3.24,found in release: 3.27 | low | Critical |
2,587,920,026 | pytorch | Inconsistent output for ConvTranspose3d on GPU | ### 🐛 Describe the bug
See the code snippet:
```python
import torch
from torch import nn
m = nn.ConvTranspose3d(32, 16, bias=False, kernel_size=(4, 4, 4), padding=(1, 1, 1), stride=(2, 2, 2))
input = torch.randn(1, 32, 32, 32, 10)
output1 = m(input)
output2 = m(input)
print('CPU version', (output1==output2).sum()/output1.numel())
m = m.cuda()
input = input.cuda()
output1 = m(input)
output2 = m(input)
print('GPU case 1', (output1==output2).sum()/output1.numel())
output1 = m(input)
output2 = m(input)
print('GPU case 2', (output1==output2).sum()/output1.numel())
```
with following result:
```
CPU version tensor(1.)
GPU case 1 tensor(0.7242, device='cuda:0')
GPU case 2 tensor(0.7333, device='cuda:0')
```
The output returned by the module on GPU is inconsistent with the CPU output, and also differs from run to run.
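Run-to-run differences on GPU (but not CPU) are consistent with a nondeterministic accumulation order in the GPU kernel (e.g. atomic adds or algorithm selection in cuDNN): floating-point addition is not associative, so summing the same terms in a different order yields slightly different results. A pure-Python illustration of the underlying effect (not the kernel itself):

```python
a = (0.1 + 0.2) + 0.3  # one accumulation order
b = (0.3 + 0.2) + 0.1  # same terms, different order
print(a, b, a == b)    # 0.6000000000000001 0.6 False
```

PyTorch's reproducibility notes suggest `torch.use_deterministic_algorithms(True)` to force order-stable kernels at some performance cost.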
### Versions
```
Collecting environment information...
PyTorch version: 2.4.0+cu118
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 9.3.1 20200408 (Red Hat 9.3.1-2)
Clang version: Could not collect
CMake version: version 2.8.12.2
Libc version: glibc-2.28
Python version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.163-1.el7.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3090
GPU 1: NVIDIA GeForce RTX 3090
GPU 2: NVIDIA GeForce RTX 3090
GPU 3: NVIDIA GeForce RTX 3090
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn.so.8.5.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.5.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.5.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.5.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.5.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.5.0
/usr/local/cuda-11.7/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.5.0
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn.so.8.9.7
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_infer.so.8.9.7
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_adv_train.so.8.9.7
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_infer.so.8.9.7
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_cnn_train.so.8.9.7
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_infer.so.8.9.7
/usr/local/cuda-11.8/targets/x86_64-linux/lib/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz
Stepping: 7
CPU MHz: 999.998
CPU max MHz: 3200.0000
CPU min MHz: 1000.0000
BogoMIPS: 4400.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 16896K
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities
Versions of relevant libraries:
[pip3] flake8==7.1.1
[pip3] numpy==1.26.3
[pip3] onnx==1.16.2
[pip3] onnxruntime-gpu==1.19.2
[pip3] onnxscript==0.1.0.dev20241011
[pip3] torch==2.4.0+cu118
[pip3] torchaudio==2.4.0+cu118
[pip3] torchsummary==1.5.1
[pip3] torchvision==0.19.0+cu118
[pip3] torchviz==0.0.2
[pip3] triton==3.0.0
[conda] numpy 1.26.3 pypi_0 pypi
[conda] torch 2.4.0+cu118 pypi_0 pypi
[conda] torchaudio 2.4.0+cu118 pypi_0 pypi
[conda] torchsummary 1.5.1 pypi_0 pypi
[conda] torchvision 0.19.0+cu118 pypi_0 pypi
[conda] torchviz 0.0.2 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi
```
cc @mruberry @kurtamohler | triaged,module: determinism | low | Critical |
2,587,942,951 | transformers | Image-Text-to-Text Support in Transformers Pipeline | ### Feature request
Implement a new pipeline that can take both an image and text as inputs and produce a text output. This would be particularly useful for multi-modal tasks such as visual question answering (VQA), image captioning, or image-based text generation.
```python
from transformers import pipeline

# Initialize the pipeline with a multi-modal model
multi_modal_pipeline = pipeline("image-text-to-text", model="meta-llama/Llama-3.2-11B-Vision-Instruct")

# Example usage
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "If I had to write a haiku for this one, it would be: "}
    ]}
]
result = multi_modal_pipeline(messages)
print(result)  # Should return an answer or relevant text based on the image and question
```
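Hugging Face pipelines are structured as three stages (`preprocess`, `_forward`, `postprocess`); a plain-Python stand-in sketching that shape for this task, without the real base class or model (all names and behaviors here are illustrative, not the final API):

```python
class ImageTextToTextPipeline:
    """Stand-in sketch; a real implementation would subclass transformers.Pipeline."""

    def __init__(self, model, processor):
        self.model = model          # stand-in for a vision-language model
        self.processor = processor  # stand-in for chat-template + image processor

    def preprocess(self, messages, images=None):
        # Real code would apply the chat template and encode images to tensors.
        return self.processor(messages, images)

    def _forward(self, model_inputs):
        return self.model(model_inputs)

    def postprocess(self, model_outputs):
        return [{"generated_text": model_outputs}]

    def __call__(self, messages, images=None):
        return self.postprocess(self._forward(self.preprocess(messages, images)))

# Toy usage with dummy callables standing in for the model/processor:
pipe = ImageTextToTextPipeline(
    model=lambda inputs: f"haiku about {inputs}",
    processor=lambda messages, images: "a sunset photo",
)
print(pipe([{"role": "user", "content": "..."}]))
# [{'generated_text': 'haiku about a sunset photo'}]
```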
### Motivation
- Simplifies workflows involving multi-modal data.
- Enables more complex and realistic tasks to be handled with existing Transformer models.
- Encourages more multi-modal model usage in research and production.
### Your contribution
**Transformers Integration**
Ensure that the pipeline works well within the Hugging Face Transformers library:
- Implement the custom pipeline class (`ImageTextToTextPipeline`).
- Add support for handling different data types (image, text) and ensure smooth forward pass execution.
```python
class ImageTextToTextPipeline(Pipeline):
    ...
``` | Feature request | low | Minor |
2,587,954,509 | PowerToys | KeePassXC is not recognised | ### Microsoft PowerToys version
0.85.1
### Installation method
PowerToys auto-update
### Running as admin
Yes
### Area(s) with issue?
Workspaces
### Steps to reproduce
1. Have [KeePassXC](https://keepassxc.org/) installed
2. Have KeePassXC open
3. Try to capture it with Workspaces
### ✔️ Expected Behavior
KeePassXC is captured and can be used in Workspaces
### ❌ Actual Behavior
KeePassXC is not recognised by Workspaces
### Other Software
Windows 10 Pro 22H2 | Issue-Bug,Needs-Triage,Product-Workspaces | low | Minor |
2,587,988,620 | vscode | Improvement to Minimap: Colour segment functions, better visualisation and location indicator | <!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ -->
<!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ -->
<!-- Please search existing issues to avoid creating duplicates. -->
<!-- Describe the feature you'd like. -->
The minimap is essential for navigation, especially for knowing your location within the code. Why not use colours to segment functions (only functions), like so? It makes the code much easier to read and digest.
Compare the left column to the right column:

**My argument is:** Just by glancing at the left, I know my location exactly within the code, and this gives me the following knowledge/abilities. For example:
1. I can see the file has 5 functions, and that I am at the beginning of the 4th function.
2. I can easily jump to the beginning of the current function, say to modify a parameter.
3. With a glance at the map, I can see the size of each function (from the segmentation), and I can tell exactly where I am and which part of the code I am looking at.
4. I can tell the file has two functions of almost the same size (could be overrides) and 3 other, larger main functions. So I can see the sizes of the functions and their count. Glancing at the map and the file name (the tab) helps me remember what this file does, etc. | feature-request,editor-minimap | low | Minor |
2,587,993,659 | pytorch | DISABLED test_ddp_update_process_group_new_group (__main__.TestDistBackendWithSpawn) | This test was disabled because it is failing in PR #137161 ([recent examples](https://hud.pytorch.org/pr/pytorch/pytorch/137161#31427529291)).
It has the `@skip_if_lt_x_gpu(4)` decorator, which means it was never tested by CI before the 2x M60 to 4x T4 upgrade.
cc @XilunWu @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged,skipped | low | Minor |
2,588,076,784 | tensorflow | Cannot pass $LOCAL_CUDNN_PATH as /usr | ### Issue type
Bug
### Have you reproduced the bug with TensorFlow Nightly?
Yes
### Source
source
### TensorFlow version
2.18.0.rc0
### Custom code
Yes
### OS platform and distribution
Linux Ubuntu 22.04
### Mobile device
_No response_
### Python version
3.10
### Bazel version
6.5.0
### GCC/compiler version
12
### CUDA/cuDNN version
8.9.0
### GPU model and memory
GTX 4090
### Current behavior?
If I pass `LOCAL_CUDNN_PATH` as /usr, just like in the docker image nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04, then TensorFlow's bazel setup creates a symlink external/cuda_cudnn/include that points at /usr/include.
TensorFlow then passes `-isystem external/cuda_cudnn/include` to the compiler.
After that, `#include_next` always skips the current directory external/cuda_cudnn/include, which is actually /usr/include, and bazel reports the error:
```
/usr/local/include/c++/12/cstdlib:75:15: fatal error: stdlib.h: No such file or directory
#include_next <stdlib.h>
^~~~~~~~~~
compilation terminated.
```
If I do not pass `LOCAL_CUDNN_PATH` and instead let TensorFlow download cuDNN itself, the build goes smoothly.
### Standalone code to reproduce the issue
```shell
git submodule sync
git submodule update --init --recursive
export _GLIBCXX_USE_CXX11_ABI=1
. /work/conda_init.sh \
&& conda activate py3 \
&& HERMETIC_CUDA_VERSION=12.1.0 HERMETIC_CUDNN_VERSION=8.9.4.25 HERMETIC_CUDA_COMPUTE_CAPABILITIES=9.0 TF_NVCC_CLANG=1 TF_NEED_TENSORRT=1 LOCAL_CUDNN_PATH=/usr LOCAL_CUDA_PATH=$CUDA_HOME TF_NEED_CUDA=1 CLANG_CUDA_COMPILER_PATH=/llvm_release_17_with_nvptx/bin/clang ./configure
. /work/conda_init.sh \
&& conda activate py3 \
&& export LOCAL_CUDA_PATH=$CUDA_HOME \
&& bazel build --config=opt --copt="-Iexternal/cuda_cudnn/include" --copt="-Wno-error=unused-command-line-argument" //tensorflow/tools/pip_package:wheel --repo_env=WHEEL_NAME=tensorflow --config=cuda --config=cuda_wheel --config=dbg --copt="-Wno-int-conversion" --copt="-Wno-error=extra-semi" --copt="-Wno-gnu-include-next" --copt="-Wno-error=c23-extensions" --copt="-Wno-error=overlength-strings" --copt="--gcc-install-dir=/usr/lib/gcc/x86_64-linux-gnu/12" --verbose_failures --subcommands
```
### Relevant log output
_No response_ | stat:awaiting tensorflower,type:build/install,subtype: ubuntu/linux,comp:core,2.18.rc | medium | Critical |
2,588,079,155 | opencv | DNNTestNetwork.FastNeuralStyle_eccv16 reduced for the new engine | ### System Information
Platform: any
Reference: https://github.com/opencv/opencv/pull/26056
### Detailed description
The test checks fallbacks to CPU and expects that this should happen with CUDA only. The new engine does not support non-CPU back-ends, so the fallback check is disabled for now.
### Steps to reproduce
-
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: dnn | low | Minor |
2,588,088,199 | flutter | [go_router_builder] Two consecutive pop operations trigger "Future already completed" error | ### What package does this bug report belong to?
go_router_builder
### What target platforms are you seeing this bug on?
Android, iOS
### Have you already upgraded your packages?
Yes
### Dependency versions
<details><summary>pubspec.lock</summary>
```lock
# Generated by pub
# See https://dart.dev/tools/pub/glossary#lockfile
packages:
_fe_analyzer_shared:
dependency: transitive
description:
name: _fe_analyzer_shared
sha256: f256b0c0ba6c7577c15e2e4e114755640a875e885099367bf6e012b19314c834
url: "https://pub.dev"
source: hosted
version: "72.0.0"
_macros:
dependency: transitive
description: dart
source: sdk
version: "0.3.2"
analyzer:
dependency: transitive
description:
name: analyzer
sha256: b652861553cd3990d8ed361f7979dc6d7053a9ac8843fa73820ab68ce5410139
url: "https://pub.dev"
source: hosted
version: "6.7.0"
args:
dependency: transitive
description:
name: args
sha256: bf9f5caeea8d8fe6721a9c358dd8a5c1947b27f1cfaa18b39c301273594919e6
url: "https://pub.dev"
source: hosted
version: "2.6.0"
async:
dependency: transitive
description:
name: async
sha256: "947bfcf187f74dbc5e146c9eb9c0f10c9f8b30743e341481c1e2ed3ecc18c20c"
url: "https://pub.dev"
source: hosted
version: "2.11.0"
boolean_selector:
dependency: transitive
description:
name: boolean_selector
sha256: "6cfb5af12253eaf2b368f07bacc5a80d1301a071c73360d746b7f2e32d762c66"
url: "https://pub.dev"
source: hosted
version: "2.1.1"
build:
dependency: transitive
description:
name: build
sha256: "80184af8b6cb3e5c1c4ec6d8544d27711700bc3e6d2efad04238c7b5290889f0"
url: "https://pub.dev"
source: hosted
version: "2.4.1"
build_config:
dependency: transitive
description:
name: build_config
sha256: bf80fcfb46a29945b423bd9aad884590fb1dc69b330a4d4700cac476af1708d1
url: "https://pub.dev"
source: hosted
version: "1.1.1"
build_daemon:
dependency: transitive
description:
name: build_daemon
sha256: "79b2aef6ac2ed00046867ed354c88778c9c0f029df8a20fe10b5436826721ef9"
url: "https://pub.dev"
source: hosted
version: "4.0.2"
build_resolvers:
dependency: transitive
description:
name: build_resolvers
sha256: "339086358431fa15d7eca8b6a36e5d783728cf025e559b834f4609a1fcfb7b0a"
url: "https://pub.dev"
source: hosted
version: "2.4.2"
build_runner:
dependency: "direct main"
description:
name: build_runner
sha256: "028819cfb90051c6b5440c7e574d1896f8037e3c96cf17aaeb054c9311cfbf4d"
url: "https://pub.dev"
source: hosted
version: "2.4.13"
build_runner_core:
dependency: transitive
description:
name: build_runner_core
sha256: f8126682b87a7282a339b871298cc12009cb67109cfa1614d6436fb0289193e0
url: "https://pub.dev"
source: hosted
version: "7.3.2"
built_collection:
dependency: transitive
description:
name: built_collection
sha256: "376e3dd27b51ea877c28d525560790aee2e6fbb5f20e2f85d5081027d94e2100"
url: "https://pub.dev"
source: hosted
version: "5.1.1"
built_value:
dependency: transitive
description:
name: built_value
sha256: c7913a9737ee4007efedaffc968c049fd0f3d0e49109e778edc10de9426005cb
url: "https://pub.dev"
source: hosted
version: "8.9.2"
characters:
dependency: transitive
description:
name: characters
sha256: "04a925763edad70e8443c99234dc3328f442e811f1d8fd1a72f1c8ad0f69a605"
url: "https://pub.dev"
source: hosted
version: "1.3.0"
checked_yaml:
dependency: transitive
description:
name: checked_yaml
sha256: feb6bed21949061731a7a75fc5d2aa727cf160b91af9a3e464c5e3a32e28b5ff
url: "https://pub.dev"
source: hosted
version: "2.0.3"
clock:
dependency: transitive
description:
name: clock
sha256: cb6d7f03e1de671e34607e909a7213e31d7752be4fb66a86d29fe1eb14bfb5cf
url: "https://pub.dev"
source: hosted
version: "1.1.1"
code_builder:
dependency: transitive
description:
name: code_builder
sha256: f692079e25e7869c14132d39f223f8eec9830eb76131925143b2129c4bb01b37
url: "https://pub.dev"
source: hosted
version: "4.10.0"
collection:
dependency: transitive
description:
name: collection
sha256: ee67cb0715911d28db6bf4af1026078bd6f0128b07a5f66fb2ed94ec6783c09a
url: "https://pub.dev"
source: hosted
version: "1.18.0"
convert:
dependency: transitive
description:
name: convert
sha256: "0f08b14755d163f6e2134cb58222dd25ea2a2ee8a195e53983d57c075324d592"
url: "https://pub.dev"
source: hosted
version: "3.1.1"
crypto:
dependency: transitive
description:
name: crypto
sha256: ec30d999af904f33454ba22ed9a86162b35e52b44ac4807d1d93c288041d7d27
url: "https://pub.dev"
source: hosted
version: "3.0.5"
cupertino_icons:
dependency: "direct main"
description:
name: cupertino_icons
sha256: ba631d1c7f7bef6b729a622b7b752645a2d076dba9976925b8f25725a30e1ee6
url: "https://pub.dev"
source: hosted
version: "1.0.8"
dart_style:
dependency: transitive
description:
name: dart_style
sha256: "7856d364b589d1f08986e140938578ed36ed948581fbc3bc9aef1805039ac5ab"
url: "https://pub.dev"
source: hosted
version: "2.3.7"
fake_async:
dependency: transitive
description:
name: fake_async
sha256: "511392330127add0b769b75a987850d136345d9227c6b94c96a04cf4a391bf78"
url: "https://pub.dev"
source: hosted
version: "1.3.1"
file:
dependency: transitive
description:
name: file
sha256: a3b4f84adafef897088c160faf7dfffb7696046cb13ae90b508c2cbc95d3b8d4
url: "https://pub.dev"
source: hosted
version: "7.0.1"
fixnum:
dependency: transitive
description:
name: fixnum
sha256: "25517a4deb0c03aa0f32fd12db525856438902d9c16536311e76cdc57b31d7d1"
url: "https://pub.dev"
source: hosted
version: "1.1.0"
flutter:
dependency: "direct main"
description: flutter
source: sdk
version: "0.0.0"
flutter_lints:
dependency: "direct dev"
description:
name: flutter_lints
sha256: "3f41d009ba7172d5ff9be5f6e6e6abb4300e263aab8866d2a0842ed2a70f8f0c"
url: "https://pub.dev"
source: hosted
version: "4.0.0"
flutter_test:
dependency: "direct dev"
description: flutter
source: sdk
version: "0.0.0"
flutter_web_plugins:
dependency: transitive
description: flutter
source: sdk
version: "0.0.0"
frontend_server_client:
dependency: transitive
description:
name: frontend_server_client
sha256: f64a0333a82f30b0cca061bc3d143813a486dc086b574bfb233b7c1372427694
url: "https://pub.dev"
source: hosted
version: "4.0.0"
glob:
dependency: transitive
description:
name: glob
sha256: "0e7014b3b7d4dac1ca4d6114f82bf1782ee86745b9b42a92c9289c23d8a0ab63"
url: "https://pub.dev"
source: hosted
version: "2.1.2"
go_router:
dependency: "direct main"
description:
name: go_router
sha256: "6f1b756f6e863259a99135ff3c95026c3cdca17d10ebef2bba2261a25ddc8bbc"
url: "https://pub.dev"
source: hosted
version: "14.3.0"
go_router_builder:
dependency: "direct main"
description:
name: go_router_builder
sha256: "3425b72dea69209754ac6b71b4da34165dcd4d4a2934713029945709a246427a"
url: "https://pub.dev"
source: hosted
version: "2.7.1"
graphs:
dependency: transitive
description:
name: graphs
sha256: "741bbf84165310a68ff28fe9e727332eef1407342fca52759cb21ad8177bb8d0"
url: "https://pub.dev"
source: hosted
version: "2.3.2"
http_multi_server:
dependency: transitive
description:
name: http_multi_server
sha256: "97486f20f9c2f7be8f514851703d0119c3596d14ea63227af6f7a481ef2b2f8b"
url: "https://pub.dev"
source: hosted
version: "3.2.1"
http_parser:
dependency: transitive
description:
name: http_parser
sha256: "2aa08ce0341cc9b354a498388e30986515406668dbcc4f7c950c3e715496693b"
url: "https://pub.dev"
source: hosted
version: "4.0.2"
io:
dependency: transitive
description:
name: io
sha256: "2ec25704aba361659e10e3e5f5d672068d332fc8ac516421d483a11e5cbd061e"
url: "https://pub.dev"
source: hosted
version: "1.0.4"
js:
dependency: transitive
description:
name: js
sha256: c1b2e9b5ea78c45e1a0788d29606ba27dc5f71f019f32ca5140f61ef071838cf
url: "https://pub.dev"
source: hosted
version: "0.7.1"
json_annotation:
dependency: transitive
description:
name: json_annotation
sha256: "1ce844379ca14835a50d2f019a3099f419082cfdd231cd86a142af94dd5c6bb1"
url: "https://pub.dev"
source: hosted
version: "4.9.0"
leak_tracker:
dependency: transitive
description:
name: leak_tracker
sha256: "3f87a60e8c63aecc975dda1ceedbc8f24de75f09e4856ea27daf8958f2f0ce05"
url: "https://pub.dev"
source: hosted
version: "10.0.5"
leak_tracker_flutter_testing:
dependency: transitive
description:
name: leak_tracker_flutter_testing
sha256: "932549fb305594d82d7183ecd9fa93463e9914e1b67cacc34bc40906594a1806"
url: "https://pub.dev"
source: hosted
version: "3.0.5"
leak_tracker_testing:
dependency: transitive
description:
name: leak_tracker_testing
sha256: "6ba465d5d76e67ddf503e1161d1f4a6bc42306f9d66ca1e8f079a47290fb06d3"
url: "https://pub.dev"
source: hosted
version: "3.0.1"
lints:
dependency: transitive
description:
name: lints
sha256: "976c774dd944a42e83e2467f4cc670daef7eed6295b10b36ae8c85bcbf828235"
url: "https://pub.dev"
source: hosted
version: "4.0.0"
logging:
dependency: transitive
description:
name: logging
sha256: "623a88c9594aa774443aa3eb2d41807a48486b5613e67599fb4c41c0ad47c340"
url: "https://pub.dev"
source: hosted
version: "1.2.0"
macros:
dependency: transitive
description:
name: macros
sha256: "0acaed5d6b7eab89f63350bccd82119e6c602df0f391260d0e32b5e23db79536"
url: "https://pub.dev"
source: hosted
version: "0.1.2-main.4"
matcher:
dependency: transitive
description:
name: matcher
sha256: d2323aa2060500f906aa31a895b4030b6da3ebdcc5619d14ce1aada65cd161cb
url: "https://pub.dev"
source: hosted
version: "0.12.16+1"
material_color_utilities:
dependency: transitive
description:
name: material_color_utilities
sha256: f7142bb1154231d7ea5f96bc7bde4bda2a0945d2806bb11670e30b850d56bdec
url: "https://pub.dev"
source: hosted
version: "0.11.1"
meta:
dependency: transitive
description:
name: meta
sha256: bdb68674043280c3428e9ec998512fb681678676b3c54e773629ffe74419f8c7
url: "https://pub.dev"
source: hosted
version: "1.15.0"
mime:
dependency: transitive
description:
name: mime
sha256: "41a20518f0cb1256669420fdba0cd90d21561e560ac240f26ef8322e45bb7ed6"
url: "https://pub.dev"
source: hosted
version: "2.0.0"
package_config:
dependency: transitive
description:
name: package_config
sha256: "1c5b77ccc91e4823a5af61ee74e6b972db1ef98c2ff5a18d3161c982a55448bd"
url: "https://pub.dev"
source: hosted
version: "2.1.0"
path:
dependency: transitive
description:
name: path
sha256: "087ce49c3f0dc39180befefc60fdb4acd8f8620e5682fe2476afd0b3688bb4af"
url: "https://pub.dev"
source: hosted
version: "1.9.0"
pool:
dependency: transitive
description:
name: pool
sha256: "20fe868b6314b322ea036ba325e6fc0711a22948856475e2c2b6306e8ab39c2a"
url: "https://pub.dev"
source: hosted
version: "1.5.1"
pub_semver:
dependency: transitive
description:
name: pub_semver
sha256: "40d3ab1bbd474c4c2328c91e3a7df8c6dd629b79ece4c4bd04bee496a224fb0c"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
pubspec_parse:
dependency: transitive
description:
name: pubspec_parse
sha256: c799b721d79eb6ee6fa56f00c04b472dcd44a30d258fac2174a6ec57302678f8
url: "https://pub.dev"
source: hosted
version: "1.3.0"
shelf:
dependency: transitive
description:
name: shelf
sha256: ad29c505aee705f41a4d8963641f91ac4cee3c8fad5947e033390a7bd8180fa4
url: "https://pub.dev"
source: hosted
version: "1.4.1"
shelf_web_socket:
dependency: transitive
description:
name: shelf_web_socket
sha256: "073c147238594ecd0d193f3456a5fe91c4b0abbcc68bf5cd95b36c4e194ac611"
url: "https://pub.dev"
source: hosted
version: "2.0.0"
sky_engine:
dependency: transitive
description: flutter
source: sdk
version: "0.0.99"
source_gen:
dependency: transitive
description:
name: source_gen
sha256: "14658ba5f669685cd3d63701d01b31ea748310f7ab854e471962670abcf57832"
url: "https://pub.dev"
source: hosted
version: "1.5.0"
source_helper:
dependency: transitive
description:
name: source_helper
sha256: "6adebc0006c37dd63fe05bca0a929b99f06402fc95aa35bf36d67f5c06de01fd"
url: "https://pub.dev"
source: hosted
version: "1.3.4"
source_span:
dependency: transitive
description:
name: source_span
sha256: "53e943d4206a5e30df338fd4c6e7a077e02254531b138a15aec3bd143c1a8b3c"
url: "https://pub.dev"
source: hosted
version: "1.10.0"
stack_trace:
dependency: transitive
description:
name: stack_trace
sha256: "73713990125a6d93122541237550ee3352a2d84baad52d375a4cad2eb9b7ce0b"
url: "https://pub.dev"
source: hosted
version: "1.11.1"
stream_channel:
dependency: transitive
description:
name: stream_channel
sha256: ba2aa5d8cc609d96bbb2899c28934f9e1af5cddbd60a827822ea467161eb54e7
url: "https://pub.dev"
source: hosted
version: "2.1.2"
stream_transform:
dependency: transitive
description:
name: stream_transform
sha256: "14a00e794c7c11aa145a170587321aedce29769c08d7f58b1d141da75e3b1c6f"
url: "https://pub.dev"
source: hosted
version: "2.1.0"
string_scanner:
dependency: transitive
description:
name: string_scanner
sha256: "556692adab6cfa87322a115640c11f13cb77b3f076ddcc5d6ae3c20242bedcde"
url: "https://pub.dev"
source: hosted
version: "1.2.0"
term_glyph:
dependency: transitive
description:
name: term_glyph
sha256: a29248a84fbb7c79282b40b8c72a1209db169a2e0542bce341da992fe1bc7e84
url: "https://pub.dev"
source: hosted
version: "1.2.1"
test_api:
dependency: transitive
description:
name: test_api
sha256: "5b8a98dafc4d5c4c9c72d8b31ab2b23fc13422348d2997120294d3bac86b4ddb"
url: "https://pub.dev"
source: hosted
version: "0.7.2"
timing:
dependency: transitive
description:
name: timing
sha256: "70a3b636575d4163c477e6de42f247a23b315ae20e86442bebe32d3cabf61c32"
url: "https://pub.dev"
source: hosted
version: "1.0.1"
typed_data:
dependency: transitive
description:
name: typed_data
sha256: facc8d6582f16042dd49f2463ff1bd6e2c9ef9f3d5da3d9b087e244a7b564b3c
url: "https://pub.dev"
source: hosted
version: "1.3.2"
vector_math:
dependency: transitive
description:
name: vector_math
sha256: "80b3257d1492ce4d091729e3a67a60407d227c27241d6927be0130c98e741803"
url: "https://pub.dev"
source: hosted
version: "2.1.4"
vm_service:
dependency: transitive
description:
name: vm_service
sha256: f652077d0bdf60abe4c1f6377448e8655008eef28f128bc023f7b5e8dfeb48fc
url: "https://pub.dev"
source: hosted
version: "14.2.4"
watcher:
dependency: transitive
description:
name: watcher
sha256: "3d2ad6751b3c16cf07c7fca317a1413b3f26530319181b37e3b9039b84fc01d8"
url: "https://pub.dev"
source: hosted
version: "1.1.0"
web:
dependency: transitive
description:
name: web
sha256: cd3543bd5798f6ad290ea73d210f423502e71900302dde696f8bff84bf89a1cb
url: "https://pub.dev"
source: hosted
version: "1.1.0"
web_socket:
dependency: transitive
description:
name: web_socket
sha256: "3c12d96c0c9a4eec095246debcea7b86c0324f22df69893d538fcc6f1b8cce83"
url: "https://pub.dev"
source: hosted
version: "0.1.6"
web_socket_channel:
dependency: transitive
description:
name: web_socket_channel
sha256: "9f187088ed104edd8662ca07af4b124465893caf063ba29758f97af57e61da8f"
url: "https://pub.dev"
source: hosted
version: "3.0.1"
yaml:
dependency: transitive
description:
name: yaml
sha256: "75769501ea3489fca56601ff33454fe45507ea3bfb014161abc3b43ae25989d5"
url: "https://pub.dev"
source: hosted
version: "3.1.2"
sdks:
dart: ">=3.5.0 <4.0.0"
flutter: ">=3.19.0"
```
</details>
### Steps to reproduce
1. Tap `Push Second Screen`
2. Tap `Push Third Screen`
3. Tap `Pop`
### Expected results
The app should navigate back to the second screen, and then the first screen.
### Actual results
The app navigates back to the second screen, but a `Future already completed` error occurs. The app doesn't navigate back to the first screen.
This error has occurred since `go_router` 14.0.0; before that, it worked fine.
```
E/flutter ( 6268): [ERROR:flutter/runtime/dart_vm_initializer.cc(41)] Unhandled Exception: Bad state: Future already completed
E/flutter ( 6268): #0 _AsyncCompleter.complete (dart:async/future_impl.dart:43:31)
E/flutter ( 6268): #1 ImperativeRouteMatch.complete (package:go_router/src/match.dart:456:15)
E/flutter ( 6268): #2 GoRouterDelegate._completeRouteMatch (package:go_router/src/delegate.dart:162:14)
E/flutter ( 6268): #3 GoRouterDelegate._handlePopPageWithRouteMatch.<anonymous closure> (package:go_router/src/delegate.dart:150:9)
E/flutter ( 6268): <asynchronous suspension>
E/flutter ( 6268):
```
### Code sample
<details open><summary>Code sample</summary>
```dart
import 'package:flutter/material.dart';
import 'package:go_router/go_router.dart';
part 'main.g.dart';
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return MaterialApp.router(
routerConfig: GoRouter(routes: $appRoutes),
);
}
}
class FirstScreen extends StatelessWidget {
const FirstScreen({super.key});
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text('First Screen'),
),
body: Center(
child: TextButton(
onPressed: () async {
final result = await SecondScreenRoute().push<String>(context);
print('result from second screen: $result');
},
child: const Text('Push Second Screen'),
),
),
);
}
}
class SecondScreen extends StatelessWidget {
const SecondScreen({super.key});
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text('Second Screen'),
),
body: Center(
child: TextButton(
onPressed: () async {
final result = await ThirdScreenRoute().push<String>(context);
print('result from third screen: $result');
if (context.mounted) {
context.pop(result);
}
},
child: const Text('Push Third Screen'),
),
),
);
}
}
class ThirdScreen extends StatelessWidget {
const ThirdScreen({super.key});
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: const Text('Third Screen'),
),
body: Center(
child: TextButton(
onPressed: () {
context.pop('popped from third screen');
},
child: const Text('Pop'),
),
),
);
}
}
@TypedGoRoute<FirstScreenRoute>(
path: '/',
routes: [
TypedGoRoute<SecondScreenRoute>(
path: 'second',
routes: [
TypedGoRoute<ThirdScreenRoute>(
path: 'third',
),
],
),
],
)
class FirstScreenRoute extends GoRouteData {
@override
Widget build(BuildContext context, GoRouterState state) {
return const FirstScreen();
}
}
class SecondScreenRoute extends GoRouteData {
@override
Widget build(BuildContext context, GoRouterState state) {
return const SecondScreen();
}
}
class ThirdScreenRoute extends GoRouteData {
@override
Widget build(BuildContext context, GoRouterState state) {
return const ThirdScreen();
}
}
```
</details>
### Flutter Doctor output
<details open><summary>Doctor output</summary>
```console
[✓] Flutter (Channel stable, 3.24.0, on macOS 14.5 23F79 darwin-arm64, locale ja-JP)
• Flutter version 3.24.0 on channel stable at /Users/hibix/fvm/versions/3.24.0
• Upstream repository https://github.com/flutter/flutter.git
• Framework revision 80c2e84975 (3 months ago), 2024-07-30 23:06:49 +0700
• Engine revision b8800d88be
• Dart version 3.5.0
• DevTools version 2.37.2
[✓] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
• Android SDK at /Users/hibix/Library/Android/sdk
• Platform android-34, build-tools 34.0.0
• Java binary at: /Applications/Android Studio.app/Contents/jbr/Contents/Home/bin/java
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
• All Android licenses accepted.
[✓] Xcode - develop for iOS and macOS (Xcode 15.4)
• Xcode at /Applications/Xcode-15.4.0.app/Contents/Developer
• Build 15F31d
• CocoaPods version 1.15.2
[✓] Chrome - develop for the web
• Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
[✓] Android Studio (version 2024.1)
• Android Studio at /Applications/Android Studio.app/Contents
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 17.0.11+0-17.0.11b1207.24-11852314)
[✓] VS Code (version 1.94.1)
• VS Code at /Applications/Visual Studio Code.app/Contents
• Flutter extension version 3.98.0
[✓] Connected device (5 available)
• sdk gphone64 arm64 (mobile) • emulator-5554 • android-arm64 • Android 13 (API 33) (emulator)
• iPhone12mini (mobile) • 00008101-001C5D880AD0001E • ios • iOS 17.6.1 21G93
• macOS (desktop) • macos • darwin-arm64 • macOS 14.5 23F79 darwin-arm64
• Mac Designed for iPad (desktop) • mac-designed-for-ipad • darwin • macOS 14.5 23F79 darwin-arm64
• Chrome (web) • chrome • web-javascript • Google Chrome 129.0.6668.91
[✓] Network resources
• All expected network resources are available.
• No issues found!
```
</details>
| c: crash,package,has reproducible steps,P2,p: go_router_builder,team-go_router,triaged-go_router,found in release: 3.24,found in release: 3.27 | low | Critical |
2,588,132,841 | pytorch | Can FlightRecorder set the stream key of collective and P2P OP into the entry. | ### 🚀 The feature, motivation and pitch
Flight recorder can dump uncompleted NCCL ops from the trace buffer if distributed training hangs. In ProcessGroupNCCL.cpp, collective ops and batched P2P ops are launched on the same stream, while single P2P ops are launched on another stream. If each FlightRecorder entry carried its stream key, we could select the first stuck op on each stream. Only the first stuck op on each stream needs to be analyzed to diagnose the hang, because the other ops on the same stream are blocked behind it.
https://github.com/pytorch/pytorch/blob/b7be4b1e4803351d05d5ed5b219532f7dc5daaa7/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L3161-L3170
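A sketch of the triage this feature would enable, assuming each dumped entry carried a hypothetical `stream_id` field next to its sequence number and completion state (field names are illustrative, not the actual FlightRecorder schema):

```python
def first_stuck_per_stream(entries):
    """Return the earliest uncompleted op on each stream.

    Ops behind the first stuck op on a stream are blocked by it,
    so only one entry per stream needs to be analyzed.
    """
    stuck = {}
    for e in sorted(entries, key=lambda e: e["seq_id"]):
        if not e["completed"] and e["stream_id"] not in stuck:
            stuck[e["stream_id"]] = e
    return stuck

# Example dump: stream 0 carries collectives, stream 1 a single P2P op.
entries = [
    {"seq_id": 1, "stream_id": 0, "op": "allreduce", "completed": True},
    {"seq_id": 2, "stream_id": 0, "op": "allreduce", "completed": False},
    {"seq_id": 3, "stream_id": 1, "op": "send", "completed": False},
    {"seq_id": 4, "stream_id": 0, "op": "broadcast", "completed": False},
]
print(first_stuck_per_stream(entries))
```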
### Alternatives
_No response_
### Additional context
_No response_
cc @XilunWu @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o | oncall: distributed,triaged | low | Major |
2,588,221,542 | react | [DevTools Bug] Cannot remove node "820" because no matching node was found in the Store. | ### Website or app
fix
### Repro steps
fix
### How often does this bug happen?
Often
### DevTools package (automated)
react-devtools-extensions
### DevTools version (automated)
6.0.0-d66fa02a30
### Error message (automated)
Cannot remove node "820" because no matching node was found in the Store.
### Error call stack (automated)
```text
at chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1173591
at v.emit (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1141200)
at chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1142807
at bridgeListener (chrome-extension://fmkadmapgofadopljbjfkapdkoienihi/build/main.js:1:1552441)
```
### Error component stack (automated)
```text
fix
```
### GitHub query string (automated)
```text
https://api.github.com/search/issues?q=Cannot remove node because no matching node was found in the Store. in:title is:issue is:open is:public label:"Component: Developer Tools" repo:facebook/react
```
| Type: Bug,Status: Unconfirmed,Component: Developer Tools | medium | Critical |
2,588,228,016 | react | Bug: Quickly selecting checkboxes on iOS doesn't work correctly | React version: 18.0.0 and 17.0.2
## Steps To Reproduce
1. Open https://4txdx2.csb.app/ (React 18) in a device running iOS in Safari / Chrome.
2. Quickly tap one checkbox followed by another one.
3. Notice that if you quickly select one checkbox followed by another one, it checks / unchecks the previous checkbox you were on.
Link to code example:
React 18.0.0
[Sandbox](https://codesandbox.io/p/sandbox/react-18-checkbox-issue-4txdx2), [Preview](https://4txdx2.csb.app/)
React 17.0.2
[Sandbox](https://codesandbox.io/p/sandbox/react-17-checkbox-issue-yqgs6d), [Preview](https://yqgs6d.csb.app/)
React 16.4.0
[Sandbox](https://codesandbox.io/p/sandbox/react-16-checkbox-issue-jzsk96), [Preview](https://jzsk96.csb.app/)
## The current behavior
There appears to be some sort of race condition where tapping on a controlled / uncontrolled `<input type="checkbox">` quickly after tapping another checkbox updates the original checkbox you tapped rather than the checkbox that was just tapped. It appears to be a timing issue - if you wait long enough between taps, the events are fired on the correct elements.
This works in React 16 but not React 17 and 18.
## The expected behavior
The checkbox state should reflect the checked state of the checkbox you tapped, not the previous checkbox you tapped. | Status: Unconfirmed | medium | Critical |
2,588,248,960 | PowerToys | Instant Language Flip Feature - for wrong keyboards selected while typing | ### Description of the new feature / enhancement
The Instant Language Flip feature allows users to quickly switch the language of their typed text with a single button press. This is particularly useful when users accidentally type in the wrong language due to an incorrect keyboard layout selection. For example, if a user types "יקךךם" because the Hebrew layout was active while they intended to type "hello" in English, pressing the Instant Language Flip button will convert the text to "hello": the same keystrokes, re-interpreted under the intended layout. This feature enhances user productivity by eliminating the need to retype text and ensures seamless communication across different languages.
### Scenario when this would be used?
User types "יקךךם" with the Hebrew layout active, intending to type "hello" in English. Pressing the Instant Language Flip button converts it to "hello".
User types "בשא" with the Hebrew layout active, intending to type "cat" in English. Pressing the Instant Language Flip button converts it to "cat".
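The core of the flip is a per-character mapping between the two layouts. A minimal sketch, using a small illustrative subset of the standard Hebrew (SI-1452) key map; a real implementation would cover every key and map in both directions:

```python
# Partial Hebrew -> QWERTY key map for the standard SI-1452 layout
# (illustrative subset only).
HEBREW_TO_QWERTY = {
    "י": "h", "ק": "e", "ך": "l", "ם": "o",
    "ב": "c", "ש": "a", "א": "t",
}

def flip(text: str) -> str:
    """Re-interpret each typed key under the other layout."""
    return "".join(HEBREW_TO_QWERTY.get(ch, ch) for ch in text)

print(flip("יקךךם"))  # the keys for "hello" typed on the Hebrew layout
```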
### Supporting information
Every bi-lingual has done this thousands of times, and I've been approached as a Microsoft employee (@Yomanor) - "Why hasn't Microsoft fixed this by now?" | Needs-Triage | low | Minor |
2,588,289,790 | opencv | Deconvolution layer is broken in dnn | ### System Information
Platform: Any
Reference: https://github.com/opencv/opencv/pull/26056/
### Detailed description
The tests for the layer are disabled. Default implementation leads to sigsegv.
### Steps to reproduce
-
### Issue submission checklist
- [X] I report the issue, it's not a question
- [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution
- [X] I updated to the latest OpenCV version and the issue is still there
- [X] There is reproducer code and related data files (videos, images, onnx, etc) | bug,category: dnn | low | Critical |
2,588,296,231 | go | proposal: cmd/go: compile-time instrumentation and `-toolexec` | ### Proposal Details
Following up on https://github.com/golang/go/issues/41145#issuecomment-2408949985, this issue aims to explain what we're doing at https://github.com/DataDog/orchestrion, why we're doing it this way, the challenges we face, and some of the toolchain evolution directions we'd like to explore.
## Context
Users are embracing observability more and more, as the growing success of OpenTelemetry demonstrates; but onboarding Go applications to observability requires manually instrumenting the whole codebase, making it a tedious/expensive endeavor especially when onboarding existing large applications.
In Go, everything is explicit. If some behavior happens, there is code describing it. No hidden magic, no hidden cost. This is a great property to have, but it means instrumentation can be distracting: it may take up a lot of space in the codebase (often multiple lines to start a span with several tags, then to finish the span, possibly capturing error information and occasionally additional tags), which can obscure the business logic. It also makes it easy for a developer to forget to add instrumentation, resulting in observability blind spots (or, in the case of security instrumentation, in leaving vulnerable code open to exploitation).
Customers want to be able to onboard observability with as little friction as possible, and many prefer observability to stay out of sight when building application logic. The interest in approaches such as eBPF is a testament to the desire for "no-code-change" observability; but these approaches are inherently limited to certain platforms, incur a performance toll that is not acceptable in certain situations, and are limited (for good reasons!) in capabilities.
In addition to this, many enterprises haven't fully embraced the DevOps (and even fewer the DevSecOps) paradigm, and have different organizations take charge of developing, operating, and securing software products. There is often tension between the goals/KPIs of these different organizations (schematizing a little bit):
- Developers are measured on how fast they deliver business logic
- Operators are measured on how efficient and reliable the products operate
- Security teams are measured on how secure the application is
In effect this means developers don't want to be encumbered by observability (or security) if they can avoid it, while operators and security teams want to meet their goals without getting in the way of developers. This gives an advantage to instrumentation techniques that do not require code modification, as they minimize the impact on developer productivity while maximizing coverage.
Other use-cases for compile-time source code re-writing have recently gotten some visibility:
- https://github.com/pijng/prep (compile-time evaluated expression)
- https://y0yang.com/posts/golang-opentelemetry-auto-instrumentation (OTel focused, but otherwise very similar to Orchestrion)
## Orchestrion
We have built [orchestrion ](https://github.com/DataDog/orchestrion) as a tool targeted to Ops and SecOps personas, but that can also fit within a Dev/DevOps/DevSecOps workflow. It uses `-toolexec` to intercept all invocations to `go tool compile`, re-writing all the `.go` files to add instrumentation everywhere possible.
Orchestrion can in some ways be seen as a compile-time-woven Aspect-oriented Programming (AoP) framework; except it is currently fairly specialized to our instrumentation needs.
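The interception mechanism can be sketched as a pass-through `-toolexec` shim; the shim can be any executable (invoked as `go build -toolexec=/path/to/shim.py ./...`), so a Python script works for illustration. This minimal version only logs every tool invocation and runs it unchanged; Orchestrion's real shim additionally rewrites the inputs and arguments:

```python
#!/usr/bin/env python3
"""Minimal pass-through -toolexec shim: log each tool invocation, run it as-is."""
import subprocess
import sys

def run_tool(argv: list[str]) -> int:
    # argv[0] is the full path to the tool (compile, link, asm, ...),
    # argv[1:] are the arguments the go command would pass to it.
    tool, args = argv[0], argv[1:]
    print(f"toolexec: {tool} {' '.join(args)}", file=sys.stderr)
    return subprocess.call([tool, *args])

if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(run_tool(sys.argv[1:]))
```

Because the shim also receives the `-V=full` probe invocations, it is the natural place to append a cache-invalidation fragment (see the `GOCACHE` discussion below).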
### Challenges we had to address
#### Adding dependency edges
One of the consequences of re-writing source code to inject instrumentation calls is that we often need to introduce new dependencies to the package being built. A package that is built against `github.com/gorilla/mux`, once instrumented, gains a new dependency edge to `gopkg.in/DataDog/dd-trace-go/contrib/gorilla/mux` to support the instrumentation. In most cases these new dependency edges take the form of additional `import` clauses, which is ideal as it allows the compiler to type-check all parts of the re-written file. To support the compiler, Orchestrion modifies the `importcfg` file to register mappings between import paths and the corresponding _export file_, as resolved by `go list` (via `packages.Load`).
In some situations, Orchestrion needs to use `//go:linkname` (in the "handshake") form in order to avoid creating circular import dependencies. In those cases, the produced archive has an implied dependency on some other package; which must be made available at link-time. We embed a special text file within the produced archive to track those "link-time" dependencies, so that the requirement is persisted in `GOCACHE`. We then intercept invocations of `go tool link` and update the `importcfg.link` file, adding all link-time dependencies to it (again, resolved by `go list` via `packages.Load`).
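An `importcfg` file maps import paths to export files one entry per line (`packagefile <import-path>=<file>`). A sketch of the edit described above — the import path and file paths in the test are hypothetical:

```python
def add_packagefile(importcfg_text: str, import_path: str, export_file: str) -> str:
    """Register an import-path -> export-file mapping in an importcfg, skipping
    paths that are already mapped (the go command may have emitted them)."""
    prefix = f"packagefile {import_path}="
    if any(line.startswith(prefix) for line in importcfg_text.splitlines()):
        return importcfg_text
    if importcfg_text and not importcfg_text.endswith("\n"):
        importcfg_text += "\n"
    return importcfg_text + f"{prefix}{export_file}\n"
```

The same transformation applies to `importcfg.link` at link time, with the link-time dependencies recovered from the text file embedded in each archive.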
#### Resolving additional dependencies
In order to _correctly_ resolve the new dependency edges, we have to use `packages.Load` (`go list`), and forward the appropriate `BuildFlags`, such as `-cover`, `-covermode` and `-coverpkg`. Today, the toolchain does not provide any visibility on the full build arguments, so Orchestrion has to crawl the process tree in search of a `go build` (or `go test`, `go run`, etc...) invocation, gather its argument list, and do its best to parse it, so it can forward the right values to child compilations if needed.
This poses challenges because `go`'s standard flags are fairly permissive in terms of style and syntax (e.g., it transparently allows adding an extra `-` ahead of a flag, so the `-flag` and `--flag` styles can be used interchangeably), and there is no way to know for sure which flags take a value and which are simple boolean toggles.
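The dash-style half of the problem at least is mechanical. A sketch of the normalization a crawler has to apply before matching flags (the bare `--` terminator must be left alone):

```python
def normalize_dashes(args: list[str]) -> list[str]:
    """Collapse '--flag' to '-flag', mirroring Go's flag package, which treats
    the two spellings identically; leave '--' and non-flag arguments as-is."""
    return [a[1:] if a.startswith("--") and a != "--" else a for a in args]
```

Knowing which of the normalized flags then consume a value still requires out-of-band knowledge of the go command's flag set.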
We also need to re-implement some default argument behavior; specifically there is an implied `-coverpkg` argument value if one is not provided explicitly, that we need to make explicit when triggering child builds, as failure to do so typically leads to a fingerprint mismatch at link time.
In addition to this, our need to resolve additional packages during the build means we are going to trigger more compilation processes than desired (by the user); because we have no way to co-operate with the toolchain to honor the `-p n` flag provided by the user.
Since multiple packages may introduce the same synthetic dependency, we often end up needing to resolve the same package multiple times in parallel, which is a waste of resources. To address this, we had to implement a job server, which is a process that exists for the duration of the build, and which is responsible for resolving those dependencies, ensuring a given dependency is resolved exactly once. That process can also check that we are not introducing cyclic dependencies; and "cleanly" aborts the build if it detects a cycle, instead of endlessly recursing over itself.
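The resolve-exactly-once guarantee the job server provides boils down to per-key single-flight deduplication. A thread-based Python sketch of that core idea (error propagation and the cycle check are omitted):

```python
import threading

class OnceResolver:
    """Resolve each package exactly once across concurrent callers; later
    callers for the same package block until the first resolution finishes."""

    def __init__(self, resolve):
        self._resolve = resolve          # callable: package path -> result
        self._lock = threading.Lock()
        self._results: dict = {}
        self._events: dict = {}

    def get(self, pkg: str):
        with self._lock:
            if pkg in self._results:
                return self._results[pkg]
            if pkg in self._events:
                event, first = self._events[pkg], False
            else:
                event = self._events[pkg] = threading.Event()
                first = True
        if first:
            self._results[pkg] = self._resolve(pkg)  # runs exactly once per pkg
            event.set()
            return self._results[pkg]
        event.wait()
        return self._results[pkg]
```

In Orchestrion this lives in a separate job-server process so the guarantee holds across the many independent `-toolexec` child processes, not just threads.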
#### Parsing tool arguments
Orchestrion also needs to parse arguments passed to `compile` and `link` commands, but there is no programmatic way to get these in a structured way. We have to resort to parsing the help output of these commands to infer what arguments have value (or don't) and hope for the best.
In the case of `link` there isn't much we care about except for accessing the `-importcfg` flag. For `compile`, we actually need to establish the correct list of positional arguments (they'll be the `.go` files) without risking a false positive (we had a bug where we simply took all arguments that ended in `.go` but ran into a fun issue with the `github.com/nats-io/nats.go` package).
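A sketch of the safer classification: scan past the flags first, then take the positional arguments, instead of matching on the `.go` suffix anywhere. `VALUE_FLAGS` is a hypothetical, partial list of compile flags that take a separate value; the real set has to be inferred from the tool's help output, as described above:

```python
# Hypothetical, partial list of `go tool compile` flags taking a separate value.
VALUE_FLAGS = {"-o", "-p", "-importcfg", "-trimpath", "-lang"}

def positional_go_files(args: list[str]) -> list[str]:
    """The .go source files are compile's positional arguments, which start at
    the first argument that is not a flag (or a flag's value). Taking every
    argument that ends in '.go' misfires on e.g. '-p github.com/nats-io/nats.go'."""
    i = 0
    while i < len(args):
        if not args[i].startswith("-"):
            break
        i += 2 if args[i] in VALUE_FLAGS else 1  # '-flag=value' forms advance by 1
    return [a for a in args[i:] if a.endswith(".go")]
```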
#### Interactions with `GOCACHE`
Orchestrion tries to co-operate with the `go` toolchain as naturally as possible. One aspect of this is integrating with `GOCACHE` to allow our users to enjoy the performance benefits of incremental builds.
In order to do this, Orchestrion needs to influence the build ID of objects that are built with instrumentation. The only "hook point" available is intercepting the `-V=full` invocation of all tools, and modifying the string it returns to include an invalidation fragment.
Today, we intercept both the `compile -V=full` and `link -V=full` invocations, and append to it a hash that summarizes:
- The version of `orchestrion` being used, as changing it may change how aspects are applied, which may produce different instrumentation
- The complete list of aspects in use, as changing these will produce different instrumentation
- The complete list of packages aspects might inject, together with their dependencies (another use of `packages.Load`), as a lot of these are typically not accounted for in the "natural" build ID of any object
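A sketch of the invalidation fragment: hash everything that can change the instrumented output into a short, deterministic string appended to the tool's `-V=full` line (the `+orchestrion:` spelling below is illustrative, not the actual format):

```python
import hashlib

def tool_id_suffix(version: str, aspects: list[str], injected_pkgs: list[str]) -> str:
    """Deterministic fragment summarizing everything that can change the
    instrumented output; any change here changes the build ID, so GOCACHE
    keeps instrumented and plain artifacts distinct."""
    h = hashlib.sha256()
    for part in (version, *sorted(aspects), *sorted(injected_pkgs)):
        h.update(part.encode())
        h.update(b"\x00")  # separator so ("ab", "c") != ("a", "bc")
    return h.hexdigest()[:16]
```

The `compile -V=full` output then becomes, e.g., `compile version go1.23.2 ... +orchestrion:<fragment>`.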
The drawback of being able to do this only at the "complete build level" is that it'll result in excessively frequent cache invalidation... Of course, changing `orchestrion` itself should be treated equivalent to changing compilers/toolchain, and result in complete invalidation... But the aspects & injected dependencies will only affect packages where they are effectively used, which isn't _all_ of them.
For example, we don't do anything in the majority of the standard library (some aspects today target `os` and `runtime`), so the "regular" export files would be fine to use for all the rest.
#### Line information preserving
We want the instrumented code to retain the original code's line information, so that stack traces visible by the user still match the code they wrote. To do this, we sprinkle `//line` directives around every "synthetic" code.
The `go/ast` package does not provide much in the way of a facility to manipulate comments or directives, and we use `github.com/dave/dst` to simplify editing commentary within the source files. This however comes at a significant cost (as it performs several other costly operations we don't necessarily need done).
The injected code can also easily interfere with the debugging experience (via `dlv`), as the "real" source file including that instrumented code is often no longer present on disk when the program is being run... This becomes more problematic in cases when the original source file is not in "canonical" go format, as the line numbering we infer from the original file may be skewed (because `github.com/dave/dst` _always_ produces canonical go format output) -- a "source map" (as is done in Node.js, for example) would probably be somewhat easier to confidently manage.
## Toolchain (`-toolexec`) evolution directions
Some of these may be moonshots, or things that don't fit well with the go design principles; but I'm including them anyway as they might hint at problems in search of a solution, and maybe someone comes up with a more acceptable solution:
- Dedicated support for source-rewriting tools
- Also, a simplified onboarding experience (pluggable transformers automatically discovered from `go.mod` dependencies?)
- Ability to influence the build id on a per-package basis
- Ability to add new edges to the build graph
- Improved `go/ast` API, in particular w/r/t comment alterations
- Improved source mapping (that'd make it easier to "stack" transforms, e.g. `cover` + instrumentation)
- Allow `-toolexec` to (try to) respect the `-p n` flag of the overarching build
- Share overarching build flags with `-toolexec`, or a way for a `-toolexec` tool to request extra build artifacts from the overarching build (see also, adding new edges to the build graph)
## Other related issues
- https://github.com/golang/go/issues/41145
- https://github.com/golang/go/issues/27628
- https://github.com/golang/go/issues/29430
- https://github.com/golang/go/issues/35204 | Proposal,GoCommand | low | Critical |
2,588,305,929 | yt-dlp | Support for starwars.com | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
Worldwide
### Example URLs
- Single video: https://www.starwars.com/video/trailer-the-bad-batch
### Provide a description that is worded well enough to be understood
In March 2024 I was able to download any video from https://www.starwars.com/video/ with yt-dlp even though it is not on the supported sites list. Recently I found out it no longer works; I get an error:
ERROR: [Disney] Unable to download webpage: HTTPSConnectionPool(host='www.starwars.com', port=443): Read timed out. (read timeout=20.0) (caused by TransportError("HTTPSConnectionPool(host='www.starwars.com', port=443): Read timed out. (read timeout=20.0)"))
I am still able to download videos from Disney, however (https://ondisneyplus.disney.com).
As far as I can see, starwars.com does in fact use Disney servers.
Here's a link to a video that I can download with yt-dlp from Disney (second from last on https://ondisneyplus.disney.com/show/star-wars-the-bad-batch ):
https://video.disney.com/watch/star-wars-the-bad-batch-official-trailer-disney-5bec30e18976cebd6b4f2a72
The same video is on starwars.com, but I get the above mentioned error when I try to download it (second from last when you click on "The Bad Batch" on https://www.starwars.com/video ):
https://www.starwars.com/video/trailer-the-bad-batch
Videos can be successfully downloaded from starwars.com using the Firefox extension Video DownloadHelper, but I prefer and need to use yt-dlp.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp.exe https://www.starwars.com/video/trailer-the-bad-batch -vU
[debug] Command-line config: ['https://www.starwars.com/video/trailer-the-bad-batch', '-vU']
[debug] Encodings: locale cp1251, fs utf-8, pref cp1251, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.10.07 from yt-dlp/yt-dlp [1a176d874] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 6.1-full_build-www.gyan.dev (setts), ffprobe 6.1-full_build-www.gyan.dev, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1838 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.10.07 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.10.07 from yt-dlp/yt-dlp)
[Disney] Extracting URL: https://www.starwars.com/video/trailer-the-bad-batch
[Disney] trailer-the-bad-batch: Downloading webpage
ERROR: [Disney] Unable to download webpage: HTTPSConnectionPool(host='www.starwars.com', port=443): Read timed out. (read timeout=20.0) (caused by TransportError("HTTPSConnectionPool(host='www.starwars.com', port=443): Read timed out. (read timeout=20.0)"))
File "yt_dlp\extractor\common.py", line 741, in extract
File "yt_dlp\extractor\disney.py", line 79, in _real_extract
File "yt_dlp\extractor\common.py", line 1200, in _download_webpage
File "yt_dlp\extractor\common.py", line 1151, in download_content
File "yt_dlp\extractor\common.py", line 961, in _download_webpage_handle
File "yt_dlp\extractor\common.py", line 910, in _request_webpage
File "urllib3\connectionpool.py", line 536, in _make_request
File "urllib3\connection.py", line 507, in getresponse
File "http\client.py", line 1344, in getresponse
File "http\client.py", line 307, in begin
File "http\client.py", line 268, in _read_status
File "socket.py", line 669, in readinto
File "ssl.py", line 1241, in recv_into
File "ssl.py", line 1099, in read
socket.timeout: The read operation timed out
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "requests\adapters.py", line 667, in send
File "urllib3\connectionpool.py", line 843, in urlopen
File "urllib3\util\retry.py", line 449, in increment
File "urllib3\util\util.py", line 39, in reraise
File "urllib3\connectionpool.py", line 789, in urlopen
File "urllib3\connectionpool.py", line 538, in _make_request
File "urllib3\connectionpool.py", line 369, in _raise_timeout
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='www.starwars.com', port=443): Read timed out. (read timeout=20.0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "yt_dlp\networking\_requests.py", line 328, in _send
File "requests\sessions.py", line 589, in request
File "requests\sessions.py", line 703, in send
File "requests\adapters.py", line 713, in send
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='www.starwars.com', port=443): Read timed out. (read timeout=20.0)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "yt_dlp\extractor\common.py", line 897, in _request_webpage
File "yt_dlp\YoutubeDL.py", line 4172, in urlopen
File "yt_dlp\networking\common.py", line 117, in send
File "yt_dlp\networking\_helper.py", line 208, in wrapper
File "yt_dlp\networking\common.py", line 340, in send
File "yt_dlp\networking\_requests.py", line 352, in _send
yt_dlp.networking.exceptions.TransportError: HTTPSConnectionPool(host='www.starwars.com', port=443): Read timed out. (read timeout=20.0)
```
| site-bug | low | Critical |
2,588,320,179 | vscode | Bug on Keyboard shortcut `multiDiffEditor.goToFile` |
Type: <b>Bug</b>
When using this shortcut while I am focused on a file in the Multi Diff Editor, there is an error:
"Cannot read properties of undefined (reading 'toString')"
VS Code version: Code 1.94.2 (384ff7382de624fb94dbaf6da11977bba1ecd427, 2024-10-09T16:08:44.566Z)
OS version: Windows_NT x64 10.0.19045
Modes:
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|Intel(R) Xeon(R) Platinum 8358P CPU @ 2.60GHz (4 x 2594)|
|GPU Status|2d_canvas: unavailable_software<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: disabled_software<br>multiple_raster_threads: enabled_on<br>opengl: disabled_off<br>rasterization: disabled_software<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: disabled_software<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: unavailable_software<br>webgl2: unavailable_software<br>webgpu: unavailable_software<br>webnn: disabled_off|
|Load (avg)|undefined|
|Memory (System)|16.00GB (6.09GB free)|
|Process Argv|D:\\shalev.ku\\repos\\electro-kubi-multi-root.code-workspace --crash-reporter-id 26ee39f3-acb7-4c45-bb2f-cf690c87d8d9|
|Screen Reader|no|
|VM|25%|
</details><details><summary>Extensions (4)</summary>
Extension|Author (truncated)|Version
---|---|---
prettier-vscode|esb|11.0.0
mssql|ms-|1.24.0
powershell|ms-|2024.2.2
sqltools|mtx|0.28.3
</details><details>
<summary>A/B Experiments</summary>
```
vsliv368:30146709
vspor879:30202332
vspor708:30202333
vspor363:30204092
vscod805cf:30301675
binariesv615:30325510
vsaa593:30376534
py29gd2263:31024239
c4g48928:30535728
azure-dev_surveyone:30548225
962ge761:30959799
pythongtdpath:30769146
pythonnoceb:30805159
asynctok:30898717
pythonmypyd1:30879173
h48ei257:31000450
pythontbext0:30879054
accentitlementsc:30995553
cppperfnew:31000557
dsvsc020:30976470
pythonait:31006305
dsvsc021:30996838
724cj586:31013169
a69g1124:31058053
dvdeprecation:31068756
dwnewjupyter:31046869
2f103344:31071589
impr_priority:31102340
nativerepl2:31139839
refactort:31108082
pythonrstrctxt:31112756
wkspc-onlycs-t:31132770
wkspc-ranged-t:31151552
cf971741:31144450
defaultse:31146405
iacca2:31156134
notype1:31157159
5fd0e150:31155592
```
</details>
<!-- generated by issue reporter --> | multi-diff-editor | low | Critical |
2,588,350,858 | react | [Compiler Bug]: Does not correctly identify components returned from factory | ### What kind of issue is this?
- [X] React Compiler core (the JS output is incorrect, or your app works incorrectly after optimization)
- [ ] babel-plugin-react-compiler (build issue installing or using the Babel plugin)
- [X] eslint-plugin-react-compiler (build issue installing or using the eslint plugin)
- [ ] react-compiler-healthcheck (build issue installing or using the healthcheck script)
### Link to repro
https://playground.react.dev/#N4Igzg9grgTgxgUxALhASwLYAcIwC4AEAVAQIZgEBKCpchAZjBBgQOQw12sDcAOgHYCEADxz4CAEwT1SUADYMo-Omgj8CAWQCeAQSxYAFAEoCwAQQIc8sdQB4JaAG4A+ABII5ciAQDquORK2APQOLnz8AL4CAvRKKmoEAGIQEMam5gRwamCEANr8pBgIADQEYAh4AHKFCAC6BAC8VJx4AHRQ5QDKeKR4CAasrEbhFlY2BLYAFgCMbh5epgVFEcEzzgJRgvyxyniq6nAcvQjJqSZm6pYV4zvx6qdpFxYWWfw5BPk1peVVNfVN1FobQ6CG6xwGQxGzzGMDsa3cnm8wCWCBWQTWGU2mxAESAA
Even without state, the output is questionable: https://playground.react.dev/#N4Igzg9grgTgxgUxALhASwLYAcIwC4AEAVAQIZgEBKCpchAZjBBgQOQw12sDcAOgHYCEADxz4CAEwT1SUADYMo-Omgj8CAWQCeAQSxYAFAEoCwAQQIc8sdQB4JaAG4A+ABII5ciAQDquORK2APQOLnz8AL4CAvRKKmoEAGIQEMam5gRwamCEANr8pBgIADQEYAh4AHKFCAC6BAC8VJx4AHRQ5QDKeKR4CAasrEbhFlY2BLYAFgCMbh5epgVFEcEzzgJRgvyxyniq6nAcvQjJqSZm6pYV4zvx6qdpFxYWQUGZ2XlLJWUV1UX1TWotDaHQQ3WOAyGI2eYxgdjW7k8EFKAHd-IEgmsMptNiAIkA
### Repro steps
Simply create a function that returns a React function component. This is completely valid code.
### How often does this bug happen?
Every time
### What version of React are you using?
19 | Type: Bug,Status: Unconfirmed,Component: Optimizing Compiler | low | Critical |
2,588,370,216 | godot | Unable to add exported TileMapLayer property to scene node | ### Tested versions
- Reproducable v4.3.stable.mono.official [77dcf97d8]
### System information
Both Mac and PC
### Issue description
Attempting to drag-and-drop a TileMapLayer onto a scene node's exported property won't take.
If I add it to the .tscn file directly, it's removed on build.
(I'm new to godot so might not be describing this properly)
### Steps to reproduce
I'm following a tutorial on udemy. But here's a screenshot

### Minimal reproduction project (MRP)
[puzzle_course.zip](https://github.com/user-attachments/files/17376788/puzzle_course.zip)
| bug,topic:editor,needs testing,topic:dotnet | low | Minor |
2,588,373,022 | transformers | Add support for GOT-OCR2.0 | ### Model description
As an OCR-2.0 model, GOT can handle all artificial optical signals (e.g., plain texts, math/molecular formulas, tables, charts, sheet music, and even geometric shapes) under various OCR tasks. On the input side, the model supports commonly used scene- and document-style images in slice and whole-page styles. On the output side, GOT can generate plain or formatted results (markdown/tikz/smiles/kern) via an easy prompt. Besides, the model enjoys interactive OCR features, i.e., region-level recognition guided by coordinates or colors.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Implementation: https://github.com/Ucas-HaoranWei/GOT-OCR2.0/
Paper: https://arxiv.org/abs/2409.01704 | New model | low | Major |
2,588,399,609 | godot | Setting scale of a control node inside of a container doesn't work if set immediately | ### Tested versions
- Reproducible in v4.4.dev3.mono.official
### System information
v4.4.dev3.mono.official [f4af8201b]
### Issue description
When a control node is contained by a container (e.g. `VBoxContainer`) and its scale is set immediately after the node's creation, the scale value doesn't have any effect. If the scale is set afterwards, or it is set using a tween with a 0-second duration, the scale works as expected.
### Steps to reproduce
Scene:

Code:
```gdscript
extends Node2D
@onready var label: Label = $VBoxContainer/Label
@onready var v_box_container: VBoxContainer = $VBoxContainer
var label_2
func _ready() -> void:
# this doesn't work:
label.scale = Vector2(0.4, 1.0)
label_2 = Label.new()
v_box_container.add_child(label_2)
label_2.text = "Another label with some text"
# this doesn't work:
label_2.scale = Vector2(1.0, 2.0)
# setting scale using a tween with zero duration works:
#create_tween().tween_property(label, "scale", Vector2(0.4, 1.0), 0.0)
#create_tween().tween_property(label_2, "scale", Vector2(1.0, 2.0), 0.0)
func _on_button_pressed() -> void:
# this works:
label.scale = Vector2(0.4, 1.0)
label_2.scale = Vector2(1.0, 2.0)
```
If the button is clicked or if the two `create_tween()` lines are uncommented, scales are working as expected.
### Minimal reproduction project (MRP)
[scale_test.zip](https://github.com/user-attachments/files/17376934/scale_test.zip)
| discussion,documentation,topic:gui | low | Minor |