Dataset schema:
id: int64 (values approx. 393k – 2.82B)
repo: string (68 classes)
title: string (length 1 – 936)
body: string (length 0 – 256k)
labels: string (length 2 – 508)
priority: string (3 classes)
severity: string (3 classes)
2,671,802,648
pytorch
ROCm: libamdhip64.so is linked twice when building a torch C++ program
### 🐛 Describe the bug Hi, I am giving a try to PyTorch C++ API, and for some reason `hipcc` appears to link twice some shared objects, once from `lib/python3.10/site-packages/torch/lib` and once from `/opt/rocm/lib`. This appears to bring some conflicts, which results in segfaults at the end of libtorch execution (similar to https://discuss.pytorch.org/t/segmentation-fault-at-end-of-execution-libtorch/211881). I have this file structure: ``` CMakeLists.txt gemv.h gemv.hip test_torch.cpp ``` Where `test_torch.cpp` is some frontend main function that does some stuff. CMakeLists.txt is the following: ```cmake cmake_minimum_required(VERSION 3.10) # Set project name. set(PROJECT_NAME test_app) project(${PROJECT_NAME}) # Set the C++ standard to C++17 set(CMAKE_CXX_STANDARD 17) set(CMAKE_CXX_STANDARD_REQUIRED YES) # Set amd's Clang++ as the compiler set(CMAKE_CXX_COMPILER /opt/rocm/bin/amdclang++) # Specify the path to LibTorch (replace with your path) set(Torch_DIR "/scratch/felmarty/miniconda3/envs/py310/lib/python3.10/site-packages/torch/share/cmake/Torch") # Supported AMD GPU architectures. set(HIP_SUPPORTED_ARCHS "gfx90a") find_package(Torch REQUIRED) # Importing torch recognizes and sets up some HIP/ROCm configuration but does # not let cmake recognize .hip files. In order to get cmake to understand the # .hip extension automatically, HIP must be enabled explicitly. enable_language(HIP) # Add kernels? 
file(GLOB HIP_FILES *.hip) # add_library(${PROJECT_NAME}_lib SHARED ${hip_files}) # Add the executable you want to compile add_executable(${PROJECT_NAME} test_torch.cpp ${HIP_FILES}) # Link the executable with LibTorch target_link_libraries(${PROJECT_NAME} "${TORCH_LIBRARIES}") # Add additional compilation flags if needed set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}") ``` At the end of `test_app` execution: ``` 1,8 res: 1 here <- libtorch stuff ends here Segmentation fault (core dumped) ``` I tried to use as well: ```cmake add_library(${PROJECT_NAME}_lib SHARED ${HIP_FILES}) # Add the executable you want to compile add_executable(${PROJECT_NAME} test_torch.cpp) # Link the executable with LibTorch target_link_libraries(${PROJECT_NAME} "${TORCH_LIBRARIES}") target_link_libraries(${PROJECT_NAME} ${PROJECT_NAME}_lib) ``` and I have the same segfault. Looking at the linker, using the above full CMakeLists.txt, I get: ``` $ ldd test_app | grep libamdhip64 libamdhip64.so.6 => /opt/rocm-6.2.4/lib/llvm/bin/../../../lib/libamdhip64.so.6 (0x00007a97ada00000) libamdhip64.so => /scratch/felmarty/miniconda3/envs/py310/lib/python3.10/site-packages/torch/lib/libamdhip64.so (0x00007a97aac00000) ``` so libamdhip64.so (and a few others) is linked twice. Using a `Makefile` build system instead of cmake, I had the same issue with the linking ```bash # compile test_torch.ccp to test_torch & gemv.hip to gemv before hipcc -D_GLIBCXX_USE_CXX11_ABI=0 test_torch gemv -L${TORCH_PATH}/lib \ -Wl,-rpath,${TORCH_PATH}/lib \ -ldl -lc10 -ltorch_cpu -ltorch_hip -Wl,--as-needed ``` So far when only compiling `test_torch.cpp` and not the kernel, `-Wl,--as-needed` was enough to prevent `hipcc` to link from `/opt/rocm/lib` and only link from `lib/python3.10/site-packages/torch/lib`. But adding the actual `.hip` files add links to `/opt/rocm/lib` shared objects anyway. Is this double linking expected? Do you think the segfault is related to this double linking? 
Should amd-related shared objects be linked from `/opt/rocm/lib` or from `lib/python3.10/site-packages/torch/lib`? Edit: I looked at `nm -D /opt/rocm-6.2.4/lib/libamdhip64.so.6` & `nm -D /path/to/py310/lib/python3.10/site-packages/torch/lib/libamdhip64.so` and the symbols defined are the same, so I am not sure what is happening here. There were 5-6 other duplicate links (other than `libamdhip64`) like this, so something might be different in another one. Thank you! ### Versions ``` PyTorch version: 2.5.1+rocm6.2 Is debug build: False CUDA used to build PyTorch: N/A ROCM used to build PyTorch: 6.2.41133-dd7f95766 OS: Ubuntu 24.04 LTS (x86_64) GCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0 Clang version: Could not collect CMake version: version 3.28.3 Libc version: glibc-2.39 Python version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-6.8.0-48-generic-x86_64-with-glibc2.39 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: AMD Instinct MI250X/MI250 (gfx90a:sramecc+:xnack-) Nvidia driver version: Could not collect cuDNN version: Could not collect HIP runtime version: 6.2.41133 MIOpen runtime version: 3.2.0 Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 48 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 32 On-line CPU(s) list: 0-31 Vendor ID: AuthenticAMD Model name: AMD EPYC 73F3 16-Core Processor CPU family: 25 Model: 1 Thread(s) per core: 1 Core(s) per socket: 16 Socket(s): 2 Stepping: 1 Frequency boost: enabled CPU(s) scaling MHz: 65% CPU max MHz: 4036.6211 CPU min MHz: 1500.0000 BogoMIPS: 6986.55 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2
movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap Virtualization: AMD-V L1d cache: 1 MiB (32 instances) L1i cache: 1 MiB (32 instances) L2 cache: 16 MiB (32 instances) L3 cache: 512 MiB (16 instances) NUMA node(s): 2 NUMA node0 CPU(s): 0-15 NUMA node1 CPU(s): 16-31 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Mitigation; Safe RET Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==1.26.4 [pip3] onnx==1.16.2 [pip3] onnxoptimizer==0.3.13 [pip3] onnxruntime==1.18.1 [pip3] onnxruntime_extensions==0.12.0 [pip3] onnxsim==0.4.36 [pip3] pytorch-triton-rocm==3.1.0 [pip3] torch==2.5.1+rocm6.2 [pip3] 
torchaudio==2.5.1+rocm6.2 [pip3] torchvision==0.20.1+rocm6.2 [conda] numpy 1.26.4 pypi_0 pypi [conda] pytorch-triton-rocm 3.1.0 pypi_0 pypi [conda] torch 2.5.1+rocm6.2 pypi_0 pypi [conda] torchaudio 2.5.1+rocm6.2 pypi_0 pypi [conda] torchvision 0.20.1+rocm6.2 pypi_0 pypi ``` cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd
module: rocm,triaged
low
Critical
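For anyone triaging a report like the one above, a quick way to confirm which libraries resolve from two different trees is to group the `ldd` output by library base name. A minimal pure-Python sketch (the sample `ldd` lines are abbreviated from the report, and the grouping heuristic is my own assumption, not a ROCm tool):

```python
import re
from collections import defaultdict

def find_duplicate_libs(ldd_output: str) -> dict:
    """Group resolved shared-object paths by library base name, so the
    same soname resolved from two different directories shows up as a clash."""
    paths = defaultdict(set)
    for line in ldd_output.splitlines():
        m = re.match(r"\s*(\S+)\s+=>\s+(/\S+)", line)
        if m:
            soname, path = m.group(1), m.group(2)
            base = soname.split(".so")[0]   # libamdhip64.so.6 -> libamdhip64
            paths[base].add(path)
    return {lib: sorted(p) for lib, p in paths.items() if len(p) > 1}

# Abbreviated sample based on the ldd output in the report:
ldd_text = """\
libamdhip64.so.6 => /opt/rocm-6.2.4/lib/libamdhip64.so.6 (0x00007a97ada00000)
libamdhip64.so => /site-packages/torch/lib/libamdhip64.so (0x00007a97aac00000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007a97aa800000)
"""
print(find_duplicate_libs(ldd_text))  # flags libamdhip64 as doubly resolved
```

Running this over the full `ldd test_app` output would list all 5-6 doubly linked libraries the reporter mentions in one pass.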
2,671,811,782
pytorch
When loading a model trained with QAT `(qat.pth)` using `model.load_state_dict`, an error occurs: `RuntimeError: Error(s) in loading state_dict for GraphModule: Missing key(s), Unexpected key(s) in state_dict.`
### 🐛 Describe the bug - I'm reporting this issue due to errors related to capture_pre_autograd_graph and torch.compile in QAT. - Note: Apologies if there are any misunderstandings. - Based on the following tutorial, I implemented QAT on my custom model (let's call it `MMM`) and saved the trained model `qat.pth` using the `state_dict` format. Observing the training loss, I confirmed that the model is learning properly. - Tutorial: [Quantization Aware Training with PT2E](https://pytorch.org/tutorials/prototype/pt2e_quant_qat.html) ■Excerpt from train.py (・・・ indicates omissions) ... model = MMM().to(device) ... example_inputs = (torch.randn(・・・).to(device),) dynamic_shapes = tuple( {0: torch.export.Dim("dim", min=1, max=128)} if i == 0 else None for i in range(len(example_inputs)) ) exported_model = torch.export.export_for_training( model, example_inputs, dynamic_shapes=dynamic_shapes ).module() quantizer = XNNPACKQuantizer() quantizer.set_global(get_symmetric_quantization_config(is_qat=True)) print("Model has been annotated for quantization.") model = prepare_qat_pt2e(exported_model, quantizer) model.to(device) print("Model is prepared for QAT.") ... torch.ao.quantization.move_exported_model_to_train(model) ... torch.save(model.state_dict(), os.path.join(results_dir, "qat.pth")) - Afterwards, when I tried to evaluate the performance using the trained model "qat.pth" by loading it with load_state_dict, I encountered an error stating that the keys do not match. Looking at the error, the key names have slightly changed (some . have become _, etc.), and the existence of parameters like activation_post_process (probably for PTQ) is causing an error where the model cannot be loaded. 
**- How can I resolve this?** ■Excerpt from valid.py … model = MMM().to(device) example_inputs = (torch.randn(・・・).to(device),) exported_model = capture_pre_autograd_graph(model, example_inputs) quantizer = XNNPACKQuantizer() quantizer.set_global(get_symmetric_quantization_config(is_qat=True)) model = prepare_qat_pt2e(exported_model, quantizer) model.load_state_dict(torch.load("pth_path")) model = convert_pt2e(model) torch.ao.quantization.move_exported_model_to_eval(model) ### Error logs File "・・・/valid.py", line 53, in main model.load_state_dict(torch.load("pth_path")) File "・・・/miniconda3/envs/pytorch2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2584, in load_state_dict raise RuntimeError( RuntimeError: Error(s) in loading state_dict for GraphModule: Missing key(s) in state_dict: "XXX_0_weight", "XXX_0_bias", ・・・, "YYY_0_weight", "YYY_0_bias", "XXX_1_num_batches_tracked", "XXX_1_running_mean", "XXX_1_running_var", ・・・, Unexpected key(s) in state_dict: "XXX.0.weight", "XXX.0.bias", ・・・, "YYY.0.weight", "YYY.0.bias", "XXX.1.num_batches_tracked", "XXX.1.running_mean", "XXX.1.running_var", ・・・, "XXX.3.weight", "XXX.3.bias", ・・・, "activation_post_process_Z.fake_quant_enabled", "activation_post_process_Z.observer_enabled", "activation_post_process_Z.scale", "activation_post_process_Z.zero_point", "activation_post_process_Z.activation_post_process.eps", "activation_post_process_Z.activation_post_process.min_val", "activation_post_process_Z.activation_post_process.max_val", ・・・ ### Versions Windows WSL Miniconda PyTorch 2.5.1 Python 3.10 cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
awaiting response (this tag is deprecated),needs reproduction,oncall: export
low
Critical
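The key differences in the error above (dots flattened to underscores, plus extra `activation_post_process_*` entries) are characteristic of the checkpoint and the freshly prepared model coming from different export paths: train.py uses `torch.export.export_for_training` while valid.py uses the older `capture_pre_autograd_graph`, and re-exporting the same way in both scripts is the robust fix. Purely as an illustration of the key-name mismatch, here is a hedged pure-Python sketch (`translate_qat_keys` is a hypothetical helper; note that dropping observer state discards learned quantization scales, so this is a diagnostic aid, not a recommended fix):

```python
def translate_qat_keys(state_dict):
    """Illustration only: map checkpoint keys like 'XXX.0.weight' to the
    flattened 'XXX_0_weight' names the re-exported GraphModule reports as
    missing, and drop observer state ('activation_post_process_*'), which a
    freshly prepared model recreates (losing its learned scales!)."""
    out = {}
    for key, value in state_dict.items():
        if key.startswith("activation_post_process"):
            continue  # observer/fake-quant buffers: no flattened counterpart
        out[key.replace(".", "_")] = value
    return out

print(translate_qat_keys({"XXX.0.weight": 0, "activation_post_process_0.scale": 1}))
# {'XXX_0_weight': 0}
```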
2,671,850,998
PowerToys
ALT+ESC remapping fails, disables function completely
### Microsoft PowerToys version v0.86.0 ### Installation method GitHub, PowerToys auto-update ### Running as admin None ### Area(s) with issue? Keyboard Manager ### Steps to reproduce Add a new keyboard shortcut remap for ALT+ESC (switch active window to next) Observe that the new mapping does not work (nothing happens). Observe also that the original shortcut does not work (nothing happens). I'm trying to remap it to ALT+` but other mappings don't work either (for example ALT+F1) ### ✔️ Expected Behavior The shortcut should be properly mapped to the new configuration ### ❌ Actual Behavior Nothing happens. Original shortcut also is disabled. ### Other Software _No response_
Issue-Bug,Needs-Triage
low
Minor
2,671,907,059
PowerToys
[FR]: Bring alt-tab selected window to the center of the main monitor, and back to where it came from once it's out of focus
### Description of the new feature / enhancement I'd like to propose a window management feature that brings an alt-tab selected window to the center of the main monitor and sends it to a different, pre-defined location once alt-tabbed out of focus. ### Scenario when this would be used? The user story here is that I have a bunch of windows across multiple monitors open and laid out in a particular way that I want to mostly be able to glance at and only periodically interact with. However, when I do need to interact with a particular window, I want to bring it to the center of the main monitor for the duration of the interaction and send it back to the pre-defined position once I'm done. ### Supporting information I think the conceit that can be used is pinning. Pin a window in a particular place, and then alt-tabbing it in and out of focus carries it between the center of the main monitor and its pinned location. That should probably be extended to being able to define the in-focus state (location & size of the window).
Needs-Triage
low
Minor
2,671,954,222
pytorch
multilabel_margin_loss gives incorrect results in torch.compile
### 🐛 Describe the bug Got different results when running compiled version of torch.nn.functional.multilabel_margin_loss. Reproducer: ``` import torch dtype = torch.float32 C = 6 N = 2 reduction = "none" #backend = "eager" # this works backend = "aot_eager" # this fails def func(x, y, reduction): result = torch.nn.functional.multilabel_margin_loss(x, y, reduction=reduction) return result input = torch.rand((N, C) if N is not None else C, dtype=dtype) target = torch.rand(input.shape) target = torch.multinomial(target, C, replacement=False) indexes = torch.randint(1, C, (N,) if N is not None else (1,)) indexes_mask = torch.nn.functional.one_hot(indexes, C).to(torch.bool) target = torch.where(indexes_mask, -1, target) target = target.reshape(input.shape) # Both functions give different results normal_output = func(input, target, reduction) compile_output = torch.compile(func, backend=backend)(input, target, reduction) print("Input:", input) print("Target:", target) print("Normal output:", normal_output) print("Compile output:", compile_output) torch.testing.assert_close(normal_output, compile_output) ``` Observe numerical inaccuracies also for other reduction modes. Additionally see that with `backend = "eager"` everything run as intended. ### Error logs ``` Input: tensor([[0.1798, 0.5515, 0.3197, 0.8951, 0.9017, 0.3625], [0.0736, 0.9311, 0.8952, 0.8994, 0.7971, 0.8615]]) Target: tensor([[ 4, 5, 0, 3, -1, 2], [ 5, -1, 1, 4, 0, 3]]) Normal output: tensor([1.1345, 0.7148]) Compile output: tensor([1.9717, 7.5716]) Traceback (most recent call last): File "/home/pswider/qnpu/multilabel_repro.py", line 34, in <module> torch.testing.assert_close(normal_output, compile_output) File "/home/pswider/venv-pure/lib/python3.10/site-packages/torch/testing/_comparison.py", line 1530, in assert_close raise error_metas[0].to_error(msg) AssertionError: Tensor-likes are not close! 
Mismatched elements: 2 / 2 (100.0%) Greatest absolute difference: 6.85684871673584 at index (1,) (up to 1e-05 allowed) Greatest relative difference: 0.9055964946746826 at index (1,) (up to 1.3e-06 allowed) ``` ### Versions PyTorch version: 2.5.1+cu124 Is debug build: False CUDA used to build PyTorch: 12.4 OS: Ubuntu 22.04.5 LTS (x86_64) cc @chauhang @penguinwu
triaged,oncall: pt2,module: pt2 accuracy
low
Critical
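For reference, the eager-mode numbers in the report can be sanity-checked against the documented per-sample definition of the loss. A minimal pure-Python sketch of that definition (an illustration of the documented semantics, not the `aot_eager` decomposition):

```python
def multilabel_margin_loss_ref(x, y):
    """Documented per-sample semantics of F.multilabel_margin_loss:
    y holds target class indices terminated by the first -1; every
    non-target logit is pushed a margin of 1 below every target logit,
    normalised by the number of classes C."""
    C = len(x)
    targets = []
    for t in y:                 # valid targets stop at the first -1
        if t == -1:
            break
        targets.append(t)
    loss = 0.0
    for j in targets:
        for i in range(C):
            if i not in targets:
                loss += max(0.0, 1.0 - (x[j] - x[i]))
    return loss / C

# Worked example from the PyTorch docs:
print(multilabel_margin_loss_ref([0.1, 0.2, 0.4, 0.8], [3, 0, -1, 1]))  # ≈ 0.85
```

Comparing this reference against both the eager and the compiled outputs makes it easy to tell which backend diverges from the specification.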
2,671,959,452
PowerToys
Make the Thinkpad trackpad middle button function as a double click.
### Description of the new feature / enhancement Press the middle button on Thinkpad trackpads, and get a double click. ### Scenario when this would be used? Every-day use, enhancing workflow a lot all the time. ### Supporting information _No response_
Needs-Triage
low
Minor
2,671,973,032
TypeScript
MSBuild integration with ASP.NET Core
The ASP.NET SDK has an asset pipeline that processes all the web content for the app to apply optimizations (compression/fingerprinting, etc.). There are a few things that make integration between the MSBuild SDK and the ASP.NET Core pipeline challenging, and while we've given people guidance, it would be great if we can enable this scenario to work without having to make such changes to their project. There are two main challenges that we face: * By default the TypeScript targets run too late in the pipeline for us to see the generated outputs. Our guidance suggests hooking up the relevant targets to the `PrepareForBuild` target as shown below: ```xml <PrepareForBuildDependsOn> CompileTypeScript; CompileTypeScriptWithTSConfig; GetTypeScriptOutputForPublishing;$(PrepareForBuildDependsOn) </PrepareForBuildDependsOn> ``` * I suspect the snippet above is not fully correct, as more targets are involved in the TypeScript setup in `CompileDependsOn`. * This makes the TS targets run early enough so that standard targets in the ASP.NET Core pipeline can detect and process them as expected. * The second challenge that we face is that is typical for people to dump their outputs into the wwwroot folder. When this happens, (after the change above) `GetTypeScriptOutputForPublishing` will add all the `GeneratedJavaScript` items to the `Content` item group unconditionally. * This results in the presence of duplicate `Content` items that interferes with the build. * We've given people the following target to remove the duplicates before the `GetTypeScriptOutputForPublishing` target adds them. ```xml <Target Name="RemoveDuplicateTypeScriptOutputs" BeforeTargets="GetTypeScriptOutputForPublishing"> <ItemGroup> <Content Remove="@(GeneratedJavaScript)" /> </ItemGroup> </Target> ``` This gets things into a working state, but it's obviously not trivial for customers to discover/setup in their app. 
Hopefully we can work together to make this scenario `just work` by implementing a few simple changes. * Have a target `ResolveTypeScriptOutputs` or similar that produces the compile outputs as items in the `Content` item group and that can run early enough in the pipeline (PrepareForBuild is a good candidate, or it can be configurable). * Avoid adding duplicate content items to the `Content` item group by using `Exclude="@(Content)"` to prevent duplicates. * Check for `UsingMicrosoftNETSdkStaticWebAssets` to wire up `ResolveTypeScriptOutputs` early enough in the pipeline so that the outputs can be detected, and disable other targets that are not needed (anything that deals with copying the outputs to the output/publish folder is already handled by the static web assets SDK).
Needs Investigation
low
Major
2,671,989,201
TypeScript
Tuple Spread Inconsistencies When Intersected
### 🔎 Search Terms Tuple, Intersection, Spread ### 🕗 Version & Regression Information - This is the behavior in every version I tried, and I reviewed the FAQ for entries about tuples and arrays ### ⏯ Playground Link https://www.typescriptlang.org/play/?#code/C4TwDgpgBAKgrmANtAvFA2gZ2AJwJYB2A5gDRQB0lBcAtgEYQ7oC6zA3AFCiSwLICSBYI0wQAxsDwB7AlDTwk0AGRQA3lDA4pYAFxRs+YlAC+nDl3DQFyAEoRsAQTFj7mOb0XoAjOw4B6PygggD0oanpGc24rPghBYRxRCWkCO0dnV3drOKERcUkZb18AoKhQg0IiKAAfMNoGHHMLHmyAZU0IAEMAE3d0SnJs4sDSsowK4jIB8IaWZijLDwFcxPyU9pwu3rR+ymz4vOSZYdLQgAoJqtqZxgBKFiA ### 💻 Code ```ts type Tuple = [string, ...number[]]; type TupleIntersection = Tuple & { prop: string }; type TupleRestAccess = Tuple[1]; // ^ number type TupleIntersectionRestAccess = TupleIntersection[1]; // ^ string | number type TupleSpread = [...Tuple]; // ^ [string, ...number[]] type TupleIntersectionSpread = [...TupleIntersection]; // ^ (string | number)[] ``` ### 🙁 Actual behavior `TupleIntersection[1]` and `[...TupleIntersection]` both seem to use an overly broad type, matching the `number` index signature instead of the more narrow spread signature. What I mean is that `TupleIntersection[1]` behaves just like `Tuple[number]` does. This leads me to believe that the `number` index signature is synthesized correctly but the logic for numeric literals in the range of the spread signature aren't handled. ### 🙂 Expected behavior I expected `Tuple`'s behaviour to match `TupleIntersection`. ### Additional information about the issue _No response_
Bug,Help Wanted
low
Minor
2,671,996,538
pytorch
The description of key_padding_mask in float mode is incorrect
### 📚 The doc issue The problematic [document](https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html#torch.nn.MultiheadAttention) is about the description of key_padding_mask. key_padding_mask: For a float mask, it will be directly added to the corresponding key value. Issue: I tested this float mask and the results show that it is not directly added to the key value, but it does change the attention weights in some form. The code and results are shown below. ``` import torch import torch.nn as nn import torch.nn.functional as F multiheadattention=nn.MultiheadAttention(embed_dim=2, num_heads=1,bias=False,batch_first=True) data=torch.tensor([[[1,1.],[2,0.]]]) multiheadattention.in_proj_weight.data=torch.tensor( [[1.,2],[0,1],[1,0],[1,1],[2,1],[2,0]]) multiheadattention.out_proj.weight.data=torch.tensor([[1,2], [3,4.]]) keypaddingmask=torch.tensor([[1,2.]]) out1=multiheadattention(data,data,data, need_weights=True,key_padding_mask=keypaddingmask)#Without key_padding_mask the returned weight is tensor([[[0.1070, 0.8930], [0.1956, 0.8044]]]; adding key_padding_mask yields a weight tensor([[[0.0422, 0.9578], [0.0821, 0.9179]]], which does not match the behavior described in the document. print(out1) ``` ![image](https://github.com/user-attachments/assets/985e8af9-0a17-4f0e-af28-8f925c9b3258) ### Suggest a potential alternative/fix Update the description about key_padding_mask. cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @bhosmer @cpuhrsch @erichan1 @drisspg
module: nn,triaged
low
Minor
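The numbers in the report are in fact consistent with the float mask being added to the pre-softmax attention scores rather than to the keys: the unmasked weights [0.1070, 0.8930] correspond to a logit gap of about 2.122, and adding the mask [1, 2] to those logits reproduces [0.0422, 0.9578]. A pure-Python sketch of that interpretation (an illustration, not PyTorch internals):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def attn_weights(scores, float_mask):
    """A float key_padding_mask is added to the pre-softmax attention
    scores, not to the key tensor itself, so it rescales the attention
    weights instead of shifting the keys."""
    return softmax([s + m for s, m in zip(scores, float_mask)])

base = [0.0, 2.122]                     # logits whose softmax ≈ [0.107, 0.893]
print(attn_weights(base, [0.0, 0.0]))   # ≈ [0.107, 0.893] (the unmasked run)
print(attn_weights(base, [1.0, 2.0]))   # ≈ [0.042, 0.958] (the masked run)
```

So the fix is likely a doc wording change: "added to the corresponding key value" should read something like "added to the attention scores for the corresponding key".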
2,672,016,824
tauri
[bug] `tauri add` doesn't add the conditional inclusion for desktop vs mobile
If you follow the docs to add updater like `cargo tauri add updater`, it adds everything but you'll get this error: `error[E0433]: failed to resolve: use of undeclared crate or module tauri_plugin_updater`. The reason for this is because in the `Cargo.toml` it will add `tauri-plugin-updater` conditionally excluding mobile builds: ``` [target.'cfg(not(any(target_os = "android", target_os = "ios")))'.dependencies] tauri-plugin-updater = "2.0.2" ``` Thus, the program cannot find tauri-plugin-updater when referenced in the `lib.rs` builder. I fixed this by doing: ``` let mut builder = tauri::Builder::default() .plugin(tauri_plugin_dialog::init()) .plugin(tauri_plugin_fs::init()) .plugin(tauri_plugin_os::init()) .invoke_handler(tauri::generate_handler![ ... ]); #[cfg(not(any(target_os = "android", target_os = "ios")))] { builder = builder.plugin(tauri_plugin_updater::Builder::new().build()); } ``` One option is we can update the install script to also do this by default.
type: bug,status: needs triage
low
Critical
2,672,022,299
PowerToys
JSON viewer and parser
### Description of the new feature / enhancement A JSON viewer, parser and formatter. ### Scenario when this would be used? When we copy a complex JSON value, we want to expand and pretty-print it the way a dedicated JSON tool does, and sometimes we also need to convert it to XML. ### Supporting information _No response_
Product-Advanced Paste
low
Minor
2,672,037,643
excalidraw
web embed
I hope you can allow the 127.0.0.1 or localhost IP to be used in web embeds.
enhancement,whitelist
low
Minor
2,672,043,266
godot
Parameter audio/driver/output_latency typed as INT is too restrictive to adjust the buffer size.
### Tested versions - Reproducible in 4.4 dev4 and previous releases ### System information Windows 11 ### Issue description I use a lot of audio tools in combination with Godot : - Audio interfaces like RME Babyface Pro fs or Motu M2 - Qjackctl for Windows - Voicemeeter - BiasFx or BYOD All these tools let me choose the buffer size to reduce the latency in audio input and output. For my setup, a buffer size of 256 is a good compromise for audio quality and audio latency. So, when I try to adjust Godot parameter "audio/driver/output_latency" to set a similar buffer size, it is not possible to obtain a buffer size of 256, due to the input value rounded as integer. Here are the possible values obtained when I modify this parameter : 0 -> Not possible automatically set to 1 1 -> WASAPI: audio buffer frames: 128 calculated latency: 2ms 2 -> WASAPI: audio buffer frames: 128 calculated latency: 2ms 3 -> WASAPI: audio buffer frames: 144 calculated latency: 3ms 4 -> WASAPI: audio buffer frames: 192 calculated latency: 4ms 5 -> WASAPI: audio buffer frames: 240 calculated latency: 5ms 6 -> WASAPI: audio buffer frames: 288 calculated latency: 6ms 7 -> WASAPI: audio buffer frames: 336 calculated latency: 7ms 8 -> WASAPI: audio buffer frames: 384 calculated latency: 8ms 9 -> WASAPI: audio buffer frames: 432 calculated latency: 9ms 10 -> WASAPI: audio buffer frames: 480 calculated latency: 10ms > 10 -> WASAPI: audio buffer frames: 480 calculated latency: 10ms It is difficult to set the buffer size in Godot because we have to do a calculation based on the latency in ms and the default sample rate of audio files. There is no GDScript function to get back the buffer size, and we have to use the verbose mode to obtain this information. Such a function is necessary, because there can be a difference between the desired latency and the latency restricted by the audio driver. 
I can play with the source code for testing purposes during my development process, but if I release the game, the end users won't be able to adjust this kind of setting to reduce the audio latency consistently with their hardware and OS capabilities. ### Steps to reproduce Change the parameter value and run Godot in verbose mode to display the message that contains the buffer size and the calculated latency : "WASAPI: audio buffer frames: 480 calculated latency: 10ms" ### Minimal reproduction project (MRP) Not necessary, running Godot editor in verbose mode is enough.
enhancement,topic:audio
low
Minor
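The reported table is consistent with a simple model: the integer millisecond setting is converted to frames at a 48 kHz mix rate and clamped to between 128 and 480 frames. A hedged pure-Python reconstruction (inferred from the table in the report, not from Godot's source), which also shows why 256 frames is unreachable:

```python
def wasapi_buffer_frames(latency_ms: int, mix_rate: int = 48000) -> int:
    """Illustrative model of the buffer sizes reported in the issue: the
    latency setting is an integer number of milliseconds, converted to
    frames at the mix rate, clamped below by 128 frames and above by 10 ms."""
    latency_ms = max(1, min(latency_ms, 10))
    return max(128, mix_rate * latency_ms // 1000)

for ms in range(1, 11):
    print(ms, "->", wasapi_buffer_frames(ms))
# 256 frames (~5.33 ms at 48 kHz) falls between 5 -> 240 and 6 -> 288,
# so it is unreachable with an integer millisecond setting.
```

This reproduces every row of the table (1 and 2 ms both clamp to 128 frames, values above 10 stay at 480), which supports the request for a float-typed setting or a frames-based one.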
2,672,063,333
deno
`glob` does not exist in `node:fs` and `node:fs/promises`
Version: Deno 2.0.6 The Deno docs specify that [`glob`](https://docs.deno.com/api/node/fs/~/glob) and [`globSync`](https://docs.deno.com/api/node/fs/~/globSync) should be exported from `node:fs` and [`glob`](https://docs.deno.com/api/node/fs/promises/~/glob) from `node:fs/promises`, but these seem to not be exported or defined. ``` error: Uncaught SyntaxError: The requested module 'node:fs' does not provide an export named 'glob' import { glob } from "node:fs"; ^ ``` ``` error: Uncaught SyntaxError: The requested module 'node:fs' does not provide an export named 'globSync' import { globSync } from "node:fs"; ^ ``` ``` error: Uncaught SyntaxError: The requested module 'node:fs/promises' does not provide an export named 'glob' import { glob } from "node:fs/promises"; ^ ``` ```ts import fs from "node:fs"; import fsp from "node:fs/promises"; console.log(fs.glob, fs.globSync, fsp.glob); // undefined undefined undefined ``` edit: also, if Deno is actually supposed to add `glob` and `globSync`, shouldn't an equivalent function be available from Deno's built-in filesystem api? it seems strange to _only_ add it to `node:fs`
bug,node compat
low
Critical
2,672,084,471
angular
Unable to Clear Cached Data in Angular-Generated Service Worker
### Which @angular/* package(s) are the source of the bug? service-worker ### Is this a regression? No ### Description I am using the Angular Service Worker, and it works well overall. However, I encountered a potential data leak issue when caching API responses. Here’s the scenario: If I cache an API response (e.g., abc.com/api/posts) for User A, then log out and log in as User B, and switch to offline mode before fetching the API for User B, I can still see the cached data belonging to User A. This is a significant security concern, as it exposes sensitive data across user sessions. To address this issue, I decided to implement a custom service worker to extend the functionality of Angular’s generated service worker. My custom implementation looks like this: ```javascript importScripts('./ngsw-worker.js'); self.addEventListener('message', async (event) => { const cacheNames = await self.caches.keys(); for (const name of cacheNames) { await self.caches.delete(name); } }); ``` With this approach, I expect that sending a message (e.g., after logging out) will clear all cached data from the browser's cache storage: ```javascript navigator.serviceWorker.controller.postMessage({ type: 'CLEAR_CACHE', }); ``` While this solution successfully clears the visible data in the browser’s cache storage (verified via DevTools), an issue persists: if I refresh the page and switch to offline mode, the previously cached data is still accessible. This suggests that Angular’s ngsw-worker.js maintains an internal cache that I cannot access or clear using my current implementation. ### Please provide a link to a minimal reproduction of the bug _No response_ ### Please provide the exception or error you saw ```true ``` ### Please provide the environment you discovered this bug in (run `ng version`) ```true Angular CLI: 17.2.3 Node: 20.10.0 Package Manager: npm 10.2.3 OS: Mac os Angular: 17.2.3 ... animations, common, compiler, compiler-cli, core, forms ... 
platform-browser, platform-browser-dynamic, router Package Version --------------------------------------------------------- @angular-devkit/architect 0.1702.3 @angular-devkit/build-angular 17.2.3 @angular-devkit/core 17.2.3 @angular-devkit/schematics 17.2.3 @angular/cli 17.2.3 @schematics/angular 17.2.3 rxjs 7.5.6 typescript 5.3.3 zone.js 0.14.4 ``` ### Anything else? _No response_
area: service-worker
low
Critical
2,672,090,701
deno
`deno install` can't handle code and cache on different devices
Version: Deno 2.0.6 My code folder (`/Volumes/Code`) is a separated volume (so that it can be case-sensitive). When I was following https://deno.com/blog/build-astro-with-deno and ran `deno install --allow-scripts`: ``` Failed to clone dir "/Users/shiroki/Library/Caches/deno/npm/registry.npmjs.org/sisteransi/1.0.5" to "/Volumes/Code/osa/website/node_modules/.deno/sisteransi@1.0.5/node_modules/sisteransi" via clonefile: Cross-device link (os error 18) ...... Failed to clone dir "/Users/shiroki/Library/Caches/deno/npm/registry.npmjs.org/astro/4.16.13" to "/Volumes/Code/osa/website/node_modules/.deno/astro@4.16.13/node_modules/astro" via clonefile: Cross-device link (os error 18) error: script 'install' in 'sharp@0.33.5' failed with exit code 1 stderr: error: [ERR_MODULE_NOT_FOUND] Cannot find module 'file:///Volumes/Code/osa/website/node_modules/.deno/sharp@0.33.5/node_modules/sharp/install/check' imported from 'file:///Volumes/Code/osa/website/node_modules/.deno/sharp@0.33.5/node_modules/sharp' error: failed to run scripts for packages: sharp@0.33.5 ``` The `package.json` FYI: ```json { "name": "website", "type": "module", "version": "0.0.1", "scripts": { "dev": "astro dev", "start": "astro dev", "build": "astro check && astro build", "preview": "astro preview", "astro": "astro" }, "dependencies": { "astro": "^4.16.13", "@astrojs/check": "^0.9.4", "typescript": "^5.6.3" } } ``` What I found might be correlated: #20246 #19879
windows,triage required 👀
low
Critical
2,672,114,756
PowerToys
Zoom button issue
### Microsoft PowerToys version v0.86.0 ### Installation method GitHub ### Running as admin No ### Area(s) with issue? Keyboard Manager ### Steps to reproduce I have a Keymaster v1 keyboard where a `Zoom` (zoom in) button sits in place of the `Insert` button. It acts as `Win+=`, but the Key Remap function identifies it as `=`, which is not correct. With this re-assignment I cannot type a plain equals sign, which is not what I expect. `Shift + Zoom` on this keyboard does the same as `Win + -` (zoom out) and kept working that way even after I reassigned this button. ### ✔️ Expected Behavior The key mapper should correctly identify the key and re-assign it to the proper (selected) value ### ❌ Actual Behavior Identification of the zoom key did not work properly ### Other Software The issue is not related to other software. I do not have any re-mappers or key managers. My OS is Windows 10 Pro
Issue-Bug,Needs-Triage
low
Minor
2,672,175,641
deno
Add an interactive way to upgrade packages
Yarn has this command to upgrade packages: https://yarnpkg.com/cli/upgrade-interactive Example: ![image](https://github.com/user-attachments/assets/c95b0dd4-bdcf-49b8-9a79-109ec5692d76) It would be nice if deno had something like this.
feat,install
low
Major
2,672,185,057
three.js
Line2 dash scale problem on perspective camera
### Description The dash scale changes when a line crosses the camera's near plane. This is because the shader trims the vertex position but keeps the line distance the same. https://github.com/mrdoob/three.js/blob/beab9e845f9e5ae11d648f55b24a0e910b56a85a/examples/jsm/lines/LineMaterial.js#L127 Possible solution: we should trim the line distance too ( [see sample](https://codesandbox.io/p/sandbox/rn458y) ): ### Reproduction steps 1. Build a Line2 with a long segment that has one point outside the camera's near space ( -100, 100 on Z, for example ) 2. Enable dashes in LineMaterial 3. Fly around the line ### Code /// ### Live example https://codesandbox.io/p/sandbox/rn458y Video: https://drive.google.com/file/d/1znYC56d1LTgS-0T9wuHUw3rRETozCyNX/view?usp=drive_link Note: green is the fixed material instance, purple is the original. ### Screenshots <img width="835" alt="image" src="https://github.com/user-attachments/assets/6bce0e70-45f8-4a23-86c2-09505c756f03"> ### Version any ### Device _No response_ ### Browser _No response_ ### OS _No response_
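The proposed fix can be expressed as a one-line interpolation: when near-plane clipping moves a segment endpoint from `p0` toward `p1` by some factor `alpha`, the per-vertex line distance must be interpolated by the same `alpha`, otherwise the dash pattern stretches. A minimal sketch (plain JS mirroring what the GLSL change would do; the function name is illustrative):

```javascript
// dist0/dist1 are the accumulated line distances at the two segment endpoints;
// alpha is the clip interpolation factor applied to the vertex position.
// Trimming the distance by the same alpha keeps the dash scale stable.
function trimLineDistance(dist0, dist1, alpha) {
  return dist0 + (dist1 - dist0) * alpha;
}
```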
Addons
low
Minor
2,672,227,947
PowerToys
Keyboard Manager: support distinguishing between short and long key presses
### Description of the new feature / enhancement For the "keys" functionality in the "Keyboard Manager", is there any development plan to support the differentiation between short and long key presses? ### Scenario when this would be used? If this feature is supported, I could use the following key mappings as an example: 1. Short press of "Caps Lock" mapped to "Esc" 2. Long press of "Caps Lock" mapped to "Alt (Left)" I think such key mappings would be immensely helpful for users who frequently use the Alt key. For instance, when using the GlazeWM application to manage windows, the long press of Caps Lock mapped to Alt would allow for easy access to many shortcuts. Additionally, I would like to mention that the support for this feature could greatly improve the health of my little finger! :) ### Supporting information _No response_
Needs-Triage
low
Minor
2,672,249,655
ollama
llama3.2-vision model quantization request
“q5_K_M” is not among the quantizations in the ‘llama3.2-vision’ model in the Ollama library. Is it possible to add it?
model request
low
Minor
2,672,258,420
kubernetes
apiserver timeouts and random shutdown
### What happened? I'm running a k8s cluster using `kind`. Currently, there are ~3k agents connected. But in my tests, I frequently see connection timeouts, or connection reset messages from the `apiserver`. Example: ``` ❯ kubectl get pods Get "https://REDACTED:6443/api/v1/namespaces/default/pods?limit=500": net/http: TLS handshake timeout - error from a previous attempt: read tcp 10.219.21.215:58434->10.219.21.215:6443: read: connection reset by peer ``` Another issue is that the apiserver seems to shutdown without any obvious error message. Here are the logs from apiserver captured when I saw the 'connection reset' error, also indicates apiserver shutdown: ``` E1119 13:37:45.788062 1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError" E1119 13:37:45.788243 1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 156.135µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError" E1119 13:37:45.789372 1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError" E1119 13:37:45.791612 1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError" E1119 13:37:45.792698 1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.6593ms" method="POST" path="/api/v1/namespaces/default/events" result=null E1119 13:37:49.739514 1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError" E1119 13:37:49.739573 1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 24.511µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError" E1119 13:37:49.739805 1 
status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError" E1119 13:37:49.740573 1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError" E1119 13:37:49.740921 1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError" E1119 13:37:49.741688 1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError" E1119 13:37:49.741964 1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError" E1119 13:37:49.742969 1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.77214ms" method="PATCH" path="/api/v1/namespaces/default/events/58a19fee-2f57-4f96-a4f6-0e70fba87637.1809622ae123cfd8" result=null E1119 13:37:49.743003 1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError" E1119 13:37:49.744130 1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.933539ms" method="GET" path="/api/v1/nodes/58a19fee-2f57-4f96-a4f6-0e70fba87637" result=null E1119 13:39:54.379155 1 compact.go:124] etcd: endpoint ([https://127.0.0.1:2379]) compact failed: etcdserver: mvcc: required revision has been compacted I1119 13:40:03.367380 1 controller.go:128] Shutting down kubernetes service endpoint reconciler W1119 13:40:03.391092 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [] I1119 13:40:03.411543 1 controller.go:86] Shutting down OpenAPI V3 AggregationController I1119 13:40:03.411720 1 
cluster_authentication_trust_controller.go:466] Shutting down cluster_authentication_trust_controller controller I1119 13:40:03.411735 1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting I1119 13:40:03.412052 1 available_controller.go:440] Shutting down AvailableConditionController I1119 13:40:03.412063 1 controller.go:132] Ending legacy_token_tracking_controller I1119 13:40:03.412067 1 controller.go:133] Shutting down legacy_token_tracking_controller I1119 13:40:03.412075 1 system_namespaces_controller.go:76] Shutting down system namespaces controller I1119 13:40:03.412082 1 autoregister_controller.go:168] Shutting down autoregister controller I1119 13:40:03.412094 1 gc_controller.go:91] Shutting down apiserver lease garbage collector I1119 13:40:03.412101 1 apf_controller.go:389] Shutting down API Priority and Fairness config worker ``` ### What did you expect to happen? apiserver shouldn't shutdown, and should work without timeout or connection-reset messages. ### How can we reproduce it (as minimally and precisely as possible)? I don't have a minimal reproducer right now. I can describe my set-up in detail, if required. ### Anything else we need to know? 
_No response_ ### Kubernetes version <details> ```console ❯ kubectl version Client Version: v1.31.0 Kustomize Version: v5.4.2 Server Version: v1.31.0 ``` </details> ### Cloud provider <details> Not using Cloud </details> ### OS version <details> ```console # On Linux: ❯ cat /etc/os-release NAME="Red Hat Enterprise Linux" VERSION="8.10 (Ootpa)" ID="rhel" ID_LIKE="fedora" VERSION_ID="8.10" PLATFORM_ID="platform:el8" PRETTY_NAME="Red Hat Enterprise Linux 8.10 (Ootpa)" ANSI_COLOR="0;31" CPE_NAME="cpe:/o:redhat:enterprise_linux:8::baseos" HOME_URL="https://www.redhat.com/" DOCUMENTATION_URL="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8" BUG_REPORT_URL="https://issues.redhat.com/" REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8" REDHAT_BUGZILLA_PRODUCT_VERSION=8.10 REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux" REDHAT_SUPPORT_PRODUCT_VERSION="8.10" ❯ uname -a Linux born22.toa.des.co 4.18.0-553.22.1.el8_10.x86_64 #1 SMP Wed Sep 11 18:02:00 EDT 2024 x86_64 x86_64 x86_64 GNU/Linux ``` </details> ### Install tools <details> </details> ### Container runtime (CRI) and version (if applicable) <details> </details> ### Related plugins (CNI, CSI, ...) and versions (if applicable) <details> </details>
kind/bug,sig/scalability,sig/api-machinery,triage/accepted
low
Critical
2,672,318,750
PowerToys
Feature request: batch move selected files into a newly created folder
### Description of the new feature / enhancement There was a piece of software called 'Files2Folder' that provided this functionality. It was integrated into the right-click menu. When I selected multiple files and clicked on this option, it would create a folder in the current directory and move all the selected files into it. I found this feature very convenient. However, it's a pity that this software can no longer run on Windows 11. I wonder if a similar feature could be introduced. ### Scenario when this would be used? When you need to manually organize a moderate number of files, this feature will be very convenient. It can also greatly simplify everyday workflows. ### Supporting information _No response_
Needs-Triage
low
Minor
2,672,328,922
go
proposal: cmd/go: module aliases
### Proposal Details Importing a Go module is tied to a DNS domain. Should the developer of a module lose ownership of that domain, the module must be migrated to a different domain. All consumers must follow at the same time, otherwise a binary might end up with a mixture of old and new code under different names, which can cause failures (new module A is used and configured by the binary, old module B used by some dependency is not). Kubernetes is [currently facing that challenge](https://github.com/kubernetes/kubernetes/issues/127966#issuecomment-2404126185) because the `.io` domain might go away and all modules are called `k8s.io/<something>`. It is not certain that anything needs to be done, but if something needs to be done, then it might take years to be ready - so let's discuss now. Can Go support such a transition more gracefully? One possibility would be to let a module define one or more aliases in its `go.mod`: ``` module k8s.io/api alias k8s.dev/api ``` If such a module gets imported via the alias, the compiler should not complain and should treat it as if it had been imported under the official module name. Nothing would change in the `vendor` directory (in particular not some mass renaming of files). On https://pkg.go.dev, the alias could be a redirect to the original name. Deep links into the documentation of a package remain valid. Obviously this only makes sense if the original domain is guaranteed to disappear or not to be used anymore. If the module author transfers a domain, they have to migrate and cannot use the old module name anymore because the new owner of the domain might decide to publish its own modules there. Whether Go should detect such a conflict is TBD.
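The resolution rule the proposal describes can be sketched in a few lines. Note the `alias` directive does not exist in `go.mod` today; this is a hypothetical illustration of "an import via any declared alias is treated as the canonical module":

```javascript
// Hypothetical alias resolution: if the imported module path equals the
// canonical path or any declared alias, resolve to the canonical path;
// otherwise leave it untouched.
function resolveModule(imported, canonical, aliases) {
  return imported === canonical || aliases.includes(imported)
    ? canonical
    : imported;
}
```

Under this rule, a dependency importing `k8s.dev/api` and a binary importing `k8s.io/api` would share one module identity instead of ending up with the old/new code mixture described above.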
Proposal,modules
medium
Critical
2,672,344,190
transformers
The usage of the "forced_decoder_ids" parameter
### Feature request How to use the "forced_decoder_ids" parameter for decoder-only models? This parameter seems to be deprecated in the latest version. ![微信图片_20241119222811](https://github.com/user-attachments/assets/a5f2d44f-d98f-4959-85ae-e64489e3b9df) ### Motivation This is an important function. ### Your contribution This is an important function.
Feature request
low
Minor
2,672,351,586
youtube-dl
Bluesky + Mastodon Support
## Checklist - [ ] I'm reporting a new site support request - [ ] I've verified that I'm running youtube-dl version **2021.12.17** - [ ] I've checked that all provided URLs are alive and playable in a browser - [ ] I've checked that none of provided URLs violate any copyrights - [ ] I've searched the bugtracker for similar site support requests including closed ones ## Example URLs - Single video: https://mastodon.social/@EUCommission@ec.social-network.europa.eu/113508672709054404 - Single video: https://bsky.app/profile/0x0.boo/post/3lbc5bzun4k2p ## Description Would love to have a download option for those sites.
site-support-request
low
Critical
2,672,382,079
PowerToys
PowerToys MWBHelper service crashed unexpectedly
### Microsoft PowerToys version 0.86.0 ### Installation method Microsoft Store ### Running as admin Yes ### Area(s) with issue? Mouse Without Borders ### Steps to reproduce This one is difficult to reproduce because it requires specific circumstances that don't seem easy to reproduce. I'm of course just saying that because hopefully it'll jinx the bug and I'll be able to reproduce it once more. Basically, I forgot to enable MWB as a service because I forgot to check that box in Settings for this PC - I had just set it up yesterday to run in sync with my two other PC's. Did that have something to do with it? Maybe. But, I also found that these errors in the Event Viewer were very specific and could be used to trace this bug, even if I don't know yet how to reproduce it. For both PowerToysSettings.exe and MWBHelper.exe, the Windows Application Event Log showed two Information level event with ID 1001, immediately followed by an error that displayed event ID 1002 with the Hanging Events task category. Since the 1001 information events are a little big, I attached those four reports as text files. But, here is what the 1002 error looked like: ``` The program PowerToys.MouseWithoutBordersHelper.exe version 0.86.0.0 stopped interacting with Windows and was closed. To see if more information about the problem is available, check the problem history in the Security and Maintenance control panel. ``` Similarly, PowerToys.Settings.exe also has a similar event in the same format as above, just with the different executable name. Subsequently, the User Profile Service restarted immediately after with event ID 1531, and the Winlogon service had an Information level event with ID 6003 that said: ``` The winlogon notification subscriber <SessionEnv> was unavailable to handle a critical notification event. 
``` Clearly, something happened with these applications specifically to cause them to crash this way, and according to the logs, something on the Kernel level crashed - though, it is unclear what exactly (shown in log 2). The unusual thing about it was that PowerToys.Settings.exe was also one of the faulting applications in the logs, and I believe the reason for that was because I tried opening the Settings pane as soon as MWBHelper crashed, which Windows notified me of, so I became aware of the crash and tried to see if the service was still functioning. In fact, it seems AdvancedPaste also crashed similarly right after PowerToys.Settings crashed, with the faulting module named CoreMessagingXP.dll, which makes this potentially a cascading failure. It's important to note, though, that AdvancedPaste crashed, whereas the previous two applications were Hanging Events. Both the PowerToysSettings and MWBHelper executables were ghost processes on my machine and couldn't be stopped even by running an administrative terminate call via WMIC. I've learned just enough from this bug that, if I were able to reproduce it, I could attach to the process with WinDbg and find the faulting kernel handle/thread/etc. I would be curious to find it and see which one it is. ### ✔️ Expected Behavior Nothing to crash or hang ### ❌ Actual Behavior [WER log 0.txt](https://github.com/user-attachments/files/17816714/WER.log.0.txt) [WER log 1.txt](https://github.com/user-attachments/files/17816765/WER.log.1.txt) [WER log 2.txt](https://github.com/user-attachments/files/17816766/WER.log.2.txt) [WER log 3.txt](https://github.com/user-attachments/files/17816715/WER.log.3.txt) ### Other Software _No response_
Issue-Bug,Needs-Triage
low
Critical
2,672,394,259
transformers
Flex attention + refactor
Opening this to add support for all models following #34282 Lets bring support for flex attention to more models! 🤗 - [x] Gemma2 It would be great to add the support for more architectures such as - [ ] Qwen2 - [ ] Llama - [ ] Gemma - [ ] QwenVl - [ ] Mistral - [ ] Clip ... and many more For anyone who wants to contribute just open a PR and link it to this issue, and ping me for a review!! 🤗
PyTorch,Feature request,Good Difficult Issue
low
Major
2,672,491,345
tauri
[bug] Android code signing guide is wrong
### Describe the bug I want to build and deploy my android app, but when I ran the code I got an error: "App Not Installed As Package Appears To Be Invalid" I thought this was because of the fact that the app was unsigned, so I started to follow [the android signin guide](https://v2.tauri.app/distribute/sign/android/) but when building the app now I got a new error in the build.gradle.kts on line 34: "Malformed \uxxxx encoding." ### Reproduction this is a standard tauri app, made using ```npm create tauri@latest``` (with vite in the frontend) with no extra packages, only the shell plugin and the notification plugin being used. I also followed the guide and create keystore.properties in the android directory: ``` password=(here is my Alias password) keyAlias=upload storeFile=(here is my absulote path to the jks file) ``` I also changed the build.gradle.kts according to the guide: ``` import java.util.Properties import java.io.FileInputStream plugins { id("com.android.application") id("org.jetbrains.kotlin.android") id("rust") } val tauriProperties = Properties().apply { val propFile = file("tauri.properties") if (propFile.exists()) { propFile.inputStream().use { load(it) } } } android { compileSdk = 34 namespace = "com.israir.app" defaultConfig { manifestPlaceholders["usesCleartextTraffic"] = "false" applicationId = "com.israir.app" minSdk = 24 targetSdk = 34 versionCode = tauriProperties.getProperty("tauri.android.versionCode", "1").toInt() versionName = tauriProperties.getProperty("tauri.android.versionName", "1.0") } signingConfigs { create("release") { val keystorePropertiesFile = rootProject.file("keystore.properties") val keystoreProperties = Properties() if (keystorePropertiesFile.exists()) { keystoreProperties.load(FileInputStream(keystorePropertiesFile)) } keyAlias = keystoreProperties["keyAlias"] as String keyPassword = keystoreProperties["password"] as String storeFile = file(keystoreProperties["storeFile"] as String) storePassword = 
keystoreProperties["password"] as String } } buildTypes { getByName("debug") { manifestPlaceholders["usesCleartextTraffic"] = "true" isDebuggable = true isJniDebuggable = true isMinifyEnabled = false packaging { jniLibs.keepDebugSymbols.add("*/arm64-v8a/*.so") jniLibs.keepDebugSymbols.add("*/armeabi-v7a/*.so") jniLibs.keepDebugSymbols.add("*/x86/*.so") jniLibs.keepDebugSymbols.add("*/x86_64/*.so") } } getByName("release") { signingConfig = signingConfigs.getByName("release") isMinifyEnabled = true proguardFiles( *fileTree(".") { include("**/*.pro") } .plus(getDefaultProguardFile("proguard-android-optimize.txt")) .toList().toTypedArray() ) } } kotlinOptions { jvmTarget = "1.8" } buildFeatures { buildConfig = true } } rust { rootDirRel = "../../../" } dependencies { implementation("androidx.webkit:webkit:1.6.1") implementation("androidx.appcompat:appcompat:1.6.1") implementation("com.google.android.material:material:1.8.0") testImplementation("junit:junit:4.13.2") androidTestImplementation("androidx.test.ext:junit:1.1.4") androidTestImplementation("androidx.test.espresso:espresso-core:3.5.0") } apply(from = "tauri.build.gradle.kts") ``` also, I tested and it seems gradle does find my jks file, but fails to read it (also, the password is 100% accurate, since I deleted the old jks file and created a new one). 
### Expected behavior I just want to compile the app and deliver it to the client ): ### Full `tauri info` output ```text [✔] Environment - OS: Windows 10.0.22631 x86_64 (X64) ✔ WebView2: 130.0.2849.80 ✔ MSVC: Visual Studio Community 2022 ✔ rustc: 1.81.0 (eeb90cda1 2024-09-04) ✔ cargo: 1.81.0 (2dbb1af80 2024-08-20) ✔ rustup: 1.27.1 (54dd3d00f 2024-04-24) ✔ Rust toolchain: stable-x86_64-pc-windows-gnu (default) - node: 20.9.0 - pnpm: 9.4.0 - yarn: 1.22.22 - npm: 10.8.3 - bun: 1.1.2 - deno: deno 2.0.6 [-] Packages - tauri 🦀: 2.1.1 - tauri-build 🦀: 2.0.3 - wry 🦀: 0.47.0 - tao 🦀: 0.30.8 - tauri-cli 🦀: 1.5.6 - @tauri-apps/api : 2.1.1 - @tauri-apps/cli : 2.1.0 [-] Plugins - tauri-plugin-notification 🦀: 2.0.1 - @tauri-apps/plugin-notification : 2.0.0 - tauri-plugin-shell 🦀: 2.0.2 - @tauri-apps/plugin-shell : 2.0.1 [-] App - build-type: bundle - CSP: unset - frontendDist: ../dist - devUrl: http://localhost:1420/ - framework: React - bundler: Vite ``` ### Stack trace ```text > israir@0.1.0 build > tsc && vite build vite v5.4.11 building for production... ✓ 1945 modules transformed. dist/index.html 0.49 kB │ gzip: 0.31 kB dist/assets/index-D-dDrWGl.css 21.52 kB │ gzip: 4.47 kB dist/assets/index-Bg0t2eB_.js 829.82 kB │ gzip: 223.29 kB (!) Some chunks are larger than 500 kB after minification. Consider: - Using dynamic import() to code-split the application - Use build.rollupOptions.output.manualChunks to improve chunking: https://rollupjs.org/configuration-options/#output-manualchunks - Adjust chunk size limit for this warning via build.chunkSizeWarningLimit. 
✓ built in 29.00s Compiling israir v0.1.0 (C:\Users\raisf\Desktop\projects\Israir\src-tauri) Finished `release` profile [optimized] target(s) in 1m 53s Info symlinking lib "C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\target\\aarch64-linux-android\\release\\libisrair_lib.so" in jniLibs dir "C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\gen/android\\app/src/main/jniLibs/arm64-v8a" Info "C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\target\\aarch64-linux-android\\release\\libisrair_lib.so" requires shared lib "libandroid.so" Info "C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\target\\aarch64-linux-android\\release\\libisrair_lib.so" requires shared lib "libdl.so" Info "C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\target\\aarch64-linux-android\\release\\libisrair_lib.so" requires shared lib "liblog.so" Info "C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\target\\aarch64-linux-android\\release\\libisrair_lib.so" requires shared lib "libm.so" Info "C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\target\\aarch64-linux-android\\release\\libisrair_lib.so" requires shared lib "libc.so" Info symlink at "C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\gen/android\\app/src/main/jniLibs/arm64-v8a\\libisrair_lib.so" points to "C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\target\\aarch64-linux-android\\release\\libisrair_lib.so" Info symlink at "C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\gen/android\\app/src/main/jniLibs/armeabi-v7a\\libisrair_lib.so" points to "C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\target\\armv7-linux-androideabi\\release\\libisrair_lib.so" Info symlink at "C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\gen/android\\app/src/main/jniLibs/x86\\libisrair_lib.so" points to "C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\target\\i686-linux-android\\release\\libisrair_lib.so" Info symlink at 
"C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\gen/android\\app/src/main/jniLibs/x86_64\\libisrair_lib.so" points to "C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\target\\x86_64-linux-android\\release\\libisrair_lib.so" FAILURE: Build failed with an exception. * Where: Build file 'C:\Users\raisf\Desktop\projects\Israir\src-tauri\gen\android\app\build.gradle.kts' line: 32 * What went wrong: Malformed \uxxxx encoding. * Try: > Run with --stacktrace option to get the stack trace. > Run with --info or --debug option to get more log output. > Run with --scan to get full insights. > Get more help at https://help.gradle.org. BUILD FAILED in 2s Failed to assemble APK: command ["C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\gen/android\\gradlew.bat", "--project-dir", "C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\gen/android"] exited with code 1: command ["C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\gen/android\\gradlew.bat", "--project-dir", "C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\gen/android"] exited with code 1 Error Failed to assemble APK: command ["C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\gen/android\\gradlew.bat", "--project-dir", "C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\gen/android"] exited with code 1: command ["C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\gen/android\\gradlew.bat", "--project-dir", "C:\\Users\\raisf\\Desktop\\projects\\Israir\\src-tauri\\gen/android"] exited with code 1 ``` ### Additional context _No response_
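A likely cause of the "Malformed \uxxxx encoding" failure (an assumption, not confirmed in the report): Java's `.properties` format treats `\` as an escape character, so a raw Windows path pasted into `storeFile` can produce a `\u` sequence that is not followed by four hex digits, which is exactly the error `Properties.load` throws. Using forward slashes (or doubled backslashes) in `keystore.properties` sidesteps this. A tiny sketch of the conversion:

```javascript
// Convert a Windows-style path to a form that is safe inside a Java
// .properties file, where backslash starts an escape sequence.
function toPropertiesPath(windowsPath) {
  return windowsPath.replace(/\\/g, '/');
}
```

So `storeFile=C:\Users\me\upload.jks` would become `storeFile=C:/Users/me/upload.jks`; Java and Gradle both accept forward slashes in file paths on Windows.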
type: bug,status: needs triage
low
Critical
2,672,575,917
vscode
Focus movement with Attached file pill should be improved
1. Open chat view, notice current file is attached and rendered as a pill 2. Press <kbd>Tab</kbd> -> focus is nicely moved to the pill 3. Press <kbd>Tab</kbd> -> focus moves to the eye 4. Press space on the eye. Focus gets lost 🐛 Instead focus should stay on the pill 5. Press <kbd>esc</kbd> when focus is on the pill. It should move focus back to chat like the find widget does 🐛 In addition we need to make it easier for users to remove / disable items. This can follow the breakpoints pattern. * When focus is on the pill, <kbd>cmd</kbd>+<kbd>backspace</kbd> on macOS and <kbd>delete</kbd> on Windows and Linux Fixing this should make it easier for a keyboard-centric user to disable / enable attached files. And move focus back to input. fyi @ulugbekna @meganrogge
bug,accessibility,chat
low
Minor
2,672,587,934
nvm
`nvm install Argon` etc works, and should not
#### Operating system and version: Ubuntu server 24.04.1, x86-64, clean install, removed snapd and unattended-updates #### `nvm debug` output: <details> ```sh nvm debug nvm --version: v0.40.1 $SHELL: /bin/bash $SHLVL: 1 whoami: 'user' ${HOME}: /home/user ${NVM_DIR}: '${HOME}/.nvm' ${PATH}: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin $PREFIX: '' ${NPM_CONFIG_PREFIX}: '' $NVM_NODEJS_ORG_MIRROR: '' $NVM_IOJS_ORG_MIRROR: '' shell version: 'GNU bash, version 5.2.21(1)-release (x86_64-pc-linux-gnu)' uname -a: 'Linux 6.8.0-48-generic #48-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 27 14:04:52 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux' checksum binary: 'sha256sum' OS version: Ubuntu 24.04.1 LTS awk: /usr/bin/awk, GNU Awk 5.2.1, API 3.2, PMA Avon 8-g1, (GNU MPFR 4.2.1, GNU MP 6.3.0) curl: /usr/bin/curl, curl 8.5.0 (x86_64-pc-linux-gnu) libcurl/8.5.0 OpenSSL/3.0.13 zlib/1.3 brotli/1.1.0 zstd/1.5.5 libidn2/2.3.7 libpsl/0.21.2 (+libidn2/2.3.7) libssh/0.10.6/openssl/zlib nghttp2/1.59.0 librtmp/2.3 OpenLDAP/2.6.7 wget: /usr/bin/wget, GNU Wget 1.21.4 built on linux-gnu. 
git: /usr/bin/git, git version 2.43.0 grep: /usr/bin/grep (grep --color=auto), grep (GNU grep) 3.11 sed: /usr/bin/sed, sed (GNU sed) 4.9 cut: /usr/bin/cut, cut (GNU coreutils) 9.4 basename: /usr/bin/basename, basename (GNU coreutils) 9.4 rm: /usr/bin/rm, rm (GNU coreutils) 9.4 mkdir: /usr/bin/mkdir, mkdir (GNU coreutils) 9.4 xargs: /usr/bin/xargs, xargs (GNU findutils) 4.9.0 nvm current: none which node: which iojs: which npm: npm config get prefix: Command 'npm' not found, but can be installed with: sudo apt install npm npm root -g: Command 'npm' not found, but can be installed with: sudo apt install npm ``` </details> #### `nvm ls` output: <details> ```sh v22.11.0 default -> Jod (-> N/A) iojs -> N/A (default) unstable -> N/A (default) node -> stable (-> v22.11.0) (default) stable -> 22.11 (-> v22.11.0) (default) lts/* -> lts/jod (-> v22.11.0) lts/argon -> v4.9.1 (-> N/A) lts/boron -> v6.17.1 (-> N/A) lts/carbon -> v8.17.0 (-> N/A) lts/dubnium -> v10.24.1 (-> N/A) lts/erbium -> v12.22.12 (-> N/A) lts/fermium -> v14.21.3 (-> N/A) lts/gallium -> v16.20.2 (-> N/A) lts/hydrogen -> v18.20.5 (-> N/A) lts/iron -> v20.18.0 (-> N/A) lts/jod -> v22.11.0 ``` </details> #### How did you install `nvm`? curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash #### What steps did you perform? install nvm, execute nvm install 22 logout, login (over ssh, headless box) #### What happened? It downloaded node, but it doesn't seem to 'know' it. The node directory in ~/.nvm isn't added to PATH. #### What did you expect to happen? Being able to run node. #### Is there anything in any of your profile files that modifies the `PATH`? No. Virgin system. Additional info: I have tried older versions of the nvm script. Didn't help. I can run node, when I execute 'nvm use 22'. Then the PATH is changed, until next logout/login. As a test I added 'set -x' in .bashrc before the 3 added lines. 
The output on login: <details> ```sh iiiin++ export NVM_DIR=/home/user/.nvm ++ NVM_DIR=/home/user/.nvm ++ '[' -s /home/user/.nvm/nvm.sh ']' ++ . /home/user/.nvm/nvm.sh +++ NVM_SCRIPT_SOURCE=']' +++ '[' -z '' ']' +++ export NVM_CD_FLAGS= +++ NVM_CD_FLAGS= +++ nvm_is_zsh +++ '[' -n '' ']' +++ '[' -z /home/user/.nvm ']' +++ case $NVM_DIR in +++ unset NVM_SCRIPT_SOURCE +++ nvm_process_parameters +++ local NVM_AUTO_MODE +++ NVM_AUTO_MODE=use +++ '[' 0 -ne 0 ']' +++ nvm_auto use +++ local NVM_MODE +++ NVM_MODE=use +++ case "${NVM_MODE}" in +++ local VERSION +++ local NVM_CURRENT ++++ nvm_ls_current ++++ local NVM_LS_CURRENT_NODE_PATH +++++ command which node ++++ NVM_LS_CURRENT_NODE_PATH= ++++ nvm_echo none ++++ command printf '%s\n' none +++ NVM_CURRENT=none +++ '[' _none = _none ']' ++++ nvm_resolve_local_alias default ++++ nvm_echo ++++ command printf '%s\n' '' +++ VERSION=N/A +++ '[' -n N/A ']' +++ '[' _N/A '!=' _N/A ']' +++ return 0 ++ '[' -s /home/user/.nvm/bash_completion ']' ++ . /home/user/.nvm/bash_completion +++ command -v nvm +++ [[ -n '' ]] +++ complete -o default -F __nvm nvm + '[' -d /home/user/bin ']' + '[' -d /home/user/.local/bin ']' ``` </details>
bugs,pull request wanted,installing nvm
low
Critical
2,672,623,876
node
Add a `level` parameter to test runner diagnostics
maybe we should add a `level` parameter in diagnostics (i.e debug/info/warn/error) so reporters can implement coloring or other things _Originally posted by @MoLow in https://github.com/nodejs/node/pull/55911#discussion_r1847778597_
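A sketch of what a reporter could do with such a parameter. The `level` field is hypothetical (it does not exist on test runner diagnostic events today); the ANSI codes are standard terminal colors:

```javascript
// Map a hypothetical diagnostic level to an ANSI color; unknown levels
// pass through uncolored so existing diagnostics keep working.
const ANSI = { debug: '\x1b[90m', info: '\x1b[36m', warn: '\x1b[33m', error: '\x1b[31m' };

function colorize(message, level) {
  const code = ANSI[level] ?? '';
  return code ? `${code}${message}\x1b[0m` : message;
}
```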
feature request,test_runner
low
Critical
2,672,650,862
electron
How to use fetch() from utility process with self-signed certs / custom proxy settings?
### Preflight Checklist - [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/main/CONTRIBUTING.md) for this project. - [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/main/CODE_OF_CONDUCT.md) that this project adheres to. - [x] I have searched the [issue tracker](https://www.github.com/electron/electron/issues) for a feature request that matches the one I want to file, without success. ### Problem Description #### Context It is possible to use `net.fetch()` from a utility process. As stated [here](https://www.electronjs.org/docs/latest/api/net#netfetchinput-init) in the documentation: > This method will issue requests from the [default session](https://www.electronjs.org/docs/latest/api/session#sessiondefaultsession). To send a fetch request from another session, use [ses.fetch()](https://www.electronjs.org/docs/latest/api/session#sesfetchinput-init). So using `net.fetch()` in a utility process basically comes down to using `session.defaultSession.fetch()`. #### Requirement I need to use [session.setCertificateVerifyProc()](https://www.electronjs.org/docs/latest/api/session#sessetcertificateverifyprocproc) in order to accept self-signed certificates from a given certificate authority. However, I want that logic to be limited to the task performed by a utility process. So I would like to do the following in a utility process: - create a session that is specific to that utility process (using `session.fromPartition()`) - call `setCertificateVerifyProc()` on that specific session - to then call `fetch()` on that specific session #### Problem `session` is undefined in the context of a utility process. ### Proposed Solution Make `session` available in the context of a utility process. Note that this could maybe help reconcile the 2 different ways of setting proxy configuration (see below, this is error-prone). 
```js const config = { proxyRules }; // Set proxy settings for main process and renderer processes session.defaultSession.setProxy(config); // Set proxy settings for utility process app.setProxy(config); ``` ### Alternatives Considered NA ### Additional Information _No response_
enhancement :sparkles:
medium
Critical
2,672,692,055
PowerToys
Add a Group Policy Management tool
### Description of the new feature / enhancement A new Group Policy Management tool for PowerToys that simplifies and modernizes the experience of working with Group Policies. This tool would provide an intuitive interface and streamlined functionality compared to the existing Group Policy Editor. It would also integrate seamlessly into the PowerToys suite, complementing tools like the Environment Variables, Hosts File Editor, and Registry Preview. ### Scenario when this would be used? Managing Group Policy settings on Windows can be cumbersome due to the outdated interface and overly complex options in the current editor. For example: - An administrator wants to quickly enable or disable a specific policy but struggles to navigate through multiple layers in the existing tool; - Another admin needs to view and adjust multiple policies efficiently, but the lack of a modern search and filter system slows down the process. With a Group Policy Management tool in PowerToys, these users can perform these tasks faster and with greater ease, enhancing productivity. This tool could also help users who regularly adjust system configurations consolidate all system-related tools in one place for better accessibility. ### Supporting information _No response_
Needs-Triage
low
Major
2,672,708,472
kubernetes
resourceFieldRef.divisor when unspecified is set to 0 (documented default is 1)
### What happened? The divisor key in [ResourceFieldRef](https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/resource-field-selector/#ResourceFieldSelector) is documented to default to 1. However, when applying a manifest using a resourceFieldRef without specifying the divisor, then checking the spec, the divisor is set to '0'. ### What did you expect to happen? The field should either not be present when checking the manifests, or having the correct value. ### How can we reproduce it (as minimally and precisely as possible)? On a kubernetes cluster, deploy the following manifest: ```yaml apiVersion: apps/v1 kind: Deployment metadata: labels: app: test-divisor name: test-divisor spec: replicas: 1 selector: matchLabels: app: test-divisor template: metadata: labels: app: test-divisor spec: containers: - image: invalid-name name: test-divisor resources: limits: memory: 30M env: - name: GOMEMLIMIT valueFrom: resourceFieldRef: resource: limits.memory ``` Then run ```console $ kubectl get deploy test-divisor -o jsonpath='{.spec.template.spec.containers[0].env[0].valueFrom.resourceFieldRef.divisor}{"\n"}' 0 ``` ### Anything else we need to know? 
This causes problems for tools like Argo CD, which considers the application perpetually out of sync (see cilium/cilium#3063) ### Kubernetes version <details> ```console $ kubectl version Client Version: v1.31.2 Kustomize Version: v5.4.2 Server Version: v1.29.10 WARNING: version difference between client (1.31) and server (1.29) exceeds the supported minor version skew of +/-1 ``` </details> ### Cloud provider <details> None </details> ### OS version <details> ```console $ cat /etc/os-release NAME="Arch Linux" PRETTY_NAME="Arch Linux" ID=arch BUILD_ID=rolling ANSI_COLOR="38;2;23;147;209" HOME_URL="https://archlinux.org/" DOCUMENTATION_URL="https://wiki.archlinux.org/" SUPPORT_URL="https://bbs.archlinux.org/" BUG_REPORT_URL="https://gitlab.archlinux.org/groups/archlinux/-/issues" PRIVACY_POLICY_URL="https://terms.archlinux.org/docs/privacy-policy/" LOGO=archlinux-logo $ uname -a Linux framework 6.11.6-arch1-1 #1 SMP PREEMPT_DYNAMIC Fri, 01 Nov 2024 03:30:41 +0000 x86_64 GNU/Linux ``` </details> ### Install tools <details> Kubespray </details> ### Container runtime (CRI) and version (if applicable) _No response_ ### Related plugins (CNI, CSI, ...) and versions (if applicable) _No response_
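For reference, the documented semantics of `divisor` can be sketched as follows. This is an illustrative Python sketch of the behaviour described above, not the actual apiserver or kubelet code, and `resolve_resource_field` is a hypothetical helper name:

```python
from math import ceil

# Hypothetical sketch (not actual Kubernetes code) of how a resourceFieldRef
# value is exposed: the resource quantity is divided by the divisor and
# rounded up. The documented default divisor is "1"; the bug reported above
# is that an *unspecified* divisor is serialized back as "0" instead.
def resolve_resource_field(resource_bytes, divisor_bytes=None):
    # Treat a missing (or zero) divisor as the documented default of 1.
    effective = 1 if divisor_bytes in (None, 0) else divisor_bytes
    return ceil(resource_bytes / effective)

# 30M memory limit with the default divisor yields the raw byte count.
assert resolve_resource_field(30_000_000) == 30_000_000
# With a divisor of 1Mi the value is expressed in mebibytes, rounded up.
assert resolve_resource_field(30_000_000, 1024 * 1024) == 29
```

Under this reading, a stored divisor of "0" only works because consumers special-case it back to 1; the serialized manifest still diffs against the applied one, which is what trips Argo CD.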
kind/bug,sig/api-machinery,sig/apps,triage/accepted
low
Critical
2,672,708,866
flutter
Smoothing Discrete Scroll Inputs
### Document Link https://flutter.dev/go/smoothing-discrete-scroll-inputs ### What problem are you solving? One of the first, and often blocking, issues noticed by developers when evaluating Flutter for desktop/web application development is its default scrolling behavior. When using low-resolution input devices, like a mouse with a scroll wheel, or when making quick gestures on high-resolution devices, scrollable content may shift abruptly by hundreds of pixels from one frame to the next. This behavior may be perceived negatively in several ways: - The abrupt movement of scrollable content might interrupt the natural flow of reading. In applications that implement smooth scrolling, users can follow text seamlessly as they scroll. Without this functionality, users must constantly refocus and locate their previous reading position after each scroll. - Modern web applications are generally expected to feature smooth scrolling, as it is enabled by default across all major browsers and platforms. The absence of smooth scrolling in an application can make the experience feel inconsistent with standard web interactions and diminish the overall perception of quality. - The sudden nature of step-based scrolling might lead users to believe that there is an issue with their device or browser. ### What issues does this document resolve? - https://github.com/flutter/flutter/issues/32120 - https://github.com/flutter/flutter/issues/159194 - https://github.com/flutter/flutter/issues/159195
framework,f: scrolling,platform-web,a: desktop,P3,design doc,team-framework,triaged-framework,:scroll:
low
Minor
2,672,734,954
vscode
Only add reference padding when the text is available
<!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- Please search existing issues to avoid creating duplicates. --> <!-- Describe the feature you'd like. --> The codebase I'm working on now takes some time to resolve all the dependencies and the references. https://github.com/user-attachments/assets/eca1e935-6316-4ce7-a64b-808a995c0f81 VSCode adds an empty "padding" while resolving the project dependencies, which leads me to think that I have an empty line multiple times when that's not the case. Proposed solution: only move the text when the references are solved. I attached a recording of the issue.
feature-request,code-lens
low
Minor
2,672,765,199
yt-dlp
Output template: Support conditional statements
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE - [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field ### Checklist - [X] I'm requesting a feature unrelated to a specific site - [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme) - [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels)) - [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates - [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue) ### Provide a description that is worded well enough to be understood I would like to format videos in playlists differently from videos out of playlists (specifically, I'm using `%(playlist_title,upload_date>%Y-%m-%d&{} - |)s%(playlist_index&{} - |)s`). However, since the `youtube:tab` extractor presents videos from eg `/@user/videos` as a playlist, this means that unless I either manually list all the videos I want to download, or download channel updates and playlists separately, the above output template will have the wrong behaviour on channel tabs. A way to distinguish the two is desirable. For the remainder of the discussion, note that in this case `playlist_id=channel_id`. Additional relevant context: On one of my devices, I'm using https://github.com/JunkFood02/Seal to wrap `yt-dlp`. While I have prodded them (https://github.com/JunkFood02/Seal/issues/1873) to support multiple output templates, at the moment they only support a single custom output template. 
Four ways suggest themselves: - Do nothing, and have users use the Python API or shell scripting for this (somewhat painful), or have users expand the list of urls to be downloaded manually (painful on mobile for more than a handful of videos, requires `yt-dlp --print | xargs yt-dlp` on desktop which is less than ideal) - Change `youtube:tab`'s behaviour, perhaps by adding a new metadata field (backwards incompatible) - Add support for conditional formatting in output templates. Note that for this to be useful here, we'd need the conditions to evaluate before alternative selection. To reduce maintenance burden, we could either reuse the match filter predicates (so perhaps something like `%(playlist_title?(playlist_id & playlist_id != channel_id),upload_date)s` or `%(playlist_title[playlist_id][playlist_id != channel_id,upload_date)s`), or allow arbitrary python conditionals (`{playlist_title if playlist_id != "NA" and playlist_id != channel_id else upload_date}`) - Other overengineered solutions also suggest themselves, such as having a JSON-based coprocess protocol, but their burden does not look like it pays off ### Provide verbose output that clearly demonstrates the problem - [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`) - [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead - [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below ### Complete Verbose Output ```shell [debug] Command-line config: ['--ignore-config', '-vU', '--print', '%(playlist_title,upload_date>%Y-%m-%d&{} - |)s%(playlist_index|)s', 'https://www.youtube.com/@PracticalEngineering', '--max-downloads', '1'] [debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8 [debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] [debug] Python 3.12.7 (CPython x86_64 64bit) - Linux-6.11.5-arch1-1-x86_64-with-glibc2.40 (OpenSSL 3.4.0 22 Oct 2024, glibc 2.40)
[debug] exe versions: ffmpeg 7.1 (setts), ffprobe 7.1, phantomjs broken, rtmpdump 2.4 [debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.3, sqlite3-3.46.1, urllib3-1.26.20, websockets-12.0 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests [debug] Loaded 1837 extractors [debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest Latest version: stable@2024.11.18 from yt-dlp/yt-dlp yt-dlp is up to date (stable@2024.11.18 from yt-dlp/yt-dlp) [youtube:tab] Extracting URL: https://www.youtube.com/@PracticalEngineering [youtube:tab] @PracticalEngineering: Downloading webpage [debug] [youtube:tab] Selected tab: 'videos' (videos), Requested tab: '' [youtube:tab] Downloading all uploads of the channel. To download only the videos in a specific tab, pass the tab's URL [download] Downloading playlist: Practical Engineering Australia - Videos [youtube:tab] Playlist Practical Engineering Australia - Videos: Downloading 6 items of 6 [download] Downloading item 1 of 6 [youtube] Extracting URL: https://www.youtube.com/watch?v=HmLmp06cbnU [youtube] HmLmp06cbnU: Downloading webpage [youtube] HmLmp06cbnU: Downloading ios player API JSON [youtube] HmLmp06cbnU: Downloading mweb player API JSON [debug] Loading youtube-nsig.2d24ba15 from cache [debug] [youtube] Decrypted nsig rvtzc3BXbPfKpM2A6 => MVujBRZCtmV6LA [debug] Loading youtube-nsig.2d24ba15 from cache [debug] [youtube] Decrypted nsig NvO8RpuawyiuEZinn => _eQFO4S5coqzFA [youtube] HmLmp06cbnU: Downloading m3u8 information [debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto [debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id [debug] Default format spec: bestvideo*+bestaudio/best [info] HmLmp06cbnU: Downloading 1 format(s): 136+251 Practical 
Engineering Australia - Videos - 1 [info] Maximum number of downloads reached, stopping due to --max-downloads Aborting remaining downloads ```
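The first option ("use the Python API") can be sketched in plain Python: decide on the template string from the extracted metadata before handing it to `YoutubeDL`. This is an illustrative sketch of the workaround, not proposed yt-dlp code; `pick_outtmpl` and the template strings are hypothetical:

```python
# Illustrative sketch: choose an output template based on whether the
# "playlist" is a real playlist or a channel tab (playlist_id == channel_id).
# pick_outtmpl is a hypothetical helper, not part of yt-dlp.
def pick_outtmpl(info):
    playlist_id = info.get("playlist_id")
    channel_id = info.get("channel_id")
    if playlist_id and playlist_id != channel_id:
        # A real playlist: prefix with title and index.
        return "%(playlist_title)s - %(playlist_index)s - %(title)s.%(ext)s"
    # Channel tab or standalone video: fall back to the upload date.
    return "%(upload_date>%Y-%m-%d)s - %(title)s.%(ext)s"

assert pick_outtmpl({"playlist_id": "PL123", "channel_id": "UC456"}).startswith("%(playlist_title)s")
assert pick_outtmpl({"playlist_id": "UC456", "channel_id": "UC456"}).startswith("%(upload_date")
```

This requires an extract-then-download flow (or two invocations), which is exactly why it is labelled "somewhat painful" above and why built-in conditional templates would be nicer.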
enhancement
low
Critical
2,672,797,386
tensorflow
tf.gather and workarouds are very slow on TPU
### Issue type Performance ### Have you reproduced the bug with TensorFlow Nightly? No ### Source source ### TensorFlow version 2.16.1 ### Custom code Yes ### OS platform and distribution Ubuntu 22.04.2 LTS ### Mobile device TPU VM ### Python version 3.10.12 ### Bazel version _No response_ ### GCC/compiler version _No response_ ### CUDA/cuDNN version _No response_ ### GPU model and memory _No response_ ### Current behavior? Hi all, I am trying to solve a performance bug that occurs during pretraining/fine-tuning a DeBERTa model on TPUs. In a nutshell, the `tf.gather` implementation is very slow on TPUs (but very fast on GPUs). I am now looking for a way to speed up the `tf.one_hot` + `tf.einsum` trick, which would have a massive impact on pretraining DeBERTa models and help widen their usage. Other issues have also reported this problem: * https://github.com/huggingface/transformers/issues/18239 * https://github.com/keras-team/keras-hub/issues/606 But with no solution yet. Any help is highly appreciated! ### Standalone code to reproduce the issue Here's a code snippet with an example: ```python def take_along_axis_v2(x, indices): # Only a valid port of np.take_along_axis when the gather axis is -1 # TPU + gathers and reshapes don't go along well -- see https://github.com/huggingface/transformers/issues/18239 if isinstance(tf.distribute.get_strategy(), tf.distribute.TPUStrategy): # [B, S, P] -> [B, S, P, D] one_hot_indices = tf.one_hot(indices, depth=x.shape[-1], dtype=x.dtype) # if we ignore the first two dims, this is equivalent to multiplying a matrix (one hot) by a vector (x) # grossly abusing notation: [B, S, P, D] .
[B, S, D] = [B, S, P] gathered = tf.einsum("ijkl,ijl->ijk", one_hot_indices, x) # GPUs, on the other hand, prefer gathers instead of large one-hot+matmuls else: gathered = tf.gather(x, indices, batch_dims=2) return gathered ``` Taken from https://github.com/WissamAntoun/CamemBERTa/blob/1a1fb4a658729dfac2bb93842d88261132803ec3/modeling_tf_deberta_v2.py#L734-L750 ### Relevant log output _No response_
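As a quick correctness check (independent of TPU performance), the one-hot + einsum trick in the snippet above can be verified against a plain batched gather in NumPy. This is just a sanity sketch with random data, not a benchmark:

```python
import numpy as np

# Verify that one-hot + einsum matches a gather along the last axis.
# Shapes: x is [B, S, D], indices is [B, S, P].
rng = np.random.default_rng(0)
B, S, D, P = 2, 3, 5, 4
x = rng.standard_normal((B, S, D))
indices = rng.integers(0, D, size=(B, S, P))

one_hot = np.eye(D)[indices]                        # [B, S, P, D]
gathered_einsum = np.einsum("ijkl,ijl->ijk", one_hot, x)
gathered_take = np.take_along_axis(x, indices, axis=-1)  # [B, S, P]

assert np.allclose(gathered_einsum, gathered_take)
```

The equivalence holds for any last-axis gather; the open question in this issue is purely about which formulation XLA lowers to efficient TPU code.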
stat:awaiting tensorflower,comp:tpus,type:performance,TF 2.16
low
Critical
2,672,847,016
rust
Tracking Issue for `const_destruct`
Feature gate: `#![feature(const_destruct)]` This is a tracking issue for `const_destruct`, which enables the naming of the `Destruct` trait and its use in `~const` bounds to allow dropping values in const contexts. ### Public API ```rust pub trait Destruct { } ``` ### Steps / History <!-- For larger features, more steps might be involved. If the feature is changed later, please add those PRs here as well. --> - [x] Implementation: It was already implemented, but it's getting a new feature gate in https://github.com/rust-lang/rust/pull/132329. - [ ] Final comment period (FCP)[^1] - [ ] Stabilization PR ### Unresolved Questions - Do we want to allow `~const` bounds on `const Drop` impls? I think we do, and sorely need them for const drop to ever be useful. See my justification in https://github.com/rust-lang/rust/pull/132329#discussion_r1838749569. We want to be able to implement a conditional drop impl like: ```rust struct DropAndCall<F: Fn()>(F); impl<F> const Drop for DropAndCall<F> where F: ~const Fn(), { fn drop(&mut self) { (self.0)(); // This should be allowed. } } ``` This is what is implemented on nightly. [^1]: https://std-dev-guide.rust-lang.org/feature-lifecycle/stabilization.html
T-lang,T-libs-api,C-tracking-issue,F-const_trait_impl,T-types,PG-const-traits
low
Minor
2,672,915,124
neovim
Sporadic crashes on Sway/Wezterm
### Problem Neovim sometimes crashes when I'm editing code. It has happened three or four times now. It always crashes with the same assertion error: ``` vim: /usr/src/debug/neovim/neovim/src/nvim/normal.c:2539: nv_screengo: Assertion `curwin->w_curswant <= INT_MAX - w' failed. ``` I don't know if this is an actionable issue - but I expect it happens to other people as well. ### Steps to reproduce I have no idea how to reproduce it - since it for me happens sporadically. ### Expected behavior I expect it not to crash. ### Nvim version (nvim -v) NVIM v0.10.2 Build type: RelWithDebInfo LuaJIT 2.1.1727870382 Run "nvim -V1 -v" for more info ### Vim (not Nvim) behaves the same? Have not been able to verify. ### Operating system/version 6.11.5-arch1-1 x86_64 GNU/Linux (Arch Linux) ### Terminal name/version wezterm 20240203-110809-5046fc22 ### $TERM environment variable xterm-256color ### Installation Arch public repository - so mainline Neovim on arch linux
has:repro,has:backtrace,bug-crash,has:plan,column
low
Critical
2,672,920,210
vscode
Editor GPU: Live share participant indicators shows gpu lines bleeding through
![Image](https://github.com/user-attachments/assets/c4c48255-6b4b-4a2f-ad19-c1e20f4b852e)
bug,editor-gpu
low
Minor
2,672,943,927
opencv
OpenCV University button should also change its background color to blue on hover, like the "Shop Now" button
### Describe the feature and motivation This issue concerns user experience: both buttons should use the same hover CSS (background turning blue) so the experience is consistent. I also attach screenshots of this issue ![Screenshot 2024-11-19 225102](https://github.com/user-attachments/assets/aa5718c8-a57d-4e7b-a6b9-0e1caabb6eaa) ![Screenshot 2024-11-19 225658](https://github.com/user-attachments/assets/71d3a354-c429-4914-b292-56f51325dd5b)
feature
low
Minor
2,672,944,909
storybook
[Bug]: Monorepo `getAbsolutePath` results in broken `project.json`
### Describe the bug Currently Storybook's `project.json` reports the addons that are listed in `.storybook/main.ts`. It does this by calling `getActualPackageVersion` on each of the addons: https://github.com/storybookjs/storybook/blob/next/code/core/src/telemetry/package-json.ts However, in monorepos we recommend using absolute paths with the following pattern: ```ts import { dirname, join } from 'node:path'; const getAbsolutePath = <I extends string>(input: I): I => dirname(require.resolve(join(input, 'package.json'))) as any; const config: StorybookConfig = { stories: ['../stories/**/*.mdx', '../stories/**/*.stories.@(js|jsx|mjs|ts|tsx)'], addons: [ getAbsolutePath('@storybook/addon-onboarding'), getAbsolutePath('@storybook/addon-essentials'), getAbsolutePath('@chromatic-com/storybook'), ] }, ``` This generates addon entries like the following: ``` info "addons": { info "$SNIP/node_modules/.pnpm/@storybook+addon-onboarding@8.5.0-alpha.8_react@18.3.1_storybook@8.5.0-alpha.8/node_modules/@storybook/addon-onboarding": { info "version": null info }, info "$SNIP/node_modules/.pnpm/@storybook+addon-essentials@8.5.0-alpha.8_@types+react@18.3.12_storybook@8.5.0-alpha.8/node_modules/@storybook/addon-essentials": { info "version": null info }, info "$SNIP/node_modules/.pnpm/@chromatic-com+storybook@3.2.2_react@18.3.1_storybook@8.5.0-alpha.8/node_modules/@chromatic-com/storybook": { info "version": null info }, ``` Instead, it should produce: ``` info "addons": { info "@storybook/addon-onboarding": { info "version": "8.5.0-alpha.8" info }, info "@storybook/addon-essentials": { info "version": "8.5.0-alpha.8" info }, info "@chromatic-com/storybook": { info "version": "3.2.2" info }, ``` ### Reproduction link N/A ### Reproduction steps See above ### System ```bash Any ``` ### Additional context _No response_
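One possible direction for the fix is to normalize the absolute addon path back to a bare package name before looking up its version. The following is an illustrative Python sketch of that normalization (Storybook's telemetry is TypeScript; `package_name_from_path` is a hypothetical helper, not Storybook code):

```python
# Illustrative sketch: recover "@scope/name" or "name" from an absolute
# node_modules path such as the pnpm paths shown in the report above.
def package_name_from_path(path):
    parts = path.replace("\\", "/").split("/node_modules/")
    if len(parts) == 1:
        return path  # already a bare package name, leave it alone
    tail = parts[-1]  # e.g. "@storybook/addon-essentials"
    segs = tail.split("/")
    # Scoped packages keep two segments; unscoped packages keep one.
    return "/".join(segs[:2]) if segs[0].startswith("@") else segs[0]

p = ("/x/node_modules/.pnpm/@storybook+addon-essentials@8.5.0-alpha.8"
     "/node_modules/@storybook/addon-essentials")
assert package_name_from_path(p) == "@storybook/addon-essentials"
assert package_name_from_path("@chromatic-com/storybook") == "@chromatic-com/storybook"
```

Taking the segment after the *last* `/node_modules/` matters for pnpm, whose store paths contain `node_modules` twice.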
bug,core,sev:S2
low
Critical
2,673,057,372
deno
deno installation and downloads behind a corporate proxy
Version: Deno 2.0.5 Hi, sorry if I missed an existing issue... We have a Node project where we build several JS files to run on RHEL Linux. The idea is to use Deno to build native executable files via cross-compilation. Now it seems as if Deno cannot do anything that needs to fetch files, e.g. `deno install` and `deno info` want to download from registry.npmjs.org but get stuck without doing anything. I had a phone call with a colleague responsible for the proxy configuration, who told me which certificate I need to use on my PC to bypass issues like "local certificate errors". So I added the .crt file via the `deno.tlsCertificate` setting in the project, and also for Node via `cafile=xxx` in the npm settings file. Deno's output: ``` Starting Deno language server... version: 2.0.5 (release, x86_64-pc-windows-msvc) executable: C:\mypath\deno\deno.EXE Connected to "Visual Studio Code" 1.94.2 Enabling import suggestions for: https://deno.land Error fetching registry config for "https://deno.land": EOF while parsing a value at line 1 column 0 Refreshing configuration tree... Resolved Deno configuration file: "file:///C:/mypath1/deno.json" Resolved package.json: "file:///C:/mypath1/package.json" Resolved .npmrc: "C:\Users\myuser\.npmrc" Could not set npm package requirements: Error getting response at http://registry.npmjs.org/axios for package "axios": An npm specifier not found in cache: "axios", --cached-only is specified. Could not set npm package requirements: Error getting response at http://registry.npmjs.org/axios for package "axios": An npm specifier not found in cache: "axios", --cached-only is specified. Server ready. .....
Download http://registry.npmjs.org/pg-promise successfully cancelled request with ID: 652 Download http://registry.npmjs.org/@types%2fnode Download http://registry.npmjs.org/@types%2fpg Download http://registry.npmjs.org/@types%2fpg-promise Download http://registry.npmjs.org/@typescript-eslint%2feslint-plugin Download http://registry.npmjs.org/@typescript-eslint%2fparser Download http://registry.npmjs.org/axios Download http://registry.npmjs.org/ebcdic-ascii Download http://registry.npmjs.org/basic-ftp Download http://registry.npmjs.org/esbuild Download http://registry.npmjs.org/fast-xml-parser Download http://registry.npmjs.org/esbuild-node-externals Download http://registry.npmjs.org/ftp-ts Download http://registry.npmjs.org/memorystream Download http://registry.npmjs.org/node-sql-parser Download http://registry.npmjs.org/pg Download http://registry.npmjs.org/ts-node Download http://registry.npmjs.org/pg-promise ... Could not set npm package requirements: Error getting response at http://registry.npmjs.org/@types%2fnode for package "@types/node": An npm specifier not found in cache: "@types/node", --cached-only is specified. Could not set npm package requirements: Error getting response at http://registry.npmjs.org/@types%2fnode for package "@types/node": An npm specifier not found in cache: "@types/node", --cached-only is specified. Error caching: Error getting response at http://registry.npmjs.org/@types%2fnode for package "@types/node": error sending request for url (https://registry.npmjs.org/@types%2Fnode): client error (Connect): tcp connect error: Ein Verbindungsversuch ist fehlgeschlagen, da die Gegenstelle nach einer bestimmten Zeitspanne nicht richtig reagiert hat, oder die hergestellte Verbindung war fehlerhaft, da der verbundene Host nicht reagiert hat. 
(os error 10060): Ein Verbindungsversuch ist fehlgeschlagen, da die Gegenstelle nach einer bestimmten Zeitspanne nicht richtig reagiert hat, oder die hergestellte Verbindung war fehlerhaft, da der verbundene Host nicht reagiert hat. (os error 10060) ``` Terminal output stuck: ``` Download ⣟ [10:36] 0/18 - http://registry.npmjs.org/@types%2fnode - http://registry.npmjs.org/@types%2fpg - http://registry.npmjs.org/@types%2fpg-promise - http://registry.npmjs.org/@typescript-eslint%2feslint-plugin ``` package.json ``` "devDependencies": { "@types/node": "^20.11.20", "@types/pg": "^8.11.10", "@types/pg-promise": "^5.4.3", ... }, "dependencies": { ... "pg": "^8.13.1", "pg-promise": "^11.10.1" } ``` So I am not sure whether there is a problem with the proxy (is there a checklist or something similar somewhere?), or a misconfiguration in package.json? Thanks!
needs investigation
low
Critical
2,673,066,759
godot
Imported Animation Name Gets Truncated with "Cycle" Ending
### Tested versions v4.3.stable.official [77dcf97d8] ### System information Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 980 (NVIDIA; 32.0.15.6094) - Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz (8 Threads) ### Issue description If an animation name ends with "Cycle", that ending gets removed during the import process. ### Steps to reproduce Import an animation file (.glb in my case); if any animation name ends with "Cycle", the "Cycle" ending gets removed from the animation name. ### Minimal reproduction project (MRP) The animation in the example is named "CubeAction_Cycle"; it is shown as "CubeAction" in Godot, but if the file is opened with another program, "Cycle" is still in the animation name. [bugrep_anim_name_cycle.zip](https://github.com/user-attachments/files/17819492/bugrep_anim_name_cycle.zip)
discussion,documentation,topic:import,topic:animation
low
Critical
2,673,099,799
go
crypto/internal/fips/aes/gcm: TestAllocations fails on PPC64
### Go version master ### Output of `go env` in your module/workspace: ```shell GOARCH=ppc64le ``` ### What did you do? cd crypto/internal/fips/aes/gcm go test ### What did you see happen? --- FAIL: TestAllocations (0.00s) ctrkdf_test.go:31: expected zero allocations, got 6.0 ### What did you expect to see? All tests pass
NeedsFix,arch-ppc64x,FixPending
low
Minor
2,673,212,713
flutter
"flutter run -d web-server --dds-port=PORT" doesn't start the dart development service on PORT
### Steps to reproduce 1. `flutter run -d web-server --web-hostname 0.0.0.0 --web-port 8081 --dds-port 8082 --no-web-resources-cdn` 2. List open ports, e.g. ```sh # lsof -i -P COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME dart:flut 82 root 9u IPv4 9928686 0t0 TCP *:8081 (LISTEN) dart:flut 82 root 14u IPv4 9932385 0t0 TCP *:44911 (LISTEN) ``` ### Expected results Expect tcp service listening on 8082 ### Actual results No service listening on 8082 ### Code sample <details open><summary>Code sample</summary> Dockerfile: ```dockerfile FROM debian ENV DEBIAN_FRONTEND=noninteractive ENV FLUTTER_VERSION=3.24.3 ENV FLUTTER_HOME=/usr/local/flutter RUN apt-get update RUN apt-get install -y libxi6 libgtk-3-0 libxrender1 libxtst6 libxslt1.1 curl git wget unzip libgconf-2-4 gdb libstdc++6 libglu1-mesa fonts-droid-fallback lib32stdc++6-amd64-cross python3 inotify-tools fswatch tmux # download Flutter SDK from Flutter Github repo RUN git clone --depth 1 --branch ${FLUTTER_VERSION} https://github.com/flutter/flutter.git /usr/local/flutter # Set flutter environment path ENV PATH="${FLUTTER_HOME}/bin:${FLUTTER_HOME}/bin/cache/dart-sdk/bin:${PATH}" RUN flutter precache --web # Run flutter doctor RUN flutter doctor # Enable flutter web # RUN flutter channel master # RUN flutter upgrade RUN flutter config --enable-web ``` ```sh flutter run -d web-server --web-hostname 0.0.0.0 --web-port 8081 --dds-port 8082 --no-web-resources-cdn ``` </details> ### Screenshots or Video _No response_ ### Logs <details open><summary>Logs</summary> ```console # lsof -i -P COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME dart:flut 82 root 9u IPv4 9928686 0t0 TCP *:8081 (LISTEN) dart:flut 82 root 14u IPv4 9932385 0t0 TCP *:44911 (LISTEN) ``` </details> ### Flutter Doctor output <details open><summary>Doctor output</summary> ```console # flutter doctor -v [!] Flutter (Channel [user-branch], 3.24.3, on Debian GNU/Linux 12 (bookworm) 6.6.26-linuxkit, locale en_US) ! 
Flutter version 3.24.3 on channel [user-branch] at /usr/local/flutter Currently on an unknown channel. Run `flutter channel` to switch to an official channel. If that doesn't fix the issue, reinstall Flutter by following instructions at https://flutter.dev/setup. ! Upstream repository unknown source is not a standard remote. Set environment variable "FLUTTER_GIT_URL" to unknown source to dismiss this error. • Framework revision 2663184aa7 (10 weeks ago), 2024-09-11 16:27:48 -0500 • Engine revision 36335019a8 • Dart version 3.5.3 • DevTools version 2.37.3 • If those were intentional, you can disregard the above warnings; however it is recommended to use "git" directly to perform update checks and upgrades. [✗] Android toolchain - develop for Android devices ✗ Unable to locate Android SDK. Install Android Studio from: https://developer.android.com/studio/index.html On first launch it will assist you in installing the Android SDK components. (or visit https://flutter.dev/to/linux-android-setup for detailed instructions). If the Android SDK has been installed to a custom location, please use `flutter config --android-sdk` to update to that location. [✗] Chrome - develop for the web (Cannot find Chrome executable at google-chrome) ! Cannot find Chrome. Try setting CHROME_EXECUTABLE to a Chrome executable. [✗] Linux toolchain - develop for Linux desktop ✗ clang++ is required for Linux development. It is likely available from your distribution (e.g.: apt install clang), or can be downloaded from https://releases.llvm.org/ ✗ CMake is required for Linux development. It is likely available from your distribution (e.g.: apt install cmake), or can be downloaded from https://cmake.org/download/ ✗ ninja is required for Linux development. It is likely available from your distribution (e.g.: apt install ninja-build), or can be downloaded from https://github.com/ninja-build/ninja/releases ✗ pkg-config is required for Linux development. 
It is likely available from your distribution (e.g.: apt install pkg-config), or can be downloaded from https://www.freedesktop.org/wiki/Software/pkg-config/ [!] Android Studio (not installed) • Android Studio not found; download from https://developer.android.com/studio/index.html (or visit https://flutter.dev/to/linux-android-setup for detailed instructions). [✓] Connected device (1 available) • Linux (desktop) • linux • linux-arm64 • Debian GNU/Linux 12 (bookworm) 6.6.26-linuxkit [✓] Network resources • All expected network resources are available. ! Doctor found issues in 5 categories ``` </details>
tool,platform-web,has reproducible steps,P2,team-tool,triaged-tool,found in release: 3.24,found in release: 3.27
low
Critical
2,673,219,733
PowerToys
Unable to update/install PowerToys after reverting from Win 11 24H2 to 23H2
### Microsoft PowerToys version 0.86.0 ### Installation method GitHub, PowerToys auto-update ### Running as admin None ### Area(s) with issue? Installer ### Steps to reproduce ![Image](https://github.com/user-attachments/assets/7e96d332-2c9e-4eb0-952c-75d4b54bfd46) [powertoys-bootstrapper-msi-0.86.0_20241119205512.log](https://github.com/user-attachments/files/17820002/powertoys-bootstrapper-msi-0.86.0_20241119205512.log) Getting an error "The feature you are trying to use is on a network resource that is unavailable" when I try to either update PowerToys from the app or from the MSI downloaded from GitHub. I attempted to clear the associated Package Cache folder as well, but it made no difference. Seems like this issue has something to do with the Windows update reverting process as someone else had the same problem on #24213 I did the update revert a couple of weeks ago and since then this has been an issue. ### ✔️ Expected Behavior _No response_ ### ❌ Actual Behavior _No response_ ### Other Software _No response_
Issue-Bug,Needs-Triage
low
Critical
2,673,233,523
godot
Move initialize_physics to own file
### Tested versions Reproducible in 4.3 stable - 0dda6a974c0b782216b3bf8a2a27fdbc5b0a6cd9 ### System information Windows 11 - Godot 4.3 ### Issue description Enhancement request: Move initialize_physics() function from main.cpp to a dedicated file Current location: main.cpp line 317 Current implementation: // FIXME: Could maybe be moved to have less code in main.cpp. void initialize_physics() { #ifndef _3D_DISABLED /// 3D Physics Server Moving this function to its own file would: Reduce main.cpp complexity Improve code organization Address existing FIXME comment No existing issue was found addressing this FIXME comment. ### Steps to reproduce Steps to verify: 1. Open main.cpp 2. Locate initialize_physics() function at line 317 3. Note FIXME comment indicating the need to move the function 4. Search through open issues for "initialize_physics" or "main.cpp cleanup" 5. Verify no existing issue addresses this FIXME Expected outcome: Issue tracking for moving initialize_physics() function to a dedicated file, as suggested by the existing FIXME comment. ### Minimal reproduction project (MRP) [main.zip](https://github.com/user-attachments/files/17819908/main.zip)
enhancement,topic:physics,topic:codestyle
low
Minor
2,673,247,775
vscode
github.dev search options no longer work
https://github.dev/OpenCPN/OpenCPN github.dev: 'Match Case', 'Match Whole Word', Use Regular Expression' options do nothing. ![Image](https://github.com/user-attachments/assets/6be28ffa-c0ae-453b-a5a3-8ba75b5af350) Version: 1.95.3 Commit: f1a4fb101478ce6ec82fe9627c43efbf9e98c813 User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/130.0.0.0 Safari/537.36 Embedder: github.dev <!-- generated by web issue reporter -->
bug,search
low
Minor
2,673,266,789
flutter
Better implementation of FlutterTexture
### Use case To [simplify](https://github.com/misos1/packages/blob/24002603a885533831cd343624aab5df2639e933/packages/video_player/video_player_avfoundation/darwin/video_player_avfoundation/Sources/video_player_avfoundation/FVPVideoPlayerPlugin.m#L596-L600) its use in plugins like `video_player` https://github.com/flutter/packages/pull/7466 and make it better for plugins like `camera`. ### Proposal Currently it is done in a way that `textureFrameAvailable` is used to signal that we have a new texture frame, and the engine then eventually calls `copyPixelBuffer` to obtain it. `textureFrameAvailable` does two different things in two classes in the engine. In `FlutterDarwinExternalTextureMetal.mm` the flag `_textureFrameAvailable` is used and `textureFrameAvailable` sets it to `true`, but in `embedder_external_texture_metal.mm` it sets the last stored image to NULL, so if `copyPixelBuffer` returns NULL it will not show anything and will periodically call it until it returns non-NULL. This means returning NULL from `copyPixelBuffer` after the first non-NULL can cause different results on different platforms, and it is better to [avoid that](https://github.com/misos1/packages/blob/24002603a885533831cd343624aab5df2639e933/packages/video_player/video_player_avfoundation/darwin/video_player_avfoundation/Sources/video_player_avfoundation/FVPVideoPlayerPlugin.m#L607). Multiple calls to `textureFrameAvailable` will cause only a single call to `copyPixelBuffer`, and `_textureFrameAvailable` is reset at some unknown time after that, thus it can be harder (or even impossible without depending on internals) to supply external textures reliably at the display refresh rate. For example, `video_player` currently calls `textureFrameAvailable` from its own display link, and on an iOS device it can be called at approximately the same time as `copyPixelBuffer`, but the order in which they get called can change rapidly.
Now let's suppose that at the current screen refresh is first called `copyPixelBuffer` as a reaction to some previous `textureFrameAvailable` and then is called `textureFrameAvailable` which triggers `copyPixelBuffer` at the next screen refresh. But at the next screen refresh is first called `textureFrameAvailable` and only after that `copyPixelBuffer` as reaction so now we had two calls to `textureFrameAvailable` but only a single call to `copyPixelBuffer`. Another `copyPixelBuffer` can be called only after some another `textureFrameAvailable` so this resulted in dropped video frame (first number is the counter of screen refreshes): ``` 1 copyPixelBuffer 1 textureFrameAvailable 2 textureFrameAvailable 2 copyPixelBuffer 3 (engine sees that _textureFrameAvailable is false and will not call copyPixelBuffer at this frame) 3 textureFrameAvailable 4 copyPixelBuffer 4 textureFrameAvailable ``` There is race between `textureFrameAvailable` and `copyPixelBuffer` in a sense that call to `textureFrameAvailable` may or may not have any effect depending on whether engine already cleared that flag or not (or image) so it is unknown whether `copyPixelBuffer` will be called as reaction to `textureFrameAvailable` or not. This can be resolved by engine calling `copyPixelBuffer` unconditionally for every screen refresh (as another mode of operation together with current one) as is https://github.com/flutter/packages/pull/7466 trying to achieve. Of course with such mode of operation it would be good if returning NULL from `copyPixelBuffer` was well defined (showing latest texture) and also engine could avoid calling `wrapExternalPixelBuffer` or similar (if expensive) every time and instead does it only if it got new texture. But this would not solve it (at least not in an easy way) for plugins like `camera` where there is also this same problem although maybe not so visible or not so often. 
But maybe a much better possibility, which would work (better) also for plugins like `camera`, would be implementing it as a producer-consumer queue: instead of calling `textureFrameAvailable` we could push our textures into that queue and the engine would pull them from it. This queue needs to be able to contain at least 2 frames (if it is limited, which it probably should be). Pushing a new frame into that queue can erase the frame at the other side of the queue (the oldest) if the limit is reached (the current behavior is effectively a queue with limit 1, which is not enough). A possible third alternative (something between the two previous approaches) would be for every single call to `textureFrameAvailable` to cause a call to `copyPixelBuffer`, by implementing a counter instead of a flag (of course never calling `copyPixelBuffer` more than once per screen refresh, and a limit can apply here too). And maybe they can be combined: the first approach would make the implementation in plugins like `video_player` simpler because it would not require them to actively produce frames on their own display links, as is currently done in `video_player`, and the second approach would make it easier for plugins like `camera`. The difference is that `video_player` needs to actively pull frames from `AVPlayerItemVideoOutput`, while in `camera` frames are pushed through `AVCaptureVideoDataOutputSampleBufferDelegate` by the system. So maybe the best would be to have both approaches available (the possibility for `FlutterTexture` to pull frames from us, and also for us to push frames into it).
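The bounded producer/consumer queue described above can be sketched in a few lines (TypeScript used purely as neutral pseudocode; `FrameQueue` and the frame names are made up, not Flutter APIs):

```typescript
// Sketch of the proposed bounded frame queue: pushing past the limit evicts
// the oldest frame, and pulling from an empty queue yields null ("keep last texture").
class FrameQueue<T> {
  private frames: T[] = [];
  constructor(private readonly limit = 2) {}

  // Producer side (plugin): append, evicting the oldest frame on overflow.
  push(frame: T): void {
    this.frames.push(frame);
    if (this.frames.length > this.limit) this.frames.shift();
  }

  // Consumer side (engine, at most once per refresh).
  pull(): T | null {
    return this.frames.shift() ?? null;
  }
}

const q = new FrameQueue<string>(2);
q.push("frame1");
q.push("frame2");
q.push("frame3"); // limit reached: "frame1" is evicted
console.log(q.pull()); // frame2
console.log(q.pull()); // frame3
console.log(q.pull()); // null -> engine would keep showing the latest texture
```

With a limit of 2 the queue tolerates the `textureFrameAvailable`/`copyPixelBuffer` reordering described earlier without dropping a frame, which the current limit-1 behavior cannot.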
c: new feature,c: proposal,P3,team-engine,triaged-engine
low
Minor
2,673,318,078
godot
get_window().focus_entered signal is broken with OptionButtons
### Tested versions Strange signal emission reproduced in 4.3 stable and 4.4-dev4 Just clicking on an OptionButton causing the window to lose focus is present in 4.2 stable as well as 4.3 and 4.4-dev4 ### System information Godot v4.3.stable - Windows 10 - GLES3 (Compatibility) ### Issue description I am creating an application that depends on the lose focus and gain focus events happening reliably. # the issue: when clicking out of the window and clicking in the window everything is fine, but if your click happens to land on an OptionButton within the application, the window will first lose focus again and then emit focus_entered twice. # option buttons make window lose focus while examining this issue I found that clicking an OptionButton causes the window to lose focus. This issue also persists in 4.2 stable. I think this is a bug since the base buttons and TextEdit do not cause the window to lose focus. ### Steps to reproduce 1, create a control node with an OptionButton. 2, attach a script to the control node with the following code: ```gdscript extends Control func _ready() -> void: get_window().focus_entered.connect(func_print.bind("enter")) get_window().focus_exited.connect(func_print.bind("exit")) func func_print(text): print(text) ``` 3, starting the application, then clicking outside the window and then clicking somewhere inside the window (except on the OptionButton) will result in the following output: ```gdscript enter # new in 4.3 - was not called in 4.2 exit enter ``` A notable difference to 4.2 is that focus_entered is emitted when starting the application. I am not sure this change in behavior is intended. 4, starting the application, then clicking outside the window and then clicking on the OptionButton will result in the following output: ```gdscript enter # new in 4.3 - was not called in 4.2 exit exit enter enter ``` This is very strange, as an unfocused window loses focus again and then gains it twice in a row.
in 4.2 stable the same action (4) would result in this output: ```gdscript # no enter signal called in 4.2 from starting the application exit enter exit ``` note: I think the second exit is a result from the discovered strange behavior of clicking on a OptionButton causing the window lose focus (clicking on an option button results in "exit" and selecting an option in "enter"). note2: no enter event is emitted when starting the application. ### Minimal reproduction project (MRP) [dropdown_bug-4.2stable.zip](https://github.com/user-attachments/files/17820162/dropdown_bug-4.2stable.zip) [dropdown_bug-4.3stable.zip](https://github.com/user-attachments/files/17820232/dropdown_bug-4.3.zip)
bug,topic:core,topic:gui
low
Critical
2,673,393,232
tailwindcss
[v4] `addComponents` is adding styles to `@layer utilities` instead of `@layer components`
<!-- Please provide all of the information requested below. We're a small team and without all of this information it's not possible for us to help and your bug report will be closed. --> **What version of Tailwind CSS are you using?** v4.0.0-alpha.34 **What build tool (or framework if it abstracts the build tool) are you using?** `@tailwindcss/cli` **Reproduction URL** https://github.com/saadeghi/tw4-component-layer-issue **Describe your issue** These are the layers in output CSS file: ``` @layer theme, base, components, utilities; ``` **Expectation** It's expected for `addComponents` to add styles to `@layer components` **Current behavior** Currently `addComponents` adds styles to `@layer utilities`, similar to `addUtilities` Plugin example: https://github.com/saadeghi/tw4-component-layer-issue/blob/master/myplugin.js Generated style: https://github.com/saadeghi/tw4-component-layer-issue/blob/9b7a944690a35d55c7406756e30cc98c7a239623/output.css#L516
v4,bc
low
Critical
2,673,394,118
go
x/tools/gopls: hover over unkeyed field should show field info
When hovering over the name of a field in a composite literal, we show info about that field. But when hovering over a value in an _unkeyed literal_, we show nothing. If anything, field information would be more useful in an unkeyed literal! I recently bumped into this while working in the middle of a large table driven test. This should be easy to add.
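For concreteness, a minimal illustration of the two literal styles (the `point` type is hypothetical, used only to contrast them):

```go
package main

import "fmt"

// point is a hypothetical struct, used only to contrast the two literal styles.
type point struct {
	X, Y int
}

// sameValue builds the same value both ways; in a keyed literal, hovering "X"
// shows field info in gopls, while hovering the bare "1" in the unkeyed
// literal currently shows nothing.
func sameValue() bool {
	keyed := point{X: 1, Y: 2}
	unkeyed := point{1, 2}
	return keyed == unkeyed
}

func main() {
	fmt.Println(sameValue()) // true
}
```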
help wanted,FeatureRequest,gopls,Tools
low
Minor
2,673,496,892
react-native
requestCurrentTransition inside ReactFabric-dev.js can throw unhandled exceptions
### Description Thus far I have been able to track down the issue to this file and line of code: **react-native/Libraries/Renderer/implementations/ReactFabric-dev.js** ``` function requestCurrentTransition() { var transition = ReactCurrentBatchConfig$1.transition; if (transition !== null) { // Whenever a transition update is scheduled, register a callback on the // transition object so we can get the return value of the scope function. >>> transition._callbacks.add(handleAsyncAction); <<< SOMETIMES CALLBACKS IS UNDEFINED } return transition; } ``` I apologize that I do not have a reproducible example for you. Everything does seem to operate correctly if I simply wrap the offending line in an empty try-catch. ### Steps to reproduce n/a ### React Native Version 0.76.2 ### Affected Platforms Runtime - iOS ### Output of `npx react-native info` ```text System: OS: macOS 14.2.1 CPU: (10) arm64 Apple M1 Pro Memory: 102.55 MB / 16.00 GB Shell: version: "5.9" path: /bin/zsh Binaries: Node: version: 18.19.0 path: ~/.nvm/versions/node/v18.19.0/bin/node Yarn: version: 1.22.19 path: /usr/local/bin/yarn npm: version: 10.2.3 path: ~/.nvm/versions/node/v18.19.0/bin/npm Watchman: Not Found Managers: CocoaPods: version: 1.12.0 path: /Users/chris/.rbenv/shims/pod SDKs: iOS SDK: Platforms: - DriverKit 23.4 - iOS 17.4 - macOS 14.4 - tvOS 17.4 - visionOS 1.1 - watchOS 10.4 Android SDK: Not Found IDEs: Android Studio: 2024.2 AI-242.21829.142.2421.12409432 Xcode: version: 15.3/15E204a path: /usr/bin/xcodebuild Languages: Java: version: 17.0.12 path: /usr/bin/javac Ruby: version: 2.7.5 path: /Users/chris/.rbenv/shims/ruby npmPackages: "@react-native-community/cli": installed: 15.0.1 wanted: 15.0.1 react: installed: 18.3.1 wanted: 18.3.1 react-native: installed: 0.76.2 wanted: 0.76.2 react-native-macos: Not Found npmGlobalPackages: "*react-native*": Not Found Android: hermesEnabled: true newArchEnabled: true iOS: hermesEnabled: true newArchEnabled: true ``` ### Stacktrace or Logs ```text 
TypeError: Cannot read property 'add' of undefined at requestCurrentTransition (http://localhost:8081/index.bundle//&platform=ios&dev=true&lazy=true&minify=false&inlineSourceMap=false&modulesOnly=false&runModule=true&excludeSource=true&sourcePaths=url-server&app={BUNDLE_ID}:13106:36) at requestUpdateLane (http://localhost:8081/index.bundle//&platform=ios&dev=true&lazy=true&minify=false&inlineSourceMap=false&modulesOnly=false&runModule=true&excludeSource=true&sourcePaths=url-server&app={BUNDLE_ID}:15752:50) at dispatchSetState (http://localhost:8081/index.bundle//&platform=ios&dev=true&lazy=true&minify=false&inlineSourceMap=false&modulesOnly=false&runModule=true&excludeSource=true&sourcePaths=url-server&app={BUNDLE_ID}:9607:37) at anonymous (http://localhost:8081/index.bundle//&platform=ios&dev=true&lazy=true&minify=false&inlineSourceMap=false&modulesOnly=false&runModule=true&excludeSource=true&sourcePaths=url-server&app={BUNDLE_ID}:373831:17) at anonymous (http://localhost:8081/index.bundle//&platform=ios&dev=true&lazy=true&minify=false&inlineSourceMap=false&modulesOnly=false&runModule=true&excludeSource=true&sourcePaths=url-server&app={BUNDLE_ID}:373890:32) at startTransition (http://localhost:8081/index.bundle//&platform=ios&dev=true&lazy=true&minify=false&inlineSourceMap=false&modulesOnly=false&runModule=true&excludeSource=true&sourcePaths=url-server&app={BUNDLE_ID}:20591:16) at ?anon_0_ (http://localhost:8081/index.bundle//&platform=ios&dev=true&lazy=true&minify=false&inlineSourceMap=false&modulesOnly=false&runModule=true&excludeSource=true&sourcePaths=url-server&app={BUNDLE_ID}:373889:30) at next (native) at asyncGeneratorStep (http://localhost:8081/index.bundle//&platform=ios&dev=true&lazy=true&minify=false&inlineSourceMap=false&modulesOnly=false&runModule=true&excludeSource=true&sourcePaths=url-server&app={BUNDLE_ID}:23563:19) at _next 
(http://localhost:8081/index.bundle//&platform=ios&dev=true&lazy=true&minify=false&inlineSourceMap=false&modulesOnly=false&runModule=true&excludeSource=true&sourcePaths=url-server&app={BUNDLE_ID}:23577:29) at tryCallOne (address at InternalBytecode.js:1:1180) at anonymous (address at InternalBytecode.js:1:1874) ``` ### Reproducer https://github.com/chrismiddle10/sorry-dont-have-one ### Screenshots and Videos _No response_
Needs: Triage :mag:
low
Critical
2,673,497,871
PowerToys
Feature Request: Detect conflicting keyboard shortcuts
### Description of the new feature / enhancement I wish I had a screen cap of it, but if memory serves me right, the PowerToys.Run process used a little under 300MB of memory. Unfortunately, I don't have a screen shot of it. However, shortly after two distinct changes, memory usage skyrocketed. ![Image](https://github.com/user-attachments/assets/850ca2f7-c2e6-4d2c-b4df-81e6c1e95f30) The two changes are: - Upgrade to Windows 11 24H2 - Installation of OpenAI ChatGPT client from the Microsoft Store ChatGPT was my primary suspect because I noticed that it had a conflicting default keyboard shortcut (<kbd>Alt</kbd> +<kbd>Space</kbd>). Several cleanups, reinstallations and install methods later, changing the keyboard shortcut to anything else (<kbd>Alt</kbd> + <kbd>Shift</kbd> + <kbd>Space</kbd>) did the trick. So, just like some screen capture apps that, on startup, inform you that another process hooked into the <kbd>PrtScn</kbd> key, it would be nice if PowerToys would be able to detect for that condition and display a courtesy warning to the user. ### Scenario when this would be used? This feature can be useful at startup, but also any time while it's running. Apps are constantly competing for keyboard shortcuts, but it's often unclear which one will take precedence. ### Supporting information _No response_
Needs-Triage
low
Minor
2,673,503,901
angular
Combine provideHttpClient() and provideHttpClientTesting() into a function that provides them in the correct order
### Which @angular/* package(s) are relevant/related to the feature request? common ### Description In the [docs](https://angular.dev/guide/http/testing#setup-for-testing), there is guidance given to "provide `provideHttpClient()` before `provideHttpClientTesting()`, as `provideHttpClientTesting()` will overwrite parts of `provideHttpCient()`. Doing it the other way around can potentially break your tests.". I just got bit by this due to the methods being out of order and my backend was getting called (repeatedly) during my test execution. Could we get a function that will provide all necessities without having to worry about their order? ### Proposed solution IMO, `provideHttpClientTesting()` should be providing what `provideHttpClient()` is providing with the appropriate overrides, as I am unaware of any situation in which you would use `provideHttpClientTesting()` without `provideHttpClient()`, though I could be missing something. The result would look like: ``` providers: [provideHttpClientTesting()], ``` instead of: ``` providers: [provideHttpClient(), provideHttpClientTesting()], ``` With this approach, order wouldn't need to be kept by the developer and we could remove that guidance from the docs. ### Alternatives considered The alternative would be to create a wrapper method in each project. Something like: ``` export const provideHttpClientTestingWithClient = () => { return [provideHttpClient(), provideHttpClientTesting()] } ... providers: [...provideHttpClientTestingWithClient()] ``` But this seems like it could easily be included into Angular itself for a better developer experience.
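The "alternatives considered" wrapper can be sketched generically to show that it pins the ordering. The two `provide*` functions below are stand-in stubs, not the real Angular APIs:

```typescript
// Stand-in stubs for the real Angular provider functions (illustration only).
type Provider = { readonly token: string };
const provideHttpClient = (): Provider => ({ token: "HttpClient" });
const provideHttpClientTesting = (): Provider => ({ token: "HttpClientTesting" });

// Wrapper that fixes the required ordering so callers cannot get it wrong:
// the testing provider must come second, since it overrides parts of the real one.
function provideHttpClientWithTesting(): Provider[] {
  return [provideHttpClient(), provideHttpClientTesting()];
}

console.log(provideHttpClientWithTesting().map((p) => p.token).join(","));
// HttpClient,HttpClientTesting
```

Because the wrapper owns the array, the later-wins override order is guaranteed regardless of how the caller spreads it into `providers`.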
area: common/http
low
Minor
2,673,531,926
PowerToys
Is it possible to Download / Get only one of PowerToys' Utilities without installing the Full PowerToys Suite?
### Description of the new feature / enhancement This is probably a quirky question, and I don't know exactly where to ask. I would like Image Resizer, but I neither need nor want the full PowerToys suite. Is it possible to download and install only this one part? Thanks, Cheers. ### Scenario when this would be used? N/A ### Supporting information _No response_
Needs-Triage
low
Minor
2,673,542,901
vscode
Simplify Windows Start Menu entry
<!-- ⚠️⚠️ Do Not Delete This! feature_request_template ⚠️⚠️ --> <!-- Please read our Rules of Conduct: https://opensource.microsoft.com/codeofconduct/ --> <!-- Please search existing issues to avoid creating duplicates. --> <!-- Describe the feature you'd like. --> The current Start Menu entry is unnecessarily in a folder. This seems a bit unclean and frustrates me. I would recommend removing the folder, simply having the .lnk file. Current: ![Image](https://github.com/user-attachments/assets/8d92ba31-b97e-49c2-9c74-0b430d849eec) Suggestion: ![Image](https://github.com/user-attachments/assets/9444adc4-2c50-4e07-b4b2-7e222ce809fd) Minor cleanup
windows,under-discussion
low
Minor
2,673,544,920
go
cmd/compile: register allocation of rematerializable ops ignores the op output register constraints.
Note: as far as I know, this issue is not reachable with any Go source today, but it may be a roadblock as we do more with SIMD. When performing register allocation, rematerializable values [are not issued immediately](https://cs.opensource.google/go/go/+/master:src/cmd/compile/internal/ssa/regalloc.go;l=1450;drc=0c1627812406a76ac256a08bee985a9817372446). Instead, when the value is used as an input to a later op, [`allocValToReg` will copy the value](https://cs.opensource.google/go/go/+/master:src/cmd/compile/internal/ssa/regalloc.go;l=582;drc=0c1627812406a76ac256a08bee985a9817372446) and then assign it to a register based on the `allocValToReg` `mask` argument. This `mask` argument is the constraint on the input argument at the point of use. But the rematerializable value also has output register constraints, which are completely ignored. Thus register allocation may assign an invalid register. As far as I know this isn't reachable due to the limited number of rematerializable ops and a limited set of conflicting ops that may take them as an input. The obvious case here is a rematerializable op with a GPR output and users with FP inputs. https://go.dev/cl/629815 demonstrates this issue by manually using POR with an int constant.
You can build it with `GOARCH=amd64 GOAMD64=v2 go build runtime`, which fails with: ``` # runtime <autogenerated>:1: runtime.(*pageBits).popcntRange: invalid instruction: 00077 (/usr/local/google/home/mpratt/src/go/src/runtime/mpallocbits.go:116) XORL X1, X1 <autogenerated>:1: runtime.(*pageBits).popcntRange: invalid instruction: 00135 (/usr/local/google/home/mpratt/src/go/src/runtime/mpallocbits.go:113) XORL X0, X0 <autogenerated>:1: runtime.(*mspan).countAlloc: invalid instruction: 00038 (/usr/local/google/home/mpratt/src/go/src/runtime/mbitmap.go:1418) XORL X1, X1 <autogenerated>:1: runtime.sweepLocked.countAlloc: invalid instruction: 00048 (<autogenerated>:1) XORL X1, X1 <autogenerated>:1: runtime.(*liveUserArenaChunk).countAlloc: invalid instruction: 00050 (<autogenerated>:1) XORL X1, X1 <autogenerated>:1: runtime.liveUserArenaChunk.countAlloc: invalid instruction: 00053 (<autogenerated>:1) XORL X1, X1 <autogenerated>:1: runtime.(*sweepLocked).countAlloc: invalid instruction: 00050 (<autogenerated>:1) XORL X1, X1 <autogenerated>:1: go:(**mspan).runtime.countAlloc: invalid instruction: 00059 (<autogenerated>:1) XORL X1, X1 <autogenerated>:1: runtime.(*pageCache).allocN: invalid instruction: 00175 (/usr/local/google/home/mpratt/src/go/src/runtime/mpagecache.go:63) XORL X1, X1 <autogenerated>:1: runtime.(*pageAlloc).allocToCache: invalid instruction: 00506 (/usr/local/google/home/mpratt/src/go/src/runtime/mpagecache.go:171) XORL X1, X1 <autogenerated>:1: too many errors ``` Looking at the SSA of `runtime.sweepLocked.countAlloc`, we see ``` (+1418) v47 = POPCNTQ <int> v46 : SI (-1418) v34 = Copy <int> v47 : X0 (-1418) v38 = MOVQconst <int> [0] : X1 (1418) v48 = POR <int> v34 v38 : X0 ... (-1418) v33 = Copy <int> v48 : SI (1418) v50 = ADDQ <int> v58 v33 : BX **(count[int])** ``` The problem here is that `MOVQconst` is assigned register X1, even though it has an output constraint for gp registers only. It is unclear to me what we want to happen here. 
Probably one of: 1. Register allocation emits an extra Copy for rematerializable outputs to incompatible inputs, similar to how v34 and v33 above copy between gp and fp registers. 2. Or, earlier passes should prevent incompatible values from ever becoming arguments to later ops. e.g., some pass would convert `MOVQconst` to `MOVSDconst`. (2) alludes to why I don't think this can be triggered in Go code today: the main incompatibility is between gp and fp registers, but fp registers today are used almost exclusively for float operations, so the inputs are already floats. But if we start doing more SIMD, this will be less true, as many operations treat the fp registers as vectors of non-float types. cc @golang/compiler @randall77
NeedsInvestigation,compiler/runtime
low
Critical
2,673,576,622
godot
Antialiased draw_* primitives become blurry when window resolution exceeds base viewport size (and stretch mode = canvas_items or scale factor > 1)
### Tested versions - Reproducible in: 4.3.stable, 4.4.dev4 ### System information Windows 10.0.19045 - Multi-window, 1 monitor - Vulkan (Forward+) - dedicated GeForce GTX 765M - Intel(R) Core(TM) i7-4700MQ CPU @ 2.40GHz (8 threads) ### Issue description **Background** CanvasItem (and RenderingServer) contains several draw_* functions which allow convenient rendering of 2D primitives such as circles and lines, with minimal setup. **The problem** Some of these functions feature optional antialiasing, which works well if the runtime window size matches the viewport's base size defined in the project settings. Unfortunately, when stretch mode is set to canvas_items (or scale factor > 1) and the window exceeds the base viewport width/height, the transparent feathers are not automatically scaled, resulting in blurry edges, as demonstrated in the MRP. I have verified that this happens for circles, lines and rects. Using the RenderingServer functions directly yields the same results. Note that this is not an inherent limitation of these functions, because changing the viewport base size to match the screen size always results in smooth edges, even at very high resolutions. Note: this is vaguely related to [proposal #7309](https://github.com/godotengine/godot-proposals/issues/7309). **Potential fix** It's relatively easy to achieve resolution-independent antialiasing of 2D shapes using shaders (SDF font rendering is such an example). For circles, the shader is very simple (this is included in the MRP for comparison with the default): ```glsl shader_type canvas_item; void fragment() { float dist_to_center = distance(UV, vec2(0.5, 0.5)); float derivative = fwidth(dist_to_center); float alpha = smoothstep(0.5, 0.5 - derivative, dist_to_center); COLOR.a *= alpha; } ``` **Image (from the MRP)** ![Image](https://github.com/user-attachments/assets/b65bc56d-d624-49fd-8a7f-69b652bbabad) ### Steps to reproduce Import the MRP and run the example scene. 
### Minimal reproduction project (MRP) [blurry_draw_antialiasing.zip](https://github.com/user-attachments/files/17821536/blurry_draw_antialiasing.zip)
bug,topic:rendering,topic:2d
low
Minor
2,673,577,776
godot
Calling Tween.custom_step() from within a tween freezes the game and produces an unclear fatal C++ error
### Tested versions - Reproducible in `v4.3.stable.mono.official` [77dcf97d8] ### System information Godot v4.3.stable.mono - macOS 15.0.1 - Vulkan (Forward+) - integrated Apple M3 Max - Apple M3 Max (14 Threads) ### Issue description While I'm aware that calling CustomStep() from within a tween can be undefined behavior (I would understand if that particular edge case would be a wontfix!), I still think that a clearer error should be printed, and the game should definitely not freeze. When CustomStep() is called, this gets printed right before the program crashes: ``` ERROR: FATAL: Index p_index = 4 is out of bounds (((Vector<T> *)(this))->_cowdata.size() = 3). at: operator[] (./core/templates/vector.h:52) ``` ("3" is the amount of tweeners, so it may vary depending on the amount of queued tweens) Avoiding this issue is as easy as wrapping the CustomStep() call inside a `Callable` + with `CallDeferred()`. ### Steps to reproduce The smallest code that can crash the game and trigger the above error is: ```cs bool finished = false; Tween tween = GetTree().CreateTween(); tween.TweenCallback(Callable.From(() => { if (finished) return; finished = true; tween.CustomStep(1e9); })); ``` *(the `finished` bool is to prevent a stack overflow which would be happening because of that particular tween being called again over and over)* Replacing `tween.CustomStep(1e9);` with `Callable.From(() => { tween.CustomStep(1e9); }).CallDeferred()` is enough to fix the issue. ### Minimal reproduction project (MRP) [MRP-TweenCrash.zip](https://github.com/user-attachments/files/17821663/MRP-TweenCrash.zip) (Please note that the error might not even be visible from the Godot editor; you might need to run the game via the terminal or an editor like e.g. Jetbrains Rider)
bug,topic:core
low
Critical
2,673,641,014
godot
Appending to a Node's array through an EditorScript doesn't save the changes to the scene file
### Tested versions - Reproducible in v4.3.stable.official [77dcf97d8] ### System information Godot v4.3.stable - Windows 10.0.19045 - Vulkan (Mobile) - dedicated NVIDIA GeForce GTX 1070 (NVIDIA; 30.0.14.7141) - Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz (8 Threads) ### Issue description After running an EditorScript that appends to a Node's array, the new value of the array is not included in the saved scene file. I have found that the workaround is to use a local variable array in the EditorScript and assign that array as a whole, rather than call .append() directly on the Node's array, so this has something to do with using .append(). This won't work: ```gdscript for i in range(0, 5): current_map.counted_ints.append(i) ``` This will be fine: ```gdscript var temp:Array[int] = [] for i in range(0, 5): temp.append(i) current_map.counted_ints = temp ``` I notice also that the "revert" icon will show up when assigning the whole array, but it won't show up on an array that has been appended to. If you play the game at this point, the value will return to its "pre-appended" value from before you ran the EditorScript, I guess because those changes haven't been saved to the scene file. If you modify the array in the editor, it will now save, and the revert icon appears, but pressing the revert button reverts to the post-appended value, and saving now will make it disappear from the scene file again.
### Steps to reproduce Open the array_test scene or create a 3D scene and attach the array_node.gd script to the root node Contents of array_node.gd (also in project file) ```gdscript extends Node3D class_name ArrayNode @export var counted_ints:Array[int] @export var assigned_ints:Array[int] ``` Then go to the array_test.gd and run it as an EditorScript Contents of array_test.gd (also in project file): ```gdscript @tool extends EditorScript func _run() -> void: var current_map:ArrayNode = get_scene() as ArrayNode if current_map == null: return current_map.counted_ints = [] for i in range(0, 5): current_map.counted_ints.append(i) # these changes will not be saved to the .tscn file current_map.assigned_ints = [0, 1, 2, 3, 4] # this will be saved to the .tscn file ``` After running that, save the scene. The value for assigned_ints will appear in the scene file but the value for counted_ints will not. Contents of scene file after saving: ``` [gd_scene load_steps=2 format=3 uid="uid://ld3p44qg5h4o"] [ext_resource type="Script" path="res://array_node.gd" id="1_l8tlx"] [node name="ArrayTest" type="Node3D"] script = ExtResource("1_l8tlx") assigned_ints = Array[int]([0, 1, 2, 3, 4]) ``` Also note that assigned_ints will have a revert button and counted_ints will not. If you modify counted_ints in the editor, the revert button will appear and pressing it will revert to the post-appended version. But playing the game or leaving Godot and returning will revert to the pre-appended version, because the post-appended version hasn't been saved to the scene file. ### Minimal reproduction project (MRP) [array_append_editorscript.zip](https://github.com/user-attachments/files/17822731/array_append_editorscript.zip)
bug,topic:gdscript,topic:editor
low
Minor
2,673,664,257
flutter
TextField as WidgetSpan mixes parent TextField initial text when delete or arrow keys used
### Steps to reproduce

Have attached complete sample code which shows the issue, but steps to recreate are:

1. create a new flutter app
2. create a custom TextEditingController and set the "text" value within the constructor to any text string (this is to make the issue easier to see; if this value isn't set, instead of the text being replaced in the WidgetSpan TextField it is deleted, so still a problem)
3. within the custom controller add a WidgetSpan with a child of a TextField and return it as the child of a TextSpan
4. in the app add a TextField within MainApp using the custom controller as its controller
5. run the app against a Windows or Mac target (I assume linux)
6. in the WidgetSpan text field enter some text then press the delete key or arrow keys

See attached example for a bare bones app showing the problem.

### Expected results

Normal text field behaviour for deleting, moving cursor etc...

### Actual results

Windows and Mac - Text field text is replaced with the "text" value from the constructor of the parent text field.

Web - seems to work correctly

Android - when using the software keyboard the delete key works correctly and can navigate using touch, however when using a hardware keyboard via the emulator the problem occurs

iOS - software keyboard works fine too, however when using a hardware keyboard via the emulator the delete key works fine but when pressing an arrow left or right the problem occurs

### Code sample

<details open><summary>Code sample</summary>

```dart
import 'package:flutter/material.dart';

void main() {
  runApp(const MainApp());
}

class MainApp extends StatefulWidget {
  const MainApp({super.key});

  @override
  State<MainApp> createState() => _MainAppState();
}

class _MainAppState extends State<MainApp> {
  late CustomTextController _controller;
  late ChildTextField _childTextField;

  @override
  void initState() {
    super.initState();
    _childTextField = ChildTextField();
    _controller = CustomTextController(childTextField: _childTextField);
  }

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        body: Container(
          height: 300,
          width: 300,
          child: TextField(
            controller: _controller,
            maxLines: null,
          ),
        ),
      ),
    );
  }
}

class CustomTextController extends TextEditingController {
  final ChildTextField childTextField;

  CustomTextController({
    required this.childTextField,
  }) {
    text = "Initial text in parent text field";
  }

  @override
  TextSpan buildTextSpan({
    required BuildContext context,
    TextStyle? style,
    required bool withComposing,
  }) {
    final List<InlineSpan> children = [];
    children.add(TextSpan(text: "First line in parent text field", style: style));
    children.add(TextSpan(text: "\n\n\n", style: style));
    children.add(
      WidgetSpan(
        child: Container(
            height: 40,
            width: 500,
            decoration: BoxDecoration(border: Border.all(color: Colors.red)),
            child: childTextField),
      ),
    );
    children.add(TextSpan(text: "\n\n", style: style));
    children.add(TextSpan(text: "End line of parent text field", style: style));
    return TextSpan(style: style, children: children);
  }
}

class ChildTextField extends TextField {
  const ChildTextField({super.key});

  @override
  State<ChildTextField> createState() => _ChildTextFieldState();
}

class _ChildTextFieldState extends State<ChildTextField> {
  late TextEditingController _controller;

  @override
  void initState() {
    super.initState();
    _controller = TextEditingController();
  }

  @override
  void dispose() {
    _controller.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return TextField(
      controller: _controller,
    );
  }
}
```

</details>

### Screenshots or Video

<details open>
<summary>Screenshots / Video demonstration</summary>

Video showing the issue on Windows: when the delete key was pressed, the text was replaced with the text constructor value from the parent TextField.

</details>

https://github.com/user-attachments/assets/e3bf21a8-f422-4c6f-a279-41fcb8ac7fde

In case it's of use: when I was reviewing editable_text.dart in the Flutter source code I could see occurrences of the wrong value being passed in to a function, i.e. the text value from the parent TextField was being passed into a function being called for the WidgetSpan TextField. The watch window shows the _value variable being the child TextField value while the value parameter is set to text from the parent TextField.

<img width="1237" alt="Flutter WidgetSpan TextField Bug Debug Screenshot EditableText" src="https://github.com/user-attachments/assets/dc90c8c5-2d76-4844-abd2-174fafdd5ad0">

### Logs

<details open><summary>Logs</summary>

```console
[Paste your logs here]
```

</details>

### Flutter Doctor output

<details open><summary>Doctor output</summary>

```console
PS C:\dev\repos\widget_span_text_field_example> flutter doctor -v
[√] Flutter (Channel stable, 3.24.5, on Microsoft Windows [Version 10.0.22631.4460], locale en-GB)
    • Flutter version 3.24.5 on channel stable at C:\dev\flutter
    • Upstream repository https://github.com/flutter/flutter.git
    • Framework revision dec2ee5c1f (6 days ago), 2024-11-13 11:13:06 -0800
    • Engine revision a18df97ca5
    • Dart version 3.5.4
    • DevTools version 2.37.3
[√] Windows Version (Installed version of Windows is version 10 or higher)
[!] Android toolchain - develop for Android devices (Android SDK version 34.0.0)
    • Android SDK at C:\Users\t\AppData\Local\Android\sdk
    X cmdline-tools component is missing
      Run `path/to/sdkmanager --install "cmdline-tools;latest"`
      See https://developer.android.com/studio/command-line for more details.
    X Android license status unknown.
      Run `flutter doctor --android-licenses` to accept the SDK licenses.
      See https://flutter.dev/to/windows-android-setup for more details.
[√] Chrome - develop for the web
    • Chrome at C:\Program Files\Google\Chrome\Application\chrome.exe
[√] Visual Studio - develop Windows apps (Visual Studio Enterprise 2022 17.12.0 Preview 1.0)
    • Visual Studio at C:\Program Files\Microsoft Visual Studio\2022\Preview
    • Visual Studio Enterprise 2022 version 17.12.35209.166
    • The current Visual Studio installation is a pre-release version. It may not be supported by Flutter yet.
    • Windows 10 SDK version 10.0.22621.0
[√] Android Studio (version 2024.1)
    • Android Studio at C:\Users\t\AppData\Local\Programs\Android Studio
    • Flutter plugin can be installed from: https://plugins.jetbrains.com/plugin/9212-flutter
    • Dart plugin can be installed from: https://plugins.jetbrains.com/plugin/6351-dart
    • Java version OpenJDK Runtime Environment (build 17.0.11+0--11852314)
[√] IntelliJ IDEA Community Edition (version 2024.2)
    • IntelliJ at C:\Users\t\AppData\Local\Programs\IntelliJ IDEA Community Edition
    • Flutter plugin version 81.1.3
    • Dart plugin version 242.20629
[√] IntelliJ IDEA Ultimate Edition (version 2023.3)
    • IntelliJ at C:\Users\t\AppData\Local\Programs\IntelliJ IDEA Ultimate
    • Flutter plugin version 77.0.1
    • Dart plugin version 233.13135.65
[√] VS Code, 64-bit edition (version unknown)
    • VS Code at C:\Program Files\Microsoft VS Code Insiders
    • Flutter extension version 3.101.20241031
    X Unable to determine VS Code version.
[√] VS Code (version 1.96.0-insider)
    • VS Code at C:\Users\t\AppData\Local\Programs\Microsoft VS Code Insiders
    • Flutter extension version 3.101.20241031
[√] Connected device (4 available)
    • sdk gphone64 x86 64 (mobile) • emulator-5554 • android-x64 • Android 14 (API 34) (emulator)
    • Windows (desktop) • windows • windows-x64 • Microsoft Windows [Version 10.0.22631.4460]
    • Chrome (web) • chrome • web-javascript • Google Chrome 131.0.6778.69
    • Edge (web) • edge • web-javascript • Microsoft Edge 131.0.2903.51
[√] Network resources
    • All expected network resources are available.

! Doctor found issues in 1 category.
```

</details>
a: text input,framework,has reproducible steps,P2,team-text-input,triaged-text-input,found in release: 3.24,found in release: 3.27
low
Critical
2,673,704,531
terminal
WT portable mode SUI warning needs higher contrast in dark mode (currently 2.26:1, needs 3:1)
^read title
Issue-Bug,Area-Accessibility,Product-Terminal
low
Minor
2,673,715,497
godot
.import / .uid file is not removed when original file is not in root directory and gets removed outside editor
### Tested versions

4.4 dev4
Likely earlier too.

### System information

Windows 10.0.19045 - Multi-window, 2 monitors - Vulkan (Forward+) - dedicated NVIDIA GeForce GTX 1060 (NVIDIA; 32.0.15.6603) - Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz (8 threads)

### Issue description

When you delete a file outside the editor, Godot tries to delete the corresponding .import/.uid file. However, this only happens if the file is in the project's root directory. If it's in any subdirectory, the other file just stays there and you have to delete it manually.

### Steps to reproduce

1. Open the project's root directory in the OS' file manager
2. Delete icon.svg
3. Focus the editor
4. The engine will process FileSystem and automatically delete icon.svg.import.
5. Try again with icon.svg in a subfolder

### Minimal reproduction project (MRP)

N/A
bug,topic:editor
low
Major
2,673,802,402
go
runtime: `fatal error: runtime: mcall called on m->g0 stack` while concurrently fetching /profile and /trace
### Go version

go version go1.23.3 linux/amd64

### Output of `go env` in your module/workspace:

```shell
GO111MODULE=''
GOARCH='amd64'
GOBIN=''
GOCACHE='/home/piob/.cache/go-build'
GOENV='/home/piob/.config/go/env'
GOEXE=''
GOEXPERIMENT=''
GOFLAGS=''
GOHOSTARCH='amd64'
GOHOSTOS='linux'
GOINSECURE=''
GOMODCACHE='/home/piob/go/pkg/mod'
GONOPROXY=''
GONOSUMDB=''
GOOS='linux'
GOPATH='/home/piob/go'
GOPRIVATE=''
GOPROXY='https://proxy.golang.org,direct'
GOROOT='/usr/local/go'
GOSUMDB='sum.golang.org'
GOTMPDIR=''
GOTOOLCHAIN='auto'
GOTOOLDIR='/usr/local/go/pkg/tool/linux_amd64'
GOVCS=''
GOVERSION='go1.23.3'
GODEBUG=''
GOTELEMETRY='local'
GOTELEMETRYDIR='/home/piob/.config/go/telemetry'
GCCGO='gccgo'
GOAMD64='v1'
AR='ar'
CC='gcc'
CXX='g++'
CGO_ENABLED='1'
GOMOD='/dev/null'
GOWORK=''
CGO_CFLAGS='-O2 -g'
CGO_CPPFLAGS=''
CGO_CXXFLAGS='-O2 -g'
CGO_FFLAGS='-O2 -g'
CGO_LDFLAGS='-O2 -g'
PKG_CONFIG='pkg-config'
GOGCCFLAGS='-fPIC -m64 -pthread -Wl,--no-gc-sections -fmessage-length=0 -ffile-prefix-map=/tmp/go-build1174589415=/tmp/go-build -gno-record-gcc-switches'
```

### What did you do?

Run the attached go program, which does some computation, and exposes profiling endpoints. Fetch the cpu profile, and while it is being captured, fetch execution trace. Specifically, to programmatically assure that cpu profile collection has started, hit the profile endpoint twice, expect one to fail with 500 error (only one profile can be captured concurrently), and then immediately start capturing the trace. Repeat many times to reproduce the issue.

Attached is the go program code, and a rust binary that will repeatedly launch this program, and hit the profiling endpoints. I tried writing a go program to fetch the profiles, but it doesn't seem to repro, I suspect that race condition is subtle. The rust program has a concurrency target set to 32; problem seems to repro when all cores are busy.

Unpack the zip and:

[repro.zip](https://github.com/user-attachments/files/17822364/repro.zip)

$ go build fib.go
$ cargo run test

### What did you see happen?

Sometimes the go program ends up crashing. Output:

[err.txt](https://github.com/user-attachments/files/17822378/err.txt)

### What did you expect to see?

Successfully produced profiles without crashing.
NeedsInvestigation,compiler/runtime
low
Critical
2,673,803,581
ollama
ggml.c:4044: GGML_ASSERT(view_src == NULL || data_size == 0 || data_size + view_offs <= ggml_nbytes(view_src)) failed
### What is the issue?

On certain API requests, the server throws a segmentation fault error and the API responds with an HTTP 500. So far, I have encountered this twice in thousands of requests. Unfortunately I do not have the particular prompts that resulted in this logged, but I do not expect this to be directly reproducible based on a prompt.

Full stack trace:

```
ggml.c:4044: GGML_ASSERT(view_src == NULL || data_size == 0 || data_size + view_offs <= ggml_nbytes(view_src)) failed
SIGSEGV: segmentation violation
PC=0x7ae06884d1d7 m=4 sigcode=1 addr=0x204803fbc
signal arrived during cgo execution

goroutine 7 gp=0xc000156000 m=4 mp=0xc00004d808 [syscall]:
runtime.cgocall(0x5bb738602e90, 0xc000056b60)
	runtime/cgocall.go:157 +0x4b fp=0xc000056b38 sp=0xc000056b00 pc=0x5bb7383853cb
github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7adfec006460, {0x1, 0x7adfec3acb70, 0x0, 0x0, 0x7adfec3ad380, 0x7adfec3adb90, 0x7adfec17b380, 0x7adfd10c2910, 0x0, ...})
	_cgo_gotypes.go:543 +0x52 fp=0xc000056b60 sp=0xc000056b38 pc=0x5bb738482952
github.com/ollama/ollama/llama.(*Context).Decode.func1(0x5bb7385fed4b?, 0x7adfec006460?)
	github.com/ollama/ollama/llama/llama.go:167 +0xd8 fp=0xc000056c80 sp=0xc000056b60 pc=0x5bb738484e78
github.com/ollama/ollama/llama.(*Context).Decode(0xc000056d68?, 0x1?)
github.com/ollama/ollama/llama/llama.go:167 +0x17 fp=0xc000056cc8 sp=0xc000056c80 pc=0x5bb738484cd7 main.(*Server).processBatch(0xc000128120, 0xc000126150, 0xc0001261c0) github.com/ollama/ollama/llama/runner/runner.go:424 +0x29e fp=0xc000056ed0 sp=0xc000056cc8 pc=0x5bb7385fdd7e main.(*Server).run(0xc000128120, {0x5bb73893ca40, 0xc00007c050}) github.com/ollama/ollama/llama/runner/runner.go:338 +0x1a5 fp=0xc000056fb8 sp=0xc000056ed0 pc=0x5bb7385fd765 main.main.gowrap2() github.com/ollama/ollama/llama/runner/runner.go:901 +0x28 fp=0xc000056fe0 sp=0xc000056fb8 pc=0x5bb738601ec8 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc000056fe8 sp=0xc000056fe0 pc=0x5bb7383edde1 created by main.main in goroutine 1 github.com/ollama/ollama/llama/runner/runner.go:901 +0xc2b goroutine 1 gp=0xc0000061c0 m=nil [IO wait, 520 minutes]: runtime.gopark(0xc000038a08?, 0xc00014b908?, 0xb1?, 0x7a?, 0x2000?) runtime/proc.go:402 +0xce fp=0xc00014b888 sp=0xc00014b868 pc=0x5bb7383bc00e runtime.netpollblock(0xc00014b920?, 0x38384b26?, 0xb7?) runtime/netpoll.go:573 +0xf7 fp=0xc00014b8c0 sp=0xc00014b888 pc=0x5bb7383b4257 internal/poll.runtime_pollWait(0x7ae067dc7fe0, 0x72) runtime/netpoll.go:345 +0x85 fp=0xc00014b8e0 sp=0xc00014b8c0 pc=0x5bb7383e8aa5 internal/poll.(*pollDesc).wait(0x3?, 0x7c?, 0x0) internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00014b908 sp=0xc00014b8e0 pc=0x5bb7384389c7 internal/poll.(*pollDesc).waitRead(...) internal/poll/fd_poll_runtime.go:89 internal/poll.(*FD).Accept(0xc000150080) internal/poll/fd_unix.go:611 +0x2ac fp=0xc00014b9b0 sp=0xc00014b908 pc=0x5bb738439e8c net.(*netFD).accept(0xc000150080) net/fd_unix.go:172 +0x29 fp=0xc00014ba68 sp=0xc00014b9b0 pc=0x5bb7384a88a9 net.(*TCPListener).accept(0xc0000321e0) net/tcpsock_posix.go:159 +0x1e fp=0xc00014ba90 sp=0xc00014ba68 pc=0x5bb7384b95de net.(*TCPListener).Accept(0xc0000321e0) net/tcpsock.go:327 +0x30 fp=0xc00014bac0 sp=0xc00014ba90 pc=0x5bb7384b8930 net/http.(*onceCloseListener).Accept(0xc000190090?) 
<autogenerated>:1 +0x24 fp=0xc00014bad8 sp=0xc00014bac0 pc=0x5bb7385dfa44 net/http.(*Server).Serve(0xc000168000, {0x5bb73893c400, 0xc0000321e0}) net/http/server.go:3260 +0x33e fp=0xc00014bc08 sp=0xc00014bad8 pc=0x5bb7385d685e main.main() github.com/ollama/ollama/llama/runner/runner.go:921 +0xfcc fp=0xc00014bf50 sp=0xc00014bc08 pc=0x5bb738601c4c runtime.main() runtime/proc.go:271 +0x29d fp=0xc00014bfe0 sp=0xc00014bf50 pc=0x5bb7383bbbdd runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc00014bfe8 sp=0xc00014bfe0 pc=0x5bb7383edde1 goroutine 2 gp=0xc000006c40 m=nil [force gc (idle), 3 minutes]: runtime.gopark(0x1dd19be52e23?, 0x0?, 0x0?, 0x0?, 0x0?) runtime/proc.go:402 +0xce fp=0xc000046fa8 sp=0xc000046f88 pc=0x5bb7383bc00e runtime.goparkunlock(...) runtime/proc.go:408 runtime.forcegchelper() runtime/proc.go:326 +0xb8 fp=0xc000046fe0 sp=0xc000046fa8 pc=0x5bb7383bbe98 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc000046fe8 sp=0xc000046fe0 pc=0x5bb7383edde1 created by runtime.init.6 in goroutine 1 runtime/proc.go:314 +0x1a goroutine 3 gp=0xc000007180 m=nil [GC sweep wait]: runtime.gopark(0x5bb738b09e01?, 0x5bb738b09e40?, 0xc?, 0x9?, 0x1?) runtime/proc.go:402 +0xce fp=0xc000047780 sp=0xc000047760 pc=0x5bb7383bc00e runtime.goparkunlock(...) runtime/proc.go:408 runtime.bgsweep(0xc00006e000) runtime/mgcsweep.go:318 +0xdf fp=0xc0000477c8 sp=0xc000047780 pc=0x5bb7383a6b9f runtime.gcenable.gowrap1() runtime/mgc.go:203 +0x25 fp=0xc0000477e0 sp=0xc0000477c8 pc=0x5bb73839b685 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc0000477e8 sp=0xc0000477e0 pc=0x5bb7383edde1 created by runtime.gcenable in goroutine 1 runtime/mgc.go:203 +0x66 goroutine 4 gp=0xc000007340 m=nil [GC scavenge wait]: runtime.gopark(0x10000?, 0x166b9ea?, 0x0?, 0x0?, 0x0?) runtime/proc.go:402 +0xce fp=0xc000047f78 sp=0xc000047f58 pc=0x5bb7383bc00e runtime.goparkunlock(...) 
runtime/proc.go:408 runtime.(*scavengerState).park(0x5bb738b0a4c0) runtime/mgcscavenge.go:425 +0x49 fp=0xc000047fa8 sp=0xc000047f78 pc=0x5bb7383a4549 runtime.bgscavenge(0xc00006e000) runtime/mgcscavenge.go:658 +0x59 fp=0xc000047fc8 sp=0xc000047fa8 pc=0x5bb7383a4af9 runtime.gcenable.gowrap2() runtime/mgc.go:204 +0x25 fp=0xc000047fe0 sp=0xc000047fc8 pc=0x5bb73839b625 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc000047fe8 sp=0xc000047fe0 pc=0x5bb7383edde1 created by runtime.gcenable in goroutine 1 runtime/mgc.go:204 +0xa5 goroutine 5 gp=0xc000007c00 m=nil [finalizer wait, 3 minutes]: runtime.gopark(0x0?, 0x5bb7389381a0?, 0x0?, 0x60?, 0x1000000010?) runtime/proc.go:402 +0xce fp=0xc000046620 sp=0xc000046600 pc=0x5bb7383bc00e runtime.runfinq() runtime/mfinal.go:194 +0x107 fp=0xc0000467e0 sp=0xc000046620 pc=0x5bb73839a6c7 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc0000467e8 sp=0xc0000467e0 pc=0x5bb7383edde1 created by runtime.createfing in goroutine 1 runtime/mfinal.go:164 +0x3d goroutine 22 gp=0xc000196000 m=nil [select]: runtime.gopark(0xc000147a80?, 0x2?, 0x18?, 0x77?, 0xc000147824?) runtime/proc.go:402 +0xce fp=0xc000147698 sp=0xc000147678 pc=0x5bb7383bc00e runtime.selectgo(0xc000147a80, 0xc000147820, 0xc00037de00?, 0x0, 0x2?, 0x1) runtime/select.go:327 +0x725 fp=0xc0001477b8 sp=0xc000147698 pc=0x5bb7383cd3e5 main.(*Server).completion(0xc000128120, {0x5bb73893c5b0, 0xc0000e22a0}, 0xc0000c0360) github.com/ollama/ollama/llama/runner/runner.go:652 +0x8fe fp=0xc000147ab8 sp=0xc0001477b8 pc=0x5bb7385ff6de main.(*Server).completion-fm({0x5bb73893c5b0?, 0xc0000e22a0?}, 0x5bb7385dab8d?) <autogenerated>:1 +0x36 fp=0xc000147ae8 sp=0xc000147ab8 pc=0x5bb7386026b6 net/http.HandlerFunc.ServeHTTP(0xc00010cb60?, {0x5bb73893c5b0?, 0xc0000e22a0?}, 0x10?) 
net/http/server.go:2171 +0x29 fp=0xc000147b10 sp=0xc000147ae8 pc=0x5bb7385d3629 net/http.(*ServeMux).ServeHTTP(0x5bb73838ef85?, {0x5bb73893c5b0, 0xc0000e22a0}, 0xc0000c0360) net/http/server.go:2688 +0x1ad fp=0xc000147b60 sp=0xc000147b10 pc=0x5bb7385d54ad net/http.serverHandler.ServeHTTP({0x5bb73893b900?}, {0x5bb73893c5b0?, 0xc0000e22a0?}, 0x6?) net/http/server.go:3142 +0x8e fp=0xc000147b90 sp=0xc000147b60 pc=0x5bb7385d64ce net/http.(*conn).serve(0xc000190090, {0x5bb73893ca08, 0xc00010adb0}) net/http/server.go:2044 +0x5e8 fp=0xc000147fb8 sp=0xc000147b90 pc=0x5bb7385d2268 net/http.(*Server).Serve.gowrap3() net/http/server.go:3290 +0x28 fp=0xc000147fe0 sp=0xc000147fb8 pc=0x5bb7385d6c48 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc000147fe8 sp=0xc000147fe0 pc=0x5bb7383edde1 created by net/http.(*Server).Serve in goroutine 1 net/http/server.go:3290 +0x4b4 goroutine 21 gp=0xc000082a80 m=nil [GC worker (idle), 4 minutes]: runtime.gopark(0x1db5aaee6275?, 0x3?, 0x58?, 0xf?, 0x0?) runtime/proc.go:402 +0xce fp=0xc0000cdf50 sp=0xc0000cdf30 pc=0x5bb7383bc00e runtime.gcBgMarkWorker() runtime/mgc.go:1310 +0xe5 fp=0xc0000cdfe0 sp=0xc0000cdf50 pc=0x5bb73839d585 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc0000cdfe8 sp=0xc0000cdfe0 pc=0x5bb7383edde1 created by runtime.gcBgMarkStartWorkers in goroutine 18 runtime/mgc.go:1234 +0x1c goroutine 41 gp=0xc000082fc0 m=nil [GC worker (idle), 3 minutes]: runtime.gopark(0x1db5aaee606c?, 0x3?, 0xc?, 0xfe?, 0x0?) runtime/proc.go:402 +0xce fp=0xc0000cf750 sp=0xc0000cf730 pc=0x5bb7383bc00e runtime.gcBgMarkWorker() runtime/mgc.go:1310 +0xe5 fp=0xc0000cf7e0 sp=0xc0000cf750 pc=0x5bb73839d585 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc0000cf7e8 sp=0xc0000cf7e0 pc=0x5bb7383edde1 created by runtime.gcBgMarkStartWorkers in goroutine 18 runtime/mgc.go:1234 +0x1c goroutine 50 gp=0xc0005e2000 m=nil [GC worker (idle), 3 minutes]: runtime.gopark(0x1dd19bf2c88b?, 0x3?, 0xbf?, 0x40?, 0x0?) 
runtime/proc.go:402 +0xce fp=0xc0000c8750 sp=0xc0000c8730 pc=0x5bb7383bc00e runtime.gcBgMarkWorker() runtime/mgc.go:1310 +0xe5 fp=0xc0000c87e0 sp=0xc0000c8750 pc=0x5bb73839d585 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc0000c87e8 sp=0xc0000c87e0 pc=0x5bb7383edde1 created by runtime.gcBgMarkStartWorkers in goroutine 18 runtime/mgc.go:1234 +0x1c goroutine 42 gp=0xc000083180 m=nil [GC worker (idle), 66 minutes]: runtime.gopark(0x1a5228359582?, 0x3?, 0x1c?, 0x4?, 0x0?) runtime/proc.go:402 +0xce fp=0xc0000cff50 sp=0xc0000cff30 pc=0x5bb7383bc00e runtime.gcBgMarkWorker() runtime/mgc.go:1310 +0xe5 fp=0xc0000cffe0 sp=0xc0000cff50 pc=0x5bb73839d585 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc0000cffe8 sp=0xc0000cffe0 pc=0x5bb7383edde1 created by runtime.gcBgMarkStartWorkers in goroutine 18 runtime/mgc.go:1234 +0x1c goroutine 11112 gp=0xc000156a80 m=nil [IO wait, 4 minutes]: runtime.gopark(0x10?, 0x10?, 0xf0?, 0xbd?, 0xb?) runtime/proc.go:402 +0xce fp=0xc00019bda8 sp=0xc00019bd88 pc=0x5bb7383bc00e runtime.netpollblock(0x5bb738422558?, 0x38384b26?, 0xb7?) runtime/netpoll.go:573 +0xf7 fp=0xc00019bde0 sp=0xc00019bda8 pc=0x5bb7383b4257 internal/poll.runtime_pollWait(0x7ae067dc7ee8, 0x72) runtime/netpoll.go:345 +0x85 fp=0xc00019be00 sp=0xc00019bde0 pc=0x5bb7383e8aa5 internal/poll.(*pollDesc).wait(0xc000164a00?, 0xc00010ab81?, 0x0) internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00019be28 sp=0xc00019be00 pc=0x5bb7384389c7 internal/poll.(*pollDesc).waitRead(...) 
internal/poll/fd_poll_runtime.go:89 internal/poll.(*FD).Read(0xc000164a00, {0xc00010ab81, 0x1, 0x1}) internal/poll/fd_unix.go:164 +0x27a fp=0xc00019bec0 sp=0xc00019be28 pc=0x5bb73843951a net.(*netFD).Read(0xc000164a00, {0xc00010ab81?, 0xc00019bf48?, 0x5bb7383ea6d0?}) net/fd_posix.go:55 +0x25 fp=0xc00019bf08 sp=0xc00019bec0 pc=0x5bb7384a77a5 net.(*conn).Read(0xc00004a000, {0xc00010ab81?, 0x385041544f792f41?, 0xc00010ab78?}) net/net.go:185 +0x45 fp=0xc00019bf50 sp=0xc00019bf08 pc=0x5bb7384b1a65 net.(*TCPConn).Read(0xc00010ab70?, {0xc00010ab81?, 0x3450472f58332f59?, 0x636f422b44786847?}) <autogenerated>:1 +0x25 fp=0xc00019bf80 sp=0xc00019bf50 pc=0x5bb7384bd445 net/http.(*connReader).backgroundRead(0xc00010ab70) net/http/server.go:681 +0x37 fp=0xc00019bfc8 sp=0xc00019bf80 pc=0x5bb7385cc1d7 net/http.(*connReader).startBackgroundRead.gowrap2() net/http/server.go:677 +0x25 fp=0xc00019bfe0 sp=0xc00019bfc8 pc=0x5bb7385cc105 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc00019bfe8 sp=0xc00019bfe0 pc=0x5bb7383edde1 created by net/http.(*connReader).startBackgroundRead in goroutine 22 net/http/server.go:677 +0xba rax 0x204803fbc rbx 0x7adfd17adce0 rcx 0xfef rdx 0x7adfd10edb90 rdi 0x7adfd10edba0 rsi 0x0 rbp 0x7adffa7ddeb0 rsp 0x7adffa7dde90 r8 0x1 r9 0x7adfd16203b8 r10 0x0 r11 0x246 r12 0x7ade6000ccc0 r13 0x7adfd10edba0 r14 0x0 r15 0x7ae0b4ef57d0 rip 0x7ae06884d1d7 rflags 0x10297 cs 0x33 fs 0x0 gs 0x0 SIGABRT: abort PC=0x7ae04269eb1c m=4 sigcode=18446744073709551610 signal arrived during cgo execution goroutine 7 gp=0xc000156000 m=4 mp=0xc00004d808 [syscall]: runtime.cgocall(0x5bb738602e90, 0xc000056b60) runtime/cgocall.go:157 +0x4b fp=0xc000056b38 sp=0xc000056b00 pc=0x5bb7383853cb github.com/ollama/ollama/llama._Cfunc_llama_decode(0x7adfec006460, {0x1, 0x7adfec3acb70, 0x0, 0x0, 0x7adfec3ad380, 0x7adfec3adb90, 0x7adfec17b380, 0x7adfd10c2910, 0x0, ...}) _cgo_gotypes.go:543 +0x52 fp=0xc000056b60 sp=0xc000056b38 pc=0x5bb738482952 
github.com/ollama/ollama/llama.(*Context).Decode.func1(0x5bb7385fed4b?, 0x7adfec006460?) github.com/ollama/ollama/llama/llama.go:167 +0xd8 fp=0xc000056c80 sp=0xc000056b60 pc=0x5bb738484e78 github.com/ollama/ollama/llama.(*Context).Decode(0xc000056d68?, 0x1?) github.com/ollama/ollama/llama/llama.go:167 +0x17 fp=0xc000056cc8 sp=0xc000056c80 pc=0x5bb738484cd7 main.(*Server).processBatch(0xc000128120, 0xc000126150, 0xc0001261c0) github.com/ollama/ollama/llama/runner/runner.go:424 +0x29e fp=0xc000056ed0 sp=0xc000056cc8 pc=0x5bb7385fdd7e main.(*Server).run(0xc000128120, {0x5bb73893ca40, 0xc00007c050}) github.com/ollama/ollama/llama/runner/runner.go:338 +0x1a5 fp=0xc000056fb8 sp=0xc000056ed0 pc=0x5bb7385fd765 main.main.gowrap2() github.com/ollama/ollama/llama/runner/runner.go:901 +0x28 fp=0xc000056fe0 sp=0xc000056fb8 pc=0x5bb738601ec8 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc000056fe8 sp=0xc000056fe0 pc=0x5bb7383edde1 created by main.main in goroutine 1 github.com/ollama/ollama/llama/runner/runner.go:901 +0xc2b goroutine 1 gp=0xc0000061c0 m=nil [IO wait, 520 minutes]: runtime.gopark(0xc000038a08?, 0xc00014b908?, 0xb1?, 0x7a?, 0x2000?) runtime/proc.go:402 +0xce fp=0xc00014b888 sp=0xc00014b868 pc=0x5bb7383bc00e runtime.netpollblock(0xc00014b920?, 0x38384b26?, 0xb7?) runtime/netpoll.go:573 +0xf7 fp=0xc00014b8c0 sp=0xc00014b888 pc=0x5bb7383b4257 internal/poll.runtime_pollWait(0x7ae067dc7fe0, 0x72) runtime/netpoll.go:345 +0x85 fp=0xc00014b8e0 sp=0xc00014b8c0 pc=0x5bb7383e8aa5 internal/poll.(*pollDesc).wait(0x3?, 0x7c?, 0x0) internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00014b908 sp=0xc00014b8e0 pc=0x5bb7384389c7 internal/poll.(*pollDesc).waitRead(...) 
internal/poll/fd_poll_runtime.go:89 internal/poll.(*FD).Accept(0xc000150080) internal/poll/fd_unix.go:611 +0x2ac fp=0xc00014b9b0 sp=0xc00014b908 pc=0x5bb738439e8c net.(*netFD).accept(0xc000150080) net/fd_unix.go:172 +0x29 fp=0xc00014ba68 sp=0xc00014b9b0 pc=0x5bb7384a88a9 net.(*TCPListener).accept(0xc0000321e0) net/tcpsock_posix.go:159 +0x1e fp=0xc00014ba90 sp=0xc00014ba68 pc=0x5bb7384b95de net.(*TCPListener).Accept(0xc0000321e0) net/tcpsock.go:327 +0x30 fp=0xc00014bac0 sp=0xc00014ba90 pc=0x5bb7384b8930 net/http.(*onceCloseListener).Accept(0xc000190090?) <autogenerated>:1 +0x24 fp=0xc00014bad8 sp=0xc00014bac0 pc=0x5bb7385dfa44 net/http.(*Server).Serve(0xc000168000, {0x5bb73893c400, 0xc0000321e0}) net/http/server.go:3260 +0x33e fp=0xc00014bc08 sp=0xc00014bad8 pc=0x5bb7385d685e main.main() github.com/ollama/ollama/llama/runner/runner.go:921 +0xfcc fp=0xc00014bf50 sp=0xc00014bc08 pc=0x5bb738601c4c runtime.main() runtime/proc.go:271 +0x29d fp=0xc00014bfe0 sp=0xc00014bf50 pc=0x5bb7383bbbdd runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc00014bfe8 sp=0xc00014bfe0 pc=0x5bb7383edde1 goroutine 2 gp=0xc000006c40 m=nil [force gc (idle), 3 minutes]: runtime.gopark(0x1dd19be52e23?, 0x0?, 0x0?, 0x0?, 0x0?) runtime/proc.go:402 +0xce fp=0xc000046fa8 sp=0xc000046f88 pc=0x5bb7383bc00e runtime.goparkunlock(...) runtime/proc.go:408 runtime.forcegchelper() runtime/proc.go:326 +0xb8 fp=0xc000046fe0 sp=0xc000046fa8 pc=0x5bb7383bbe98 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc000046fe8 sp=0xc000046fe0 pc=0x5bb7383edde1 created by runtime.init.6 in goroutine 1 runtime/proc.go:314 +0x1a goroutine 3 gp=0xc000007180 m=nil [GC sweep wait]: runtime.gopark(0x5bb738b09e01?, 0x5bb738b09e40?, 0xc?, 0x9?, 0x1?) runtime/proc.go:402 +0xce fp=0xc000047780 sp=0xc000047760 pc=0x5bb7383bc00e runtime.goparkunlock(...) 
runtime/proc.go:408 runtime.bgsweep(0xc00006e000) runtime/mgcsweep.go:318 +0xdf fp=0xc0000477c8 sp=0xc000047780 pc=0x5bb7383a6b9f runtime.gcenable.gowrap1() runtime/mgc.go:203 +0x25 fp=0xc0000477e0 sp=0xc0000477c8 pc=0x5bb73839b685 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc0000477e8 sp=0xc0000477e0 pc=0x5bb7383edde1 created by runtime.gcenable in goroutine 1 runtime/mgc.go:203 +0x66 goroutine 4 gp=0xc000007340 m=nil [GC scavenge wait]: runtime.gopark(0x10000?, 0x166b9ea?, 0x0?, 0x0?, 0x0?) runtime/proc.go:402 +0xce fp=0xc000047f78 sp=0xc000047f58 pc=0x5bb7383bc00e runtime.goparkunlock(...) runtime/proc.go:408 runtime.(*scavengerState).park(0x5bb738b0a4c0) runtime/mgcscavenge.go:425 +0x49 fp=0xc000047fa8 sp=0xc000047f78 pc=0x5bb7383a4549 runtime.bgscavenge(0xc00006e000) runtime/mgcscavenge.go:658 +0x59 fp=0xc000047fc8 sp=0xc000047fa8 pc=0x5bb7383a4af9 runtime.gcenable.gowrap2() runtime/mgc.go:204 +0x25 fp=0xc000047fe0 sp=0xc000047fc8 pc=0x5bb73839b625 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc000047fe8 sp=0xc000047fe0 pc=0x5bb7383edde1 created by runtime.gcenable in goroutine 1 runtime/mgc.go:204 +0xa5 goroutine 5 gp=0xc000007c00 m=nil [finalizer wait, 3 minutes]: runtime.gopark(0x0?, 0x5bb7389381a0?, 0x0?, 0x60?, 0x1000000010?) runtime/proc.go:402 +0xce fp=0xc000046620 sp=0xc000046600 pc=0x5bb7383bc00e runtime.runfinq() runtime/mfinal.go:194 +0x107 fp=0xc0000467e0 sp=0xc000046620 pc=0x5bb73839a6c7 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc0000467e8 sp=0xc0000467e0 pc=0x5bb7383edde1 created by runtime.createfing in goroutine 1 runtime/mfinal.go:164 +0x3d goroutine 22 gp=0xc000196000 m=nil [select]: runtime.gopark(0xc000147a80?, 0x2?, 0x18?, 0x77?, 0xc000147824?) 
runtime/proc.go:402 +0xce fp=0xc000147698 sp=0xc000147678 pc=0x5bb7383bc00e runtime.selectgo(0xc000147a80, 0xc000147820, 0xc00037de00?, 0x0, 0x2?, 0x1) runtime/select.go:327 +0x725 fp=0xc0001477b8 sp=0xc000147698 pc=0x5bb7383cd3e5 main.(*Server).completion(0xc000128120, {0x5bb73893c5b0, 0xc0000e22a0}, 0xc0000c0360) github.com/ollama/ollama/llama/runner/runner.go:652 +0x8fe fp=0xc000147ab8 sp=0xc0001477b8 pc=0x5bb7385ff6de main.(*Server).completion-fm({0x5bb73893c5b0?, 0xc0000e22a0?}, 0x5bb7385dab8d?) <autogenerated>:1 +0x36 fp=0xc000147ae8 sp=0xc000147ab8 pc=0x5bb7386026b6 net/http.HandlerFunc.ServeHTTP(0xc00010cb60?, {0x5bb73893c5b0?, 0xc0000e22a0?}, 0x10?) net/http/server.go:2171 +0x29 fp=0xc000147b10 sp=0xc000147ae8 pc=0x5bb7385d3629 net/http.(*ServeMux).ServeHTTP(0x5bb73838ef85?, {0x5bb73893c5b0, 0xc0000e22a0}, 0xc0000c0360) net/http/server.go:2688 +0x1ad fp=0xc000147b60 sp=0xc000147b10 pc=0x5bb7385d54ad net/http.serverHandler.ServeHTTP({0x5bb73893b900?}, {0x5bb73893c5b0?, 0xc0000e22a0?}, 0x6?) net/http/server.go:3142 +0x8e fp=0xc000147b90 sp=0xc000147b60 pc=0x5bb7385d64ce net/http.(*conn).serve(0xc000190090, {0x5bb73893ca08, 0xc00010adb0}) net/http/server.go:2044 +0x5e8 fp=0xc000147fb8 sp=0xc000147b90 pc=0x5bb7385d2268 net/http.(*Server).Serve.gowrap3() net/http/server.go:3290 +0x28 fp=0xc000147fe0 sp=0xc000147fb8 pc=0x5bb7385d6c48 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc000147fe8 sp=0xc000147fe0 pc=0x5bb7383edde1 created by net/http.(*Server).Serve in goroutine 1 net/http/server.go:3290 +0x4b4 goroutine 21 gp=0xc000082a80 m=nil [GC worker (idle), 4 minutes]: runtime.gopark(0x1db5aaee6275?, 0x3?, 0x58?, 0xf?, 0x0?) 
runtime/proc.go:402 +0xce fp=0xc0000cdf50 sp=0xc0000cdf30 pc=0x5bb7383bc00e runtime.gcBgMarkWorker() runtime/mgc.go:1310 +0xe5 fp=0xc0000cdfe0 sp=0xc0000cdf50 pc=0x5bb73839d585 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc0000cdfe8 sp=0xc0000cdfe0 pc=0x5bb7383edde1 created by runtime.gcBgMarkStartWorkers in goroutine 18 runtime/mgc.go:1234 +0x1c goroutine 41 gp=0xc000082fc0 m=nil [GC worker (idle), 3 minutes]: runtime.gopark(0x1db5aaee606c?, 0x3?, 0xc?, 0xfe?, 0x0?) runtime/proc.go:402 +0xce fp=0xc0000cf750 sp=0xc0000cf730 pc=0x5bb7383bc00e runtime.gcBgMarkWorker() runtime/mgc.go:1310 +0xe5 fp=0xc0000cf7e0 sp=0xc0000cf750 pc=0x5bb73839d585 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc0000cf7e8 sp=0xc0000cf7e0 pc=0x5bb7383edde1 created by runtime.gcBgMarkStartWorkers in goroutine 18 runtime/mgc.go:1234 +0x1c goroutine 50 gp=0xc0005e2000 m=nil [GC worker (idle), 3 minutes]: runtime.gopark(0x1dd19bf2c88b?, 0x3?, 0xbf?, 0x40?, 0x0?) runtime/proc.go:402 +0xce fp=0xc0000c8750 sp=0xc0000c8730 pc=0x5bb7383bc00e runtime.gcBgMarkWorker() runtime/mgc.go:1310 +0xe5 fp=0xc0000c87e0 sp=0xc0000c8750 pc=0x5bb73839d585 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc0000c87e8 sp=0xc0000c87e0 pc=0x5bb7383edde1 created by runtime.gcBgMarkStartWorkers in goroutine 18 runtime/mgc.go:1234 +0x1c goroutine 42 gp=0xc000083180 m=nil [GC worker (idle), 66 minutes]: runtime.gopark(0x1a5228359582?, 0x3?, 0x1c?, 0x4?, 0x0?) runtime/proc.go:402 +0xce fp=0xc0000cff50 sp=0xc0000cff30 pc=0x5bb7383bc00e runtime.gcBgMarkWorker() runtime/mgc.go:1310 +0xe5 fp=0xc0000cffe0 sp=0xc0000cff50 pc=0x5bb73839d585 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc0000cffe8 sp=0xc0000cffe0 pc=0x5bb7383edde1 created by runtime.gcBgMarkStartWorkers in goroutine 18 runtime/mgc.go:1234 +0x1c goroutine 11112 gp=0xc000156a80 m=nil [IO wait, 4 minutes]: runtime.gopark(0x10?, 0x10?, 0xf0?, 0xbd?, 0xb?) 
runtime/proc.go:402 +0xce fp=0xc00019bda8 sp=0xc00019bd88 pc=0x5bb7383bc00e runtime.netpollblock(0x5bb738422558?, 0x38384b26?, 0xb7?) runtime/netpoll.go:573 +0xf7 fp=0xc00019bde0 sp=0xc00019bda8 pc=0x5bb7383b4257 internal/poll.runtime_pollWait(0x7ae067dc7ee8, 0x72) runtime/netpoll.go:345 +0x85 fp=0xc00019be00 sp=0xc00019bde0 pc=0x5bb7383e8aa5 internal/poll.(*pollDesc).wait(0xc000164a00?, 0xc00010ab81?, 0x0) internal/poll/fd_poll_runtime.go:84 +0x27 fp=0xc00019be28 sp=0xc00019be00 pc=0x5bb7384389c7 internal/poll.(*pollDesc).waitRead(...) internal/poll/fd_poll_runtime.go:89 internal/poll.(*FD).Read(0xc000164a00, {0xc00010ab81, 0x1, 0x1}) internal/poll/fd_unix.go:164 +0x27a fp=0xc00019bec0 sp=0xc00019be28 pc=0x5bb73843951a net.(*netFD).Read(0xc000164a00, {0xc00010ab81?, 0xc00019bf48?, 0x5bb7383ea6d0?}) net/fd_posix.go:55 +0x25 fp=0xc00019bf08 sp=0xc00019bec0 pc=0x5bb7384a77a5 net.(*conn).Read(0xc00004a000, {0xc00010ab81?, 0x385041544f792f41?, 0xc00010ab78?}) net/net.go:185 +0x45 fp=0xc00019bf50 sp=0xc00019bf08 pc=0x5bb7384b1a65 net.(*TCPConn).Read(0xc00010ab70?, {0xc00010ab81?, 0x3450472f58332f59?, 0x636f422b44786847?}) <autogenerated>:1 +0x25 fp=0xc00019bf80 sp=0xc00019bf50 pc=0x5bb7384bd445 net/http.(*connReader).backgroundRead(0xc00010ab70) net/http/server.go:681 +0x37 fp=0xc00019bfc8 sp=0xc00019bf80 pc=0x5bb7385cc1d7 net/http.(*connReader).startBackgroundRead.gowrap2() net/http/server.go:677 +0x25 fp=0xc00019bfe0 sp=0xc00019bfc8 pc=0x5bb7385cc105 runtime.goexit({}) runtime/asm_amd64.s:1695 +0x1 fp=0xc00019bfe8 sp=0xc00019bfe0 pc=0x5bb7383edde1 created by net/http.(*connReader).startBackgroundRead in goroutine 22 net/http/server.go:677 +0xba rax 0x0 rbx 0xa6ba rcx 0x7ae04269eb1c rdx 0x6 rdi 0xa6b7 rsi 0xa6ba rbp 0x7adffa7de010 rsp 0x7adffa7ddfd0 r8 0x0 r9 0x0 r10 0x8 r11 0x246 r12 0x6 r13 0xfcc r14 0x16 r15 0x0 rip 0x7ae04269eb1c rflags 0x246 cs 0x33 fs 0x0 gs 0x0 [GIN] 2024/11/19 - 07:09:34 | 500 | 3m37s | 127.0.0.1 | POST "/api/generate" ``` ### OS Linux ### GPU 
Nvidia ### CPU AMD ### Ollama version 0.4.1
bug
low
Critical
2,673,858,191
flutter
[DisplayList] The ImageFilter inset/outset_bounds methods are not migrated to Impeller classes
While implementing https://github.com/flutter/engine/pull/56720, the unit tests for the ImageFilter bounds methods were having trouble passing for perspective transforms. In order to get that PR landed, it reverted two of the methods back to using Skia geometry classes for their implementation. We need to find the problem that was preventing the Impeller classes from correctly doing the work and finish the migration...
engine,P2,team-engine,triaged-engine
low
Minor
2,673,861,470
deno
Add support for wildcards in `deno task`
It would be great if one could do `deno task build:*` to run all tasks that start with `build:`. Eg: ```jsonc { "tasks": { "build:frontend": "...", "build:server": "...", "build:queue": "...", "serve": "..." } } ``` running `deno task build:*` would match `build:frontend`, `build:server` and `build:queue` and run them all in parallel. Somewhat related to https://github.com/denoland/deno/issues/26462
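The matching being requested is plain shell-style globbing over task names. A minimal sketch of the selection step (Python, purely illustrative — Deno's task runner is not Python, and the `match_tasks` helper is invented here):

```python
import fnmatch

def match_tasks(pattern, tasks):
    """Return task names matching a shell-style wildcard, in definition order."""
    return [name for name in tasks if fnmatch.fnmatchcase(name, pattern)]

# The tasks from the example deno.jsonc above.
tasks = {
    "build:frontend": "...",
    "build:server": "...",
    "build:queue": "...",
    "serve": "...",
}

print(match_tasks("build:*", tasks))
# ['build:frontend', 'build:server', 'build:queue']
```

Each matched task would then be spawned concurrently; an exact name like `serve` still matches only itself, so the wildcard form is backward compatible.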
feat,task runner
low
Minor
2,673,947,071
pytorch
[triton 3.2] test_convolution_as_mm failure on A100
### 🐛 Describe the bug **Update**: Tracked in triton in https://github.com/triton-lang/triton/issues/5204. As a workaround, a PR was reverted in 3.2 (but not the main triton branch). We also filed an nvbug. This is uncovered during the triton 3.2 pin update. @embg and I are looking into this, but I'm filing an issue to provide additional context. The issue can be reproed without the inductor test with this script: ```python import torch import triton import triton.language as tl from triton.compiler.compiler import AttrsDescriptor from torch._inductor.runtime import triton_helpers, triton_heuristics from torch._inductor.runtime.triton_helpers import libdevice, math as tl_math from torch._inductor.runtime.hints import AutotuneHint, ReductionHint, TileHint, DeviceProperties ''' @triton_heuristics.template( num_stages=5, num_warps=4, triton_meta={'signature': {'in_ptr0': '*fp32', 'arg_A': '*fp32', 'arg_B': '*fp32', 'out_ptr0': '*fp32'}, 'device': DeviceProperties(type='cuda', index=0, cc=80, major=8, regs_per_multiprocessor=65536, max_threads_per_multi_processor=2048, multi_processor_count=108, warp_size=32), 'constants': {}, 'configs': [AttrsDescriptor(divisible_by_16=(0, 1, 2, 3), equal_to_1=())]}, inductor_meta={'kernel_name': 'Placeholder.DESCRIPTIVE_NAME', 'backend_hash': '5C8FD749058C1A642DD2829E30FB5522FD8965A7E9F6B2F8F8AB489A76569BB4', 'are_deterministic_algorithms_enabled': False, 'assert_indirect_indexing': True, 'autotune_local_cache': True, 'autotune_pointwise': True, 'autotune_remote_cache': None, 'force_disable_caches': False, 'dynamic_scale_rblock': True, 'max_autotune': True, 'max_autotune_pointwise': False, 'min_split_scan_rblock': 256, 'spill_threshold': 16, 'store_cubin': False}, ) ''' @triton.jit def triton_mm(in_ptr0, arg_A, arg_B, out_ptr0, bias_ptr): GROUP_M : tl.constexpr = 8 EVEN_K : tl.constexpr = False ALLOW_TF32 : tl.constexpr = False # ALLOW_TF32 : tl.constexpr = True ACC_TYPE : tl.constexpr = tl.float32 B_PROLOGUE_CAST_TYPE : tl.constexpr 
= None BLOCK_M : tl.constexpr = 64 BLOCK_N : tl.constexpr = 64 BLOCK_K : tl.constexpr = 64 A = arg_A B = arg_B M = 512 N = 34 K = 33 if M * N == 0: # early exit due to zero-size input(s) return stride_am = 33 stride_ak = 1 stride_bk = 1 stride_bn = 33 # based on triton.ops.matmul pid = tl.program_id(0) grid_m = (M + BLOCK_M - 1) // BLOCK_M grid_n = (N + BLOCK_N - 1) // BLOCK_N # re-order program ID for better L2 performance width = GROUP_M * grid_n group_id = pid // width group_size = min(grid_m - group_id * GROUP_M, GROUP_M) pid_m = group_id * GROUP_M + (pid % group_size) pid_n = (pid % width) // (group_size) rm = pid_m * BLOCK_M + tl.arange(0, BLOCK_M) rn = pid_n * BLOCK_N + tl.arange(0, BLOCK_N) if (stride_am == 1 and stride_ak == M) or (stride_am == K and stride_ak == 1): ram = tl.max_contiguous(tl.multiple_of(rm % M, BLOCK_M), BLOCK_M) else: ram = rm % M if (stride_bk == 1 and stride_bn == K) or (stride_bk == N and stride_bn == 1): rbn = tl.max_contiguous(tl.multiple_of(rn % N, BLOCK_N), BLOCK_N) else: rbn = rn % N rk = tl.arange(0, BLOCK_K) A = A + (ram[:, None] * stride_am + rk[None, :] * stride_ak) B = B + (rk[:, None] * stride_bk + rbn[None, :] * stride_bn) acc = tl.zeros((BLOCK_M, BLOCK_N), dtype=ACC_TYPE) for k in range(K, 0, -BLOCK_K): if EVEN_K: a = tl.load(A) b = tl.load(B) else: a = tl.load(A, mask=rk[None, :] < k, other=0.) b = tl.load(B, mask=rk[:, None] < k, other=0.) 
if B_PROLOGUE_CAST_TYPE is not None: b = b.to(B_PROLOGUE_CAST_TYPE) acc += tl.dot(a, b, allow_tf32=ALLOW_TF32) A += BLOCK_K * stride_ak B += BLOCK_K * stride_bk # rematerialize rm and rn to save registers rm = pid_m * BLOCK_M + tl.arange(0, BLOCK_M) rn = pid_n * BLOCK_N + tl.arange(0, BLOCK_N) idx_m = rm[:, None] idx_n = rn[None, :] mask = (idx_m < M) & (idx_n < N) # inductor generates a suffix xindex = idx_n + (34*idx_m) tmp0 = tl.load(in_ptr0 + (tl.broadcast_to(idx_n, acc.shape)), mask, eviction_policy='evict_last') # tl.store(bias_ptr + tl.broadcast_to(xindex, acc.shape), tmp0, mask) tmp1 = acc + tmp0 tl.store(out_ptr0 + (tl.broadcast_to(xindex, acc.shape)), tmp1, mask) tl.store(bias_ptr + tl.broadcast_to(xindex, acc.shape), tmp0, mask) BLOCK_M : tl.constexpr = 64 BLOCK_N : tl.constexpr = 64 BLOCK_K : tl.constexpr = 64 dtype = torch.float32 A = torch.ones(512, 33, device="cuda", dtype=dtype) B = torch.ones(33, 34, device="cuda", dtype=dtype) bias = torch.ones(34, device="cuda", dtype=dtype) out = torch.empty(512, 34, device="cuda", dtype=dtype) bias_out = torch.empty(512, 34, device="cuda", dtype=dtype) # ret = triton_mm[(512 // 64,)](bias, A, B, out, bias_out, debug=True) ret = triton_mm[(512 // 64,)](bias, A, B, out, bias_out) ''' from triton.tools.disasm import get_sass sass = get_sass(ret.asm['cubin']) with open("sass.sass", "w") as f: f.write(sass) ''' expect = A @ B + bias # breakpoint() print((expect-out).abs().max()) assert (expect-out).abs().max().item() < 0.01 # v = ((expect-out).abs() > 0.01).to(torch.int32).tolist() # print("\n".join([str(x) for x in v])) ``` Bisect finds https://github.com/triton-lang/triton/pull/4582 and although this blame seems reliable, it's unclear how the PR causes the issue ### Versions A100, triton main, pytorch viable/strict cc @bertmaher @int3 @nmacchioni @chenyang78 @embg @peterbell10 @aakhundov
triaged,upstream triton
low
Critical
2,673,966,507
pytorch
ARM build failed with recent XNNPACK update: third_party/XNNPACK/src/reference/unary-elementwise.cc:125:14: error: invalid ‘static_cast’ from type ‘xnn_bfloat16’ to type ‘_Float16’
### 🐛 Describe the bug After the XNNPACK update of https://github.com/pytorch/pytorch/pull/139913, our nightly ARM build fails (x86 build still works) build log ``` [3174/5315] Building CXX object confu-deps/XNNPACK/CMakeFiles/reference-ukernels.dir/src/reference/unary-elementwise.cc.o FAILED: confu-deps/XNNPACK/CMakeFiles/reference-ukernels.dir/src/reference/unary-elementwise.cc.o /usr/bin/c++ -DCAFFE2_PERF_WITH_SVE=1 -DXNN_ENABLE_ARM_BF16=0 -DXNN_ENABLE_ARM_DOTPROD=1 -DXNN_ENABLE_ARM_FP16_SCALAR=1 -DXNN_ENABLE_ARM_FP16_VECTOR=1 -DXNN_ENABLE_ARM_I8MM=0 -DXNN_ENABLE_ARM_SME2=1 -DXNN_ENABLE_ARM_SME=1 -DXNN_ENABLE_ASSEMBLY=1 -DXNN_ENABLE_AVX256SKX=1 -DXNN_ENABLE_AVX256VNNI=1 -DXNN_ENABLE_AVX256VNNIGFNI=1 -DXNN_ENABLE_AVX512AMX=1 -DXNN_ENABLE_AVX512F=1 -DXNN_ENABLE_AVX512FP16=1 -DXNN_ENABLE_AVX512SKX=1 -DXNN_ENABLE_AVX512VBMI=1 -DXNN_ENABLE_AVX512VNNI=1 -DXNN_ENABLE_AVX512VNNIGFNI=1 -DXNN_ENABLE_AVXVNNI=0 -DXNN_ENABLE_AVXVNNIINT8=0 -DXNN_ENABLE_CPUINFO=1 -DXNN_ENABLE_DWCONV_MULTIPASS=0 -DXNN_ENABLE_GEMM_M_SPECIALIZATION=1 -DXNN_ENABLE_HVX=1 -DXNN_ENABLE_KLEIDIAI=0 -DXNN_ENABLE_MEMOPT=1 -DXNN_ENABLE_RISCV_VECTOR=1 -DXNN_ENABLE_SPARSE=1 -DXNN_ENABLE_VSX=1 -I/opt/pytorch/pytorch/third_party/XNNPACK/include -I/opt/pytorch/pytorch/third_party/XNNPACK/src -I/opt/pytorch/pytorch/third_party/pthreadpool/include -isystem /opt/pytorch/pytorch/third_party/protobuf/src -DNO_CUDNN_DESTROY_HANDLE -fno-gnu-unique -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -O3 -DNDEBUG -std=c++14 -fPIC -Wno-psabi -O2 -pthread -MD -MT confu-deps/XNNPACK/CMakeFiles/reference-ukernels.dir/src/reference/unary-elementwise.cc.o -MF confu-deps/XNNPACK/CMakeFiles/reference-ukernels.dir/src/reference/unary-elementwise.cc.o.d -o confu-deps/XNNPACK/CMakeFiles/reference-ukernels.dir/src/reference/unary-elementwise.cc.o -c /opt/pytorch/pytorch/third_party/XNNPACK/src/reference/unary-elementwise.cc /opt/pytorch/pytorch/third_party/XNNPACK/src/reference/unary-elementwise.cc: In 
instantiation of ‘TOut {anonymous}::ConvertOp<TIn, TOut>::operator()(TIn) const [with TIn = xnn_bfloat16; TOut = _Float16]’: /opt/pytorch/pytorch/third_party/XNNPACK/src/reference/unary-elementwise.cc:39:32: required from ‘void {anonymous}::unary_ukernel_unquantized(size_t, const TIn*, TOut*, const xnn_unary_uparams*) [with TIn = xnn_bfloat16; TOut = _Float16; Operator = ConvertOp<xnn_bfloat16, _Float16>; size_t = long unsigned int]’ /opt/pytorch/pytorch/third_party/XNNPACK/src/reference/unary-elementwise.cc:167:7: required from ‘const xnn_unary_elementwise_config* {anonymous}::get_convert_config(std::false_type, std::false_type) [with TIn = xnn_bfloat16; TOut = _Float16; std::false_type = std::integral_constant<bool, false>]’ /opt/pytorch/pytorch/third_party/XNNPACK/src/reference/unary-elementwise.cc:183:50: required from ‘const xnn_unary_elementwise_config* {anonymous}::get_convert_config(xnn_datatype, InputQuantized) [with TIn = xnn_bfloat16; InputQuantized = std::integral_constant<bool, false>]’ /opt/pytorch/pytorch/third_party/XNNPACK/src/reference/unary-elementwise.cc:217:46: required from here /opt/pytorch/pytorch/third_party/XNNPACK/src/reference/unary-elementwise.cc:125:14: error: invalid ‘static_cast’ from type ‘xnn_bfloat16’ to type ‘_Float16’ 125 | return static_cast<TOut>(x); | ^~~~~~~~~~~~~~~~~~~~ ``` ### Versions environment: - ubuntu 24.04 - gcc 13.2.0 - python 3.12 - pytorch https://github.com/pytorch/pytorch/commit/cca34be584467a622a984a0421d886fb26f7dda7 CPU is neoverse-v2, or Nvidia Grace cc @malfet @seemethere @snadampal @milpuz01 @ptrblck @nWEIdia @tinglvv @mcr229 @huydhn @digantdesai @atalman
module: build,triaged,module: regression,module: xnnpack,module: arm
low
Critical
2,673,968,902
rust
Add an equivalent to -grecord-command-line/-grecord-gcc-switches
Other widely used compilers (gcc, clang, etc) have the ability to record their command line arguments in the debug information (for DWARF in the DW_AT_producer field). Rustc currently hardcodes this to "" when calling LLVMRustDIBuilderCreateCompileUnit. @rustbot label +A-debuginfo +WG-debugging
A-debuginfo,T-compiler,C-feature-request,WG-debugging,A-CLI
low
Critical
2,673,987,221
ollama
Support for LLaVA-o1
First version of LLaVA-o1 model weights were released a few days back - [LLaVA-o1](https://huggingface.co/Xkev/Llama-3.2V-11B-cot). Would be good to have this. Thanks!
model request
low
Major
2,674,037,191
opencv
imshow issue in qt6
### System Information Ubuntu 22.04.4 LTS x64 qt 6.7.2 qt widgets application Qt Creator 14.0.1 Based on Qt 6.7.2 (GCC 10.3.1 20210422 (Red Hat 10.3.1-1), x86_64) ### Detailed description When I use cv::imshow("Original Image", cvImg); in a Qt 6 widgets app, the app builds OK and I can see the thread running in the monitor, but no window is shown! When I comment out the imshow call, the window shows correctly! ### Steps to reproduce //qt 6 qt widgets application cv::Mat cvImg = cv::imread("/home/devgis/Pictures/gyy.jpeg");// load the image with OpenCV cv::imshow("Original Image", cvImg); ### Issue submission checklist - [X] I report the issue, it's not a question - [X] I checked the problem with documentation, FAQ, open issues, forum.opencv.org, Stack Overflow, etc and have not found any solution - [X] I updated to the latest OpenCV version and the issue is still there - [ ] There is reproducer code and related data files (videos, images, onnx, etc)
question (invalid tracker)
low
Minor
2,674,052,178
flutter
Proposal to add `SliverAnimatedList.separated`
### Use case Since #48226 got done, why wasn't it added to SliverAnimatedList too? ### Proposal Implement the same separated behavior for SliverAnimatedList as well.
c: new feature,framework,a: animation,f: scrolling,c: proposal,P3,team-framework,triaged-framework
low
Minor
2,674,066,391
go
x/telemetry/internal/mmap: TestSharedMemory failures
``` #!watchflakes default <- pkg == "golang.org/x/telemetry/internal/mmap" && test == "TestSharedMemory" ``` Issue created automatically to collect these failures. Example ([log](https://ci.chromium.org/b/8732010246569277185)): === RUN TestSharedMemory mmap_test.go:113: incremented 99 times, want 100 --- FAIL: TestSharedMemory (0.76s) — [watchflakes](https://go.dev/wiki/Watchflakes)
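The failure mode ("incremented 99 times, want 100") is the classic lost update seen when two writers perform a non-atomic read-modify-write on the same shared counter. A deterministic Python simulation of one bad interleaving (names invented; this only illustrates the race pattern, it does not reproduce the mmap test itself):

```python
def interleaved_increments(schedule):
    """Replay a fixed interleaving of read-modify-write steps on one counter.

    Each writer is a generator that reads the counter, yields (simulating a
    context switch), then writes back read+1 -- the unsynchronized pattern.
    """
    counter = [0]

    def writer():
        local = counter[0]          # read
        yield                       # lose the CPU here
        counter[0] = local + 1      # write back the (possibly stale) value

    writers = {}
    for wid in schedule:
        if wid not in writers:
            writers[wid] = writer()
            next(writers[wid])      # run up to the yield (the read)
        else:
            try:
                next(writers[wid])  # resume: the write
            except StopIteration:
                pass
    return counter[0]

# Both writers read 0 before either writes back: one increment is lost.
print(interleaved_increments(["a", "b", "a", "b"]))  # -> 1, not 2
```

With a serialized schedule (`["a", "a", "b", "b"]`) the same code yields 2, which is why the test only fails intermittently.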
NeedsInvestigation,telemetry
low
Critical
2,674,066,426
go
x/telemetry/internal/mmap: TestMultipleMaps failures
``` #!watchflakes default <- pkg == "golang.org/x/telemetry/internal/mmap" && test == "TestMultipleMaps" ``` Issue created automatically to collect these failures. Example ([log](https://ci.chromium.org/b/8731892843629401745)): === RUN TestMultipleMaps mmap_test.go:170: counter 0 has value 296, want 300 mmap_test.go:170: counter 1 has value 296, want 300 mmap_test.go:170: counter 2 has value 296, want 300 mmap_test.go:176: counter 0 has value 296, want 300 mmap_test.go:176: counter 1 has value 296, want 300 --- FAIL: TestMultipleMaps (0.00s) — [watchflakes](https://go.dev/wiki/Watchflakes)
NeedsInvestigation,telemetry
low
Critical
2,674,069,944
PowerToys
Maybe folder watching will be useful?
### Description of the new feature / enhancement Let's think about this situation: I have a preset to monitor the "Download" folder, as well as some rules regarding target file directories, target file names, or target directories. When I download some files to the "Download" folder, if an existing rule covers the file type, the files will be directly and automatically moved to a certain folder. For example, "homework1.docx" will be automatically moved to "/Homeworks/$today/homework1.docx". If the file type is not covered by the rules, or multiple rules are applicable, a quick selection window will pop up to ask where these files should be placed. We can have many rules to automate this process, such as name matching, extension matching, regular expressions, time, and so on. This should significantly improve the convenience and consistency of file management. ### Scenario when this would be used? Here are some scenarios where this kind of file management automation would be used: Office Scenarios In daily office work, various files are often downloaded from email attachments, internal company systems, or online cloud drives, such as work reports like "report_202411.docx" and project materials like "project_files.zip". By setting rules, for example, automatically moving all files starting with "report_" to a dedicated "Work Reports" folder (such as "/Office/Reports/today/project_files.zip"), office files can be organized in a more orderly way, making it convenient for subsequent searching and use. Financial staff may regularly download financial statements like "financial_statement_2024Q4.xlsx". By setting time-related rules (such as automatically moving the quarterly financial statements downloaded at the end of each month to the corresponding quarterly folder, like "/Finance/2024Q4/financial_statement_2024Q4.xlsx"), financial files can be accurately classified according to time and type, improving the efficiency of financial management.
Learning Scenarios Students download course homework like "homework_week3.docx" and study materials like "lecture_notes_week3.pdf" from online learning platforms. Using name matching rules, the homework can be automatically moved to the "Homework" folder ("/Studies/Homeworks/today/lecture_notes_week3.pdf"), facilitating organization and review. Researchers download experimental data like "experiment_data_202411.csv" and relevant literature like "research_paper_2024.pdf". Through rules such as extension matching and regular expressions, different types of data and literature can be respectively classified into appropriate folders. For example, data files can be moved to the "Experimental Data" folder ("/Research/ExperimentData/today/research_paper_2024.pdf"), which is helpful for the progress of research work. Personal Life Scenarios Photography enthusiasts often download photos like "photo_20241120.jpg" and videos like "video_20241120.mp4" from cameras or mobile phones. Rules can be set according to the shooting time, automatically moving the photos of each month to the corresponding monthly folder (such as "/Personal/Photos/202411/photo_20241120.jpg"), and the videos to the corresponding video folder ("/Personal/Videos/202411/video_20241120.mp4"), making it convenient to manage and recall the shooting works of different periods. Users who like to download music like "song_202411.mp3" and audio books like "audiobook_202411.ogg" can use name or extension matching rules to automatically move music files to the "Music" folder ("/Personal/Music/today/audiobook_202411.ogg"), making the storage of personal media files more organized. ### Supporting information _No response_
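The rule engine described above (name/extension matching plus a `$today` placeholder, with a picker when several rules apply) can be sketched in a few lines. Everything here — `route`, the rule tuples, the paths — is invented for illustration, not an actual PowerToys design:

```python
import fnmatch
from datetime import date

def route(filename, rules, today=None):
    """Return destination folders for every rule whose glob matches filename.

    0 matches -> ask the user; 1 -> move silently; >1 -> quick selection window.
    """
    today = today or date.today().isoformat()
    return [dest.replace("$today", today)
            for pattern, dest in rules
            if fnmatch.fnmatchcase(filename, pattern)]

rules = [
    ("report_*", "/Office/Reports/$today/"),
    ("*.docx", "/Homeworks/$today/"),
]

print(route("homework1.docx", rules, today="2024-11-20"))
# ['/Homeworks/2024-11-20/']

# "report_202411.docx" matches both rules -> the quick selection window case.
```

A real implementation would add regex rules and time-based conditions, but the resolve-then-disambiguate flow stays the same.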
Needs-Triage
low
Minor
2,674,077,820
vscode
Keyboard shortcut issue: duplicated "Notebook: Stop Editing Cell" bound to the **Esc** key
Does this issue occur when all extensions are disabled?: Yes/No - VS Code Version: 1.95.3 - OS Version: Windows_NT x64 10.0 19045 Steps to Reproduce: 1. Edit a cell in a notebook (jupyter or polyglot) 2. Type something to let the code completion assistant pop up 3. Press the **Esc** key to cancel the completion pop-up 4. Expected to stay focused on the currently edited cell, but focus was lost Self analysis: 1. There are three shortcuts named "Notebook: Stop Editing Cell". All of them are defined by "System". 2. Two of them are bound to **Esc** and **Ctrl + Alt + Enter**, and have the condition expression "inputFocus && notebookEditorFocused && !editorHasMultipleSelections && !editorHasSelection && !editorHoverVisible && !inlineChatFocused". 3. The remaining one is also bound to **Esc**, but has the condition expression "notebookEditorFocused && notebookOutputFocused". 4. After removing shortcut No. 3 above, the **Esc** key behaves as expected. Since these shortcuts are defined by "System", I suppose this may be a bug? Best regards, Bill
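Per the reporter's own analysis (point 4), a user-side workaround until this is resolved upstream would be to negate the third system binding in `keybindings.json`. This is an untested sketch and it assumes the command behind "Notebook: Stop Editing Cell" is `notebook.cell.quitEdit`; the leading `-` removes a built-in rule:

```jsonc
[
  {
    // remove only the duplicate Esc binding scoped to notebook output
    "key": "escape",
    "command": "-notebook.cell.quitEdit",
    "when": "notebookEditorFocused && notebookOutputFocused"
  }
]
```

The other Esc binding (the one gated on `inputFocus` and the editor-state clauses) is left intact, so "stop editing" still works when it should.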
bug,notebook
low
Critical
2,674,103,755
flutter
The methods in LayerStateStack should have doc comments
The methods in `LayerStateStack` are all well commented, but they lack the doc comment format; they should be upgraded to doc comments by using `///`.
engine,P2,team-engine,triaged-engine
low
Minor
2,674,116,214
godot
The order index of a new node in an Editable-Children-enabled scene is sometimes not saved in the `.tscn` file
### Tested versions - Reproducible in v4.4.dev4.official [36e6207bb] ### System information Windows 11 - Godot Engine v4.4.dev4.official.36e6207bb - Vulkan 1.3.280 - Forward+ - Using Device #0: NVIDIA - NVIDIA GeForce RTX 3060 Ti ### Issue description ![Image](https://github.com/user-attachments/assets/9d1d4a9e-1ffc-4d40-8c23-7cef7e38fdaa) ![Image](https://github.com/user-attachments/assets/a743926a-01ec-46d2-a019-dc743db85828) When I add a new node `Node2D_xxxx` between `Node2D3` and `Node2D4` and save, the order index is not saved in the `main.tscn` file, so when I re-open `main.tscn`, `Node2D_xxxx` comes after `Node2D4`. Meanwhile, `Node2D_yyyy`'s order is saved correctly. So I guess there is a bug causing this different behavior. ### Steps to reproduce 1. Open the MRP and open `main.tscn` 2. Drag `Node2D_xxxx` between `Node2D3` and `Node2D4` and save. 3. Re-open `main.tscn` and check the position of `Node2D_xxxx` ### Minimal reproduction project (MRP) [test_child_order.zip](https://github.com/user-attachments/files/17823918/test_child_order.zip)
bug,topic:editor
low
Critical
2,674,120,740
rust
Type inference chooses wrong trait impl based on generic parameter
<!-- Thank you for filing a bug report! 🐛 Please provide a short summary of the bug, along with any information you feel relevant to replicating the bug. --> I tried this code: ```rust fn foo<T1, T2, E>(a: T1) -> Result<T2, E> where T1: TryInto<usize>, T2: TryFrom<T1, Error = E>, { a.try_into() } fn bar<T1, T2, E>(a: T1) -> Result<T2, E> where T1: TryInto<usize>, T2: TryFrom<T1, Error = E>, { TryInto::<T2>::try_into(a) } ``` I expected to see this happen: Both functions should compile. Instead, this happened: Only `bar` compiles. `foo` produces the following compiler error: ``` error[E0308]: mismatched types --> src/main.rs:9:5 | 4 | fn foo<T1, T2, E>(a: T1) -> Result<T2, E> | -- ------------- expected `Result<T2, E>` because of return type | | | expected this type parameter ... 9 | a.try_into() | ^^^^^^^^^^^^ expected `Result<T2, E>`, found `Result<usize, ...>` | = note: expected enum `Result<T2, E>` found enum `Result<usize, <T1 as TryInto<usize>>::Error>` For more information about this error, try `rustc --explain E0308`. ``` I also tried compiling with the new trait solver (`rustc -Znext-solver src/main.rs`) and get the same error message. I am not 100% sure if that means that I am compiling it the wrong way or if the output is just identical. ### Meta <!-- If you're using the stable version of the compiler, you should also check if the bug also exists in the beta or nightly versions. --> `rustc --version --verbose`: ``` rustc 1.84.0-nightly (5ec7d6eee 2024-11-17) binary: rustc commit-hash: 5ec7d6eee7e0f5236ec1559499070eaf836bc608 commit-date: 2024-11-17 host: x86_64-unknown-linux-gnu release: 1.84.0-nightly LLVM version: 19.1.3 ```
A-trait-system,C-bug,T-types
low
Critical
2,674,127,029
pytorch
plus prefix-less wheels not being published on https://download.pytorch.org/whl/torch/ for linux x86_64
In the process of answering something that @charliermarsh was asking, I noticed that on https://download.pytorch.org/whl/torch/ we aren't publishing Linux x86_64 wheels that don't have a `+` prefix, which is awkward. We should publish the default wheels not just on PyPI's default servers, but also on this index page? cc: @malfet @seemethere cc @seemethere @malfet @osalpekar @atalman
module: binaries,triaged
low
Major
2,674,127,483
tensorflow
Aborted (core dumped) in `tf.raw_ops.MatrixInverse`
### Issue type Bug ### Have you reproduced the bug with TensorFlow Nightly? Yes ### Source source ### TensorFlow version tf 2.17 ### Custom code Yes ### OS platform and distribution Linux Ubuntu 22.04.3 LTS (x86_64) ### Mobile device _No response_ ### Python version 3.9.13 ### Bazel version _No response_ ### GCC/compiler version _No response_ ### CUDA/cuDNN version _No response_ ### GPU model and memory _No response_ ### Current behavior? When the shape of the input argument is empty and the gpu is available, tf.raw_ops.MatrixInverse triggers a crash. It can be reproduced on tf-nightly when the gpu is available. ### Standalone code to reproduce the issue ```shell import tensorflow as tf tf.raw_ops.MatrixInverse(input=tf.cast(tf.random.uniform([], dtype=tf.dtypes.float32, maxval=60000), dtype=tf.complex128),adjoint=True) ``` ### Relevant log output ```shell 2024-11-20 10:46:15.940818: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`. 2024-11-20 10:46:16.001155: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered 2024-11-20 10:46:16.076386: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered 2024-11-20 10:46:16.100080: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered 2024-11-20 10:46:16.154057: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. 
To enable the following instructions: SSE4.1 SSE4.2 AVX AVX2 AVX512F AVX512_VNNI AVX512_BF16 AVX512_FP16 AVX_VNNI AMX_TILE AMX_INT8 AMX_BF16 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags. 2024-11-20 10:46:23.889652: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 21903 MB memory: -> device: 0, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:1f:00.0, compute capability: 8.9 2024-11-20 10:46:23.891964: I tensorflow/core/common_runtime/gpu/gpu_device.cc:2021] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 71 MB memory: -> device: 1, name: NVIDIA GeForce RTX 4090, pci bus id: 0000:d4:00.0, compute capability: 8.9 2024-11-20 10:46:24.756574: F tensorflow/core/framework/tensor_shape.cc:356] Check failed: d >= 0 (0 vs. -1) Aborted (core dumped) ```
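Until the op validates its input, a user-side guard is straightforward: matrix inversion needs a tensor of rank ≥ 2 whose two trailing dimensions are square. A framework-free sketch of that check (the `check_invertible_shape` helper is invented for illustration; real code would apply the same test to `tensor.shape` before calling `tf.raw_ops.MatrixInverse`):

```python
def check_invertible_shape(shape):
    """Reject shapes MatrixInverse cannot handle instead of crashing later."""
    if len(shape) < 2:
        raise ValueError(
            f"need rank >= 2, got rank {len(shape)} shape {shape}")
    if shape[-1] != shape[-2]:
        raise ValueError(f"trailing dims must be square, got {shape}")

check_invertible_shape((4, 4))       # ok: a single 4x4 matrix
check_invertible_shape((3, 4, 4))    # ok: a batch of three 4x4 matrices
try:
    check_invertible_shape(())       # the scalar input from the report
except ValueError as e:
    print("rejected:", e)
```

The scalar case `()` is exactly what the report's `tf.random.uniform([], ...)` produces, which is why the kernel's internal shape check (`d >= 0 (0 vs. -1)`) aborts on GPU.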
stat:awaiting tensorflower,type:bug,comp:ops,2.17
medium
Critical
2,674,133,138
pytorch
Benchmark result update script is broken
### 🐛 Describe the bug ```CH_KEY_ID=abc CH_KEY_SECRET=def python benchmarks/dynamo/ci_expected_accuracy/update_expected.py b43688515a50aba071aef22329de565566d91e19``` (removed the secret keys for obvious reasons) results in ``` inductor_huggingface 1, 1: https://gha-artifacts.s3.amazonaws.com/pytorch/pytorch/11880698620/1/artifact/test-reports-test-inductor_huggingface-1-1-linux.g5.4xlarge.nvidia.gpu_33104789789.zip inductor_timm 1, 2: https://gha-artifacts.s3.amazonaws.com/pytorch/pytorch/11880698620/1/artifact/test-reports-test-inductor_timm-1-2-linux.g5.4xlarge.nvidia.gpu_33104789865.zip inductor_timm 2, 2: https://gha-artifacts.s3.amazonaws.com/pytorch/pytorch/11880698620/1/artifact/test-reports-test-inductor_timm-2-2-linux.g5.4xlarge.nvidia.gpu_33104789924.zip inductor_torchbench 1, 2: https://gha-artifacts.s3.amazonaws.com/pytorch/pytorch/11880698620/1/artifact/test-reports-test-inductor_torchbench-1-2-linux.g5.4xlarge.nvidia.gpu_33104790005.zip inductor_torchbench 2, 2: https://gha-artifacts.s3.amazonaws.com/pytorch/pytorch/11880698620/1/artifact/test-reports-test-inductor_torchbench-2-2-linux.g5.4xlarge.nvidia.gpu_33104790091.zip inductor_timm 1, 2: https://gha-artifacts.s3.amazonaws.com/pytorch/pytorch/11880698620/1/artifact/test-reports-test-inductor_timm-1-2-linux.g5.4xlarge.nvidia.gpu_33104793832.zip inductor_timm 2, 2: https://gha-artifacts.s3.amazonaws.com/pytorch/pytorch/11880698620/1/artifact/test-reports-test-inductor_timm-2-2-linux.g5.4xlarge.nvidia.gpu_33104793920.zip cpu_inductor_torchbench 1, 2: https://gha-artifacts.s3.amazonaws.com/pytorch/pytorch/11880698620/1/artifact/test-reports-test-cpu_inductor_torchbench-1-2-linux.8xlarge.amx_33104456757.zip cpu_inductor_torchbench 2, 2: https://gha-artifacts.s3.amazonaws.com/pytorch/pytorch/11880698620/1/artifact/test-reports-test-cpu_inductor_torchbench-2-2-linux.8xlarge.amx_33104456805.zip dynamic_cpu_inductor_huggingface 1, 1: 
https://gha-artifacts.s3.amazonaws.com/pytorch/pytorch/11880698620/1/artifact/test-reports-test-dynamic_cpu_inductor_huggingface-1-1-linux.8xlarge.amx_33104456837.zip dynamic_cpu_inductor_timm 1, 2: https://gha-artifacts.s3.amazonaws.com/pytorch/pytorch/11880698620/1/artifact/test-reports-test-dynamic_cpu_inductor_timm-1-2-linux.8xlarge.amx_33104456880.zip dynamic_cpu_inductor_timm 2, 2: https://gha-artifacts.s3.amazonaws.com/pytorch/pytorch/11880698620/1/artifact/test-reports-test-dynamic_cpu_inductor_timm-2-2-linux.8xlarge.amx_33104456933.zip dynamic_cpu_inductor_torchbench 1, 2: https://gha-artifacts.s3.amazonaws.com/pytorch/pytorch/11880698620/1/artifact/test-reports-test-dynamic_cpu_inductor_torchbench-1-2-linux.8xlarge.amx_33104456986.zip dynamic_cpu_inductor_torchbench 2, 2: https://gha-artifacts.s3.amazonaws.com/pytorch/pytorch/11880698620/1/artifact/test-reports-test-dynamic_cpu_inductor_torchbench-2-2-linux.8xlarge.amx_33104457050.zip Traceback (most recent call last): File "/data/users/bobren/pytorch/benchmarks/dynamo/ci_expected_accuracy/update_expected.py", line 219, in <module> urls = get_artifacts_urls(results, suites) File "/data/users/bobren/pytorch/benchmarks/dynamo/ci_expected_accuracy/update_expected.py", line 115, in get_artifacts_urls config_str, test_str = parse_job_name(r["jobName"]) ValueError: too many values to unpack (expected 2) ``` ### Versions (pytorch) [18:53] devgpu006:/data/users/bobren/pytorch python collect_env.py Collecting environment information... 
PyTorch version: 2.6.0a0+gitc3fbec7 Is debug build: False CUDA used to build PyTorch: 12.0 ROCM used to build PyTorch: N/A OS: CentOS Stream 9 (x86_64) GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2) Clang version: Could not collect CMake version: version 3.31.0 Libc version: glibc-2.34 Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-6.4.3-0_fbk14_zion_2601_gcd42476b84e9-x86_64-with-glibc2.34 Is CUDA available: True CUDA runtime version: 12.0.140 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA H100 GPU 1: NVIDIA H100 GPU 2: NVIDIA H100 GPU 3: NVIDIA H100 GPU 4: NVIDIA H100 GPU 5: NVIDIA H100 GPU 6: NVIDIA H100 GPU 7: NVIDIA H100 Nvidia driver version: 535.154.05 cuDNN version: Probably one of the following: /usr/lib64/libcudnn.so.8.8.0 /usr/lib64/libcudnn_adv_infer.so.8.8.0 /usr/lib64/libcudnn_adv_train.so.8.8.0 /usr/lib64/libcudnn_cnn_infer.so.8.8.0 /usr/lib64/libcudnn_cnn_train.so.8.8.0 /usr/lib64/libcudnn_ops_infer.so.8.8.0 /usr/lib64/libcudnn_ops_train.so.8.8.0 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 52 bits physical, 57 bits virtual Byte Order: Little Endian CPU(s): 384 On-line CPU(s) list: 0-383 Vendor ID: AuthenticAMD Model name: AMD EPYC 9654 96-Core Processor CPU family: 25 Model: 17 Thread(s) per core: 2 Core(s) per socket: 96 Socket(s): 2 Stepping: 1 Frequency boost: enabled CPU(s) scaling MHz: 71% CPU max MHz: 3707.8120 CPU min MHz: 1500.0000 BogoMIPS: 4792.82 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a 
misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid overflow_recov succor smca fsrm flush_l1d Virtualization: AMD-V L1d cache: 6 MiB (192 instances) L1i cache: 6 MiB (192 instances) L2 cache: 192 MiB (192 instances) L3 cache: 768 MiB (24 instances) NUMA node(s): 2 NUMA node0 CPU(s): 0-95,192-287 NUMA node1 CPU(s): 96-191,288-383 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Vulnerable: eIBRS with unprivileged eBPF Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] bert_pytorch==0.0.1a4 [pip3] flake8==6.1.0 [pip3] flake8-bugbear==23.3.23 [pip3] flake8-comprehensions==3.15.0 [pip3] flake8-executable==2.1.3 [pip3] flake8-logging-format==0.9.0 [pip3] flake8-pyi==23.3.1 [pip3] flake8-simplify==0.19.3 [pip3] functorch==1.14.0a0+b71aa0b [pip3] mypy==1.11.2 [pip3] 
mypy-extensions==1.0.0 [pip3] numpy==1.26.4 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-cuda-cupti-cu12==12.4.127 [pip3] nvidia-cuda-nvrtc-cu12==12.4.127 [pip3] nvidia-cuda-runtime-cu12==12.4.127 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu12==11.2.1.3 [pip3] nvidia-curand-cu12==10.3.5.147 [pip3] nvidia-cusolver-cu12==11.6.1.9 [pip3] nvidia-cusparse-cu12==12.3.1.170 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu12==12.4.127 [pip3] onnx==1.17.0 [pip3] onnxscript==0.1.0.dev20241101 [pip3] optree==0.13.0 [pip3] pytorch-labs-segment-anything-fast==0.2 [pip3] torch==2.6.0a0+gitc3fbec7 [pip3] torch_geometric==2.4.0 [pip3] torchao==0.6.1 [pip3] torchaudio==2.5.0a0+ba696ea [pip3] torchbench==0.1 [pip3] torchdata==0.10.0a0+ef62e00 [pip3] torchmetrics==1.0.3 [pip3] torchmultimodal==0.1.0b0 [pip3] torchrec==1.1.0a0+d2ed744 [pip3] torchtext==0.17.0a0+1d4ce73 [pip3] torchvision==0.20.0a0+e9a3213 [pip3] torchviz==0.0.2 [pip3] triton==3.1.0 [conda] bert-pytorch 0.0.1a4 dev_0 <develop> [conda] blas 1.0 mkl [conda] functorch 1.14.0a0+b71aa0b pypi_0 pypi [conda] magma-cuda116 2.6.1 1 pytorch [conda] magma-cuda121 2.6.1 1 pytorch [conda] mkl 2025.0.1 pypi_0 pypi [conda] mkl-include 2025.0.0 pypi_0 pypi [conda] mkl-service 2.4.0 py310h5eee18b_1 [conda] mkl-static 2025.0.0 pypi_0 pypi [conda] mkl_fft 1.3.11 py310h5eee18b_0 [conda] mkl_random 1.2.8 py310h1128e8f_0 [conda] numpy 1.26.4 py310h5f9d8c6_0 [conda] numpy-base 1.26.4 py310hb5e798b_0 [conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi [conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi [conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi [conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi [conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi [conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi [conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi [conda] nvidia-nccl-cu12 2.21.5 pypi_0 
pypi [conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi [conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi [conda] optree 0.13.0 pypi_0 pypi [conda] pytorch-labs-segment-anything-fast 0.2 pypi_0 pypi [conda] torch 2.6.0a0+gitc3fbec7 dev_0 <develop> [conda] torch-geometric 2.4.0 pypi_0 pypi [conda] torchao 0.6.1 pypi_0 pypi [conda] torchaudio 2.5.0a0+ba696ea dev_0 <develop> [conda] torchbench 0.1 dev_0 <develop> [conda] torchdata 0.10.0a0+ef62e00 pypi_0 pypi [conda] torchfix 0.4.0 pypi_0 pypi [conda] torchmetrics 1.0.3 pypi_0 pypi [conda] torchmultimodal 0.1.0b0 pypi_0 pypi [conda] torchrec 1.1.0a0+d2ed744 pypi_0 pypi [conda] torchtext 0.17.0a0+1d4ce73 dev_0 <develop> [conda] torchvision 0.20.0a0+e9a3213 dev_0 <develop> [conda] torchviz 0.0.2 pypi_0 pypi [conda] triton 3.1.0 pypi_0 pypi cc @seemethere @malfet @pytorch/pytorch-dev-infra @chauhang @penguinwu
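The `ValueError: too many values to unpack (expected 2)` in `parse_job_name` indicates a job name now contains more separator occurrences than a strict two-way unpack expects. A minimal sketch of that failure mode and a lenient variant (the separator string and function names here are illustrative, not the actual script's):

```python
# Hypothetical sketch of the "too many values to unpack (expected 2)" failure:
# a strict two-way split breaks once a job name carries an extra separator.
def parse_job_name_strict(job_name: str):
    config_str, test_str = job_name.split(" / ")  # raises ValueError on 3+ parts
    return config_str, test_str


def parse_job_name_lenient(job_name: str):
    config_str, test_str = job_name.split(" / ", 1)  # keep the remainder together
    return config_str, test_str


ok = parse_job_name_lenient("linux-job / test (inductor_timm, 1, 2)")
strict_failed = False
try:
    parse_job_name_strict("linux-job / extra / test (inductor_timm, 1, 2)")
except ValueError:
    strict_failed = True
```

Capping the split with `maxsplit=1` (or unpacking with a starred target) is the usual way to make such parsers tolerant of extra separators.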
module: ci,triaged,oncall: pt2
low
Critical
2,674,141,895
langchain
bug: `init_chat_model` doesn't work with 🤗 huggingface models
### Checked other resources - [X] I added a very descriptive title to this issue. - [X] I searched the LangChain documentation with the integrated search. - [X] I used the GitHub search to find a similar question and didn't find it. - [X] I am sure that this is a bug in LangChain rather than my code. - [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package). ### Example Code Snippet ```python from langchain.chat_models import init_chat_model llm = init_chat_model( model="microsoft/Phi-3-mini-4k-instruct", model_provider="huggingface", temperature=0, max_tokens=1024, timeout=None, max_retries=2, ) ``` ### Error Message and Stack Trace (if applicable) ```bash Traceback (most recent call last): File "/Users/sauravmaheshkar/dev/papersai/mre.py", line 4, in <module> llm = init_chat_model( File "/Users/sauravmaheshkar/dev/papersai/.venv/lib/python3.10/site-packages/langchain/chat_models/base.py", line 304, in init_chat_model return _init_chat_model_helper( File "/Users/sauravmaheshkar/dev/papersai/.venv/lib/python3.10/site-packages/langchain/chat_models/base.py", line 393, in _init_chat_model_helper return ChatHuggingFace(model_id=model, **kwargs) File "/Users/sauravmaheshkar/dev/papersai/.venv/lib/python3.10/site-packages/langchain_huggingface/chat_models/huggingface.py", line 317, in __init__ super().__init__(**kwargs) File "/Users/sauravmaheshkar/dev/papersai/.venv/lib/python3.10/site-packages/langchain_core/load/serializable.py", line 125, in __init__ super().__init__(*args, **kwargs) File "/Users/sauravmaheshkar/dev/papersai/.venv/lib/python3.10/site-packages/pydantic/main.py", line 212, in __init__ validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self) pydantic_core._pydantic_core.ValidationError: 1 validation error for ChatHuggingFace llm Field required [type=missing, input_value={'model_id': 'microsoft/P... 
None, 'max_retries': 2}, input_type=dict] For further information visit https://errors.pydantic.dev/2.9/v/missing ``` ### Description * I'm trying to use the [`init_chat_model`](https://python.langchain.com/api_reference/langchain/chat_models/langchain.chat_models.base.init_chat_model.html) function to instantiate a model from the Hugging Face Hub. ### System Info System Information ------------------ > OS: Darwin > OS Version: Darwin Kernel Version 24.0.0: Tue Sep 24 23:36:26 PDT 2024; root:xnu-11215.1.12~1/RELEASE_ARM64_T8103 > Python Version: 3.10.15 (main, Sep 9 2024, 22:43:48) [Clang 18.1.8 ] Package Information ------------------- > langchain_core: 0.3.19 > langchain: 0.3.7 > langsmith: 0.1.143 > langchain_anthropic: 0.2.3 > langchain_huggingface: 0.1.2 > langchain_text_splitters: 0.3.2 Optional packages not installed ------------------------------- > langgraph > langserve Other Dependencies ------------------ > aiohttp: 3.11.6 > anthropic: 0.36.2 > async-timeout: 4.0.3 > defusedxml: 0.7.1 > httpx: 0.27.2 > huggingface-hub: 0.26.2 > jsonpatch: 1.33 > numpy: 1.26.4 > orjson: 3.10.11 > packaging: 24.2 > pydantic: 2.9.2 > PyYAML: 6.0.2 > requests: 2.32.3 > requests-toolbelt: 1.0.0 > sentence-transformers: 3.3.1 > SQLAlchemy: 2.0.36 > tenacity: 9.0.0 > tokenizers: 0.20.3 > transformers: 4.46.3 > typing-extensions: 4.12.2
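The stack trace shows the helper calling `ChatHuggingFace(model_id=model, **kwargs)` without ever constructing the required `llm` object, so pydantic validation fails before any model is loaded. A minimal stdlib sketch of that failure mode (class and function names are hypothetical, not LangChain code):

```python
# Hypothetical model of the bug: the chat wrapper requires a pre-built
# `llm` object, but the init helper forwards only `model_id` plus
# generation kwargs, so validation fails with "llm Field required".
class ChatWrapperStub:
    def __init__(self, **kwargs):
        if "llm" not in kwargs:
            raise ValueError(
                "1 validation error for ChatWrapperStub: llm Field required"
            )
        self.llm = kwargs["llm"]


def init_chat_model_stub(model, **kwargs):
    # Mirrors _init_chat_model_helper from the traceback: no `llm` is built.
    return ChatWrapperStub(model_id=model, **kwargs)


error_message = ""
try:
    init_chat_model_stub("microsoft/Phi-3-mini-4k-instruct", temperature=0)
except ValueError as exc:
    error_message = str(exc)
```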
🤖:bug
low
Critical
2,674,226,737
react-native
Touch events unresponsive for flexible child Views in ScrollView component under React Native new architecture
### Description There is an issue with the touch events in a ScrollView when using the new architecture. Specifically, below a ScrollView, there is a View component with a minWidth of 100%. Inside this View, there are three child View components, each set with flexGrow: 1, flexShrink: 1, and flexBasis: "25%". The last two child View components do not respond to the touchstart event under the new architecture, while they work fine in the old architecture. ### Steps to reproduce 1. Create a ScrollView component provided by react-native and set horizontal to true 2. Place a View component below the ScrollView with minWidth: 100%. 3. Inside the View component, add three child View components, each with flexGrow: 1, flexShrink: 1, and flexBasis: "25%". 4. Test the touchstart event on the last two child View components. Expected Result: The last two child View components should respond to the touchstart event. Actual Result: The last two child View components do not respond to the touchstart event when using the new architecture, while this issue does not occur in the old architecture. 
### React Native Version 0.74.3、0.75.2、0.76.2 ### Affected Platforms Runtime - iOS ### Areas Fabric - The New Renderer ### Output of `npx react-native info` ```text System: OS: macOS 14.2.1 CPU: (8) x64 Apple M3 Memory: 180.14 MB / 16.00 GB Shell: version: "5.9" path: /bin/zsh Binaries: Node: version: 20.11.1 path: ~/.nvm/versions/node/v20.11.1/bin/node Yarn: version: 1.22.22 path: /usr/local/bin/yarn npm: version: 10.2.4 path: ~/.nvm/versions/node/v20.11.1/bin/npm Watchman: version: 2024.10.21.00 path: /usr/local/bin/watchman Managers: CocoaPods: version: 1.15.2 path: /usr/local/bin/pod SDKs: iOS SDK: Platforms: - DriverKit 23.4 - iOS 17.4 - macOS 14.4 - tvOS 17.4 - visionOS 1.1 - watchOS 10.4 Android SDK: Not Found IDEs: Android Studio: 2023.3 AI-233.14808.21.2331.11709847 Xcode: version: 15.3/15E204a path: /usr/bin/xcodebuild Languages: Java: version: 17.0.11 path: /usr/bin/javac Ruby: version: 2.6.10 path: /usr/bin/ruby npmPackages: "@react-native-community/cli": installed: 15.0.1 wanted: 15.0.1 react: installed: 18.3.1 wanted: 18.3.1 react-native: installed: 0.76.2 wanted: 0.76.2 react-native-macos: Not Found npmGlobalPackages: "*react-native*": Not Found Android: hermesEnabled: true newArchEnabled: true iOS: hermesEnabled: true newArchEnabled: true ``` ### Stacktrace or Logs ```text No logs ``` ### Reproducer https://github.com/yandadaFreedom/rn-original-scrollview ### Screenshots and Videos _No response_
Component: ScrollView,Needs: Triage :mag:,Newer Patch Available,Type: New Architecture
low
Minor
2,674,265,828
PowerToys
PowerToys Ruler: Protractor for Measuring Angles
### Description of the new feature / enhancement Add a protractor tool to PowerToys Ruler for precise angle measurement on screen. This feature would allow users to easily measure angles directly on images, diagrams, or plots displayed on their monitors. It can be integrated into the existing PowerToys Ruler tool, enabling the user to overlay a protractor on the screen and measure angles with high accuracy. ### Scenario when this would be used? As a scientist or professional working with visual data, such as images, graphs, or plots, users often need to measure angles to confirm the accuracy of their analysis. For example, when reviewing scientific plots in Matplotlib or analyzing images that require precise angle measurements, this tool would provide a quick and easy way to verify the angles without manually using a physical protractor. This feature would be particularly useful for verifying geometric relationships, angles in graphs, or alignment in visual data. ### Supporting information Here: Ruler and Protractor https://apps.microsoft.com/detail/9pcx0wjj9g5h?launch=true&mode=full&hl=en-us&gl=us&ocid=bingwebsearch
Needs-Triage
low
Minor
2,674,408,974
angular
docs: Unable to Connect to Port - Components in Angular Tutorial
### Which @angular/* package(s) are the source of the bug? Don't known / other ### Is this a regression? No ### Description All of the other tutorials load the preview, but the first one, the Components in Angular tutorial, continues to throw an "unable to connect to port" error. I tested this on a different machine and got the same error. ### Please provide a link to a minimal reproduction of the bug https://angular.dev/tutorials/learn-angular/1-components-in-angular ### Please provide the exception or error you saw ```true Unable to connect to port 4200 No server listening on port 4200. ``` ### Please provide the environment you discovered this bug in (run `ng version`) ```true ``` ### Anything else? _No response_
area: docs-infra
low
Critical
2,674,414,986
deno
Add support for `--permit-no-files` in `deno bench`
For parity with `deno test`. It's useful in workspaces when creating a `deno task` bench alias but a given package has no bench files.
suggestion,help wanted
low
Minor
2,674,424,610
pytorch
Cannot view a tensor with shape torch.Size([2, 1, 8, 32]) and strides (32, 512, 64, 1) as a tensor with shape (2, 256)! for transfomer MHA with permute, view
### 🐛 Describe the bug When exporting my transformer model with torch.onnx.export(... dynamo=True, ...), any sequence length >= 2 gives the error. Sequence length 1 works but does not support dynamic shapes in inference. When dynamic_shapes is off, I get the above error, and when dynamic_shapes is set and seq_len >=2, I get an error about guard lengths similar to #126127. I tried latest nightly (9 PM PST on 11/19) but didn't work. Also noticed this issue is similar to #136543 and #139508. Thanks in advance for the help! [onnx_export_2024-11-19_21-08-53-216442_conversion.md](https://github.com/user-attachments/files/17825309/onnx_export_2024-11-19_21-08-53-216442_conversion.md) **Repro** ``` import torch import torch.nn as nn import torch.nn.init as init from torchvision.models import densenet121, DenseNet121_Weights from torch import Tensor class FlattenAndContiguous(nn.Module): def __init__(self): super().__init__() def forward(self, x): return x.flatten(1, 2).contiguous() class Permute(nn.Module): def __init__(self, *dims: int): # asterisk accepts arbitary amount of arguments super().__init__() self.dims = dims def forward(self, x): return x.permute(*self.dims).contiguous() # reorders the tuple class PosEncode1D(nn.Module): def __init__(self, d_model, dropout_percent, max_len, PE_temp): super().__init__() position = torch.arange(max_len).unsqueeze(1) # creates a vector (max_len x 1), 1 is needed for matmul operations dim_t = torch.arange(0, d_model, 2) # 2i term in the denominator exponent scaling = PE_temp **(dim_t/d_model) # entire denominator pe = torch.zeros(max_len, d_model) # pe[:, 0::2] = torch.sin(position / scaling) # every second term starting from 0 (even) pe[:, 1::2] = torch.cos(position / scaling) # every second term starting from 1 (odd) self.dropout = nn.Dropout(dropout_percent) self.register_buffer("pe", pe) # stores pe tensor to be used but not updated def forward(self, x): batch, sequence_length, d_model = x.shape return self.dropout(x + 
self.pe[None, :sequence_length, :]) # None to broadcast across batch, adds element-wise [x + pe, . . .] class PosEncode2D(nn.Module): def __init__(self, d_model, dropout_percent, max_len, PE_temp): super().__init__() # 1D encoding position = torch.arange(max_len).unsqueeze(1) dim_t = torch.arange(0, d_model, 2) scaling = PE_temp **(dim_t/d_model) pe = torch.zeros(max_len, d_model) pe[:, 0::2] = torch.sin(position / scaling) pe[:, 1::2] = torch.cos(position / scaling) pe_2D = torch.zeros(max_len, max_len, d_model) # some outer product magic for i in range(d_model): pe_2D[:, :, i] = pe[:, i].unsqueeze(1) + pe[:, i].unsqueeze(0) # first unsqueeze changed from -1 self.dropout = nn.Dropout(dropout_percent) self.register_buffer("pe", pe_2D) def forward(self, x): batch, height, width, d_model = x.shape return self.dropout(x + self.pe[None, :height, :width, :]).contiguous() class Model_1(nn.Module): # from https://actamachina.com/handwritten-mathematical-expression-recognition, CNN encoder and then transformer decoder def __init__(self, vocab_size, d_model, nhead, dim_FF, dropout, num_layers): super(Model_1, self).__init__() densenet = densenet121(weights=DenseNet121_Weights.DEFAULT) self.encoder = nn.Sequential( nn.Sequential(*list(densenet.children())[:-1]), # remove the final layer, output (B, 1024, 12, 16) nn.Conv2d(1024, d_model, kernel_size=1), # 1x1 convolution, output of (B, d_model, W, H) ex. 
(1, 256, 12, 16) Permute(0, 3, 2, 1), PosEncode2D(d_model=d_model, dropout_percent=dropout, max_len=150, PE_temp=10000), # output (1, 16, 12, 256) FlattenAndContiguous() ) self.tgt_embedding = nn.Embedding(num_embeddings=vocab_size, embedding_dim=d_model, padding_idx=0) # simple lookup table, input indices self.word_PE = PosEncode1D(d_model, dropout, max_len=150, PE_temp=10000) self.transformer_decoder = nn.TransformerDecoder( nn.TransformerDecoderLayer(d_model, nhead, dim_FF, dropout, batch_first=True), # batch_first -> (batch, sequence, feature) num_layers, ) # input target and memory (last sequence of the encoder), then tgt_mask, memory_mask self.fc_out = nn.Linear(256, 217) # y = xA^T + b, distribution over all tokens in vocabulary self.d_model = Tensor(d_model).to(device) self._initialize_weights() def _initialize_weights(self): for m in self.modules(): if isinstance(m, nn.Linear): init.xavier_uniform_(m.weight) # Glorot initialization for linear layers if m.bias is not None: init.zeros_(m.bias) # Bias initialized to zeros elif isinstance(m, nn.Conv2d): init.xavier_uniform_(m.weight) # Glorot initialization for conv layers if m.bias is not None: init.zeros_(m.bias) elif isinstance(m, nn.Embedding): init.xavier_uniform_(m.weight) # Glorot initialization for embedding layers def decoder(self, features, tgt, tgt_mask): features = features.contiguous() tgt = tgt.contiguous() padding_mask = tgt.eq(0) # checks where elements of tgt are equal to zero tgt = self.tgt_embedding(tgt) * torch.sqrt(self.d_model) # tgt indices become embedding vectors and are scaled by sqrt of model size for stability tgt = self.word_PE(tgt) # adds positional encoding, size (B, seq_len, d_model) tgt = self.transformer_decoder(tgt=tgt, memory=features, tgt_mask=tgt_mask.to(torch.float32), tgt_key_padding_mask=padding_mask.to(torch.float32), tgt_is_causal=True) # type match #tgt = self.transformer_decoder(tgt=tgt, memory=features, tgt_mask=tgt_mask, tgt_is_causal=True) # type match output = 
self.fc_out(tgt) # size (B, seq_len, vocab_size return output def forward(self, src, tgt, tgt_mask): features = self.encoder(src) output = self.decoder(features, tgt, tgt_mask) return output import torch.onnx import onnxruntime as ort import numpy as np device = 'cuda' full_model = Model_1(vocab_size=217, d_model=256, nhead=8, dim_FF=1024, dropout=0, num_layers=3) full_model.to(device) #full_model.load_state_dict(torch.load('runs/Exp8E8End_Acc=0.6702922582626343.pt', map_location = device, weights_only=True)) full_model.eval() with torch.no_grad(): seq_len_tgt = torch.export.Dim("seq_len_tgt", max=146) seq_len_0 = torch.export.Dim("seq_len_0", max=146) seq_len_1 = torch.export.Dim("seq_len_1", max=146) torch.onnx.export( full_model.to(device), (torch.randn(1,3,384,512).to(device), torch.ones([1, 2], dtype=torch.long).to(device), torch.triu(torch.ones(2, 2) * float("-inf"), diagonal=1).to(device) ), "model_3.onnx", export_params = True, verbose = True, input_names = ["src", "tgt", "tgt_mask"], output_names = ["output1"], dynamo = True, #dynamic_shapes = {"src": {}, "tgt": {1: seq_len_tgt}, "tgt_mask": {0: seq_len_0, 1: seq_len_1}}, external_data = True, report = True, optimize = True, verify = True, ) ``` ### Versions Collecting environment information... 
PyTorch version: 2.6.0.dev20241112+cu124 Is debug build: False CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: Ubuntu 24.04.1 LTS (x86_64) GCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0 Clang version: Could not collect CMake version: Could not collect Libc version: glibc-2.39 Python version: 3.12.3 (main, Sep 11 2024, 14:17:37) [GCC 13.2.0] (64-bit runtime) Python platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.39 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4070 Laptop GPU Nvidia driver version: 560.94 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 48 bits virtual Byte Order: Little Endian CPU(s): 22 On-line CPU(s) list: 0-21 Vendor ID: GenuineIntel Model name: Intel(R) Core(TM) Ultra 9 185H CPU family: 6 Model: 170 Thread(s) per core: 2 Core(s) per socket: 11 Socket(s): 1 Stepping: 4 BogoMIPS: 6144.00 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities Virtualization: VT-x Hypervisor vendor: Microsoft Virtualization type: full L1d cache: 528 KiB (11 instances) L1i cache: 704 KiB (11 instances) L2 cache: 22 MiB (11 instances) L3 cache: 24 MiB 
(1 instance) Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Mitigation; Enhanced IBRS Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Versions of relevant libraries: [pip3] numpy==2.1.3 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-cuda-cupti-cu12==12.4.127 [pip3] nvidia-cuda-nvrtc-cu12==12.4.127 [pip3] nvidia-cuda-runtime-cu12==12.4.127 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu12==11.2.1.3 [pip3] nvidia-curand-cu12==10.3.5.147 [pip3] nvidia-cusolver-cu12==11.6.1.9 [pip3] nvidia-cusparse-cu12==12.3.1.170 [pip3] nvidia-cusparselt-cu12==0.6.2 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu12==12.4.127 [pip3] onnx==1.17.0 [pip3] onnxruntime==1.20.0 [pip3] onnxscript==0.1.0.dev20241112 [pip3] pytorch-triton==3.1.0+cf34004b8a [pip3] torch==2.6.0.dev20241112+cu124 [pip3] torchaudio==2.5.0.dev20241112+cu124 [pip3] torchvision==0.20.0.dev20241112+cu124 [pip3] triton==3.1.0 [conda] Could not collect cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4
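The exact shape/stride combination from the error message can be reproduced outside the exporter: permuting a contiguous tensor yields strides that cannot be flattened into (2, 256) without a copy, which is what the failing trace attempts. A minimal sketch (independent of the model above):

```python
import torch

# Reconstruct the tensors from the error message: a contiguous
# (1, 8, 2, 32) tensor permuted to (2, 1, 8, 32) has strides
# (32, 512, 64, 1), which cannot be viewed as (2, 256).
x = torch.randn(1, 8, 2, 32).permute(2, 0, 1, 3)
assert tuple(x.shape) == (2, 1, 8, 32) and x.stride() == (32, 512, 64, 1)

view_failed = False
try:
    x.view(2, 256)  # view requires compatible strides; none exist here
except RuntimeError:
    view_failed = True

y = x.reshape(2, 256)            # reshape falls back to a copy when needed
z = x.contiguous().view(2, 256)  # explicit copy, then view succeeds
```

In eager mode, `reshape` (or an explicit `.contiguous()` before `.view`, as the `Permute` module in the repro already does) sidesteps this class of error; whether the exporter's trace can insert the needed copy is the open question in this report.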
needs reproduction,module: onnx,oncall: pt2,oncall: export
low
Critical
2,674,432,023
ollama
Memory usage higher than LM Studio for similar model
### What is the issue? Memory usage differs between the two services, even though both appear to use `llama.cpp` under the hood (not sure, but I think they do). I am not using the MLX backend as it's slower on my machine. ### lm-studio (GGUF model) https://github.com/user-attachments/assets/f283a317-c65d-44f9-ba43-37d49f0cb5ec ### ollama https://github.com/user-attachments/assets/81fa67f7-19ae-46dc-aec9-2ed5a3d59b2f #### Env ```bash OLLAMA_NOHISTORY=1 OLLAMA_FLASH_ATTENTION=1 OLLAMA_KEEP_ALIVE=5m OLLAMA_MAX_QUEUE=512 OLLAMA_MAX_LOADED_MODELS=1 OLLAMA_HOST=127.0.0.1 OLLAMA_NUM_PARALLEL=1 ``` ### OS macOS ### GPU Apple ### CPU Apple ### Ollama version 0.4.x
bug
low
Major
2,674,441,699
rust
Terse parse error on `&raw expr`
<!-- Thank you for filing a bug report! 🐛 Please provide a short summary of the bug, along with any information you feel relevant to replicating the bug. --> I tried this code: ```rust mod foo { pub static A: i32 = 0; pub static B: i32 = 0; pub static C: i32 = 0; } fn main() { let _arr = [ &raw foo::A, &raw foo::B, &raw foo::C ]; } ``` I expected to see this happen: Code compiles normally Instead, this happened: A parser error. ``` error: expected one of `!`, `,`, `.`, `::`, `;`, `?`, `]`, `{`, or an operator, found `foo` --> src/main.rs:9:23 | 9 | let _arr = [ &raw foo::A, &raw foo::B, &raw foo::C ]; | ^^^ expected one of 9 possible tokens ``` ### Meta <!-- If you're using the stable version of the compiler, you should also check if the bug also exists in the beta or nightly versions. --> `rustc --version --verbose`: ``` rustc 1.82.0 (f6e511eec 2024-10-15) binary: rustc commit-hash: f6e511eec7342f59a25f7c0534f1dbea00d01b14 commit-date: 2024-10-15 host: x86_64-unknown-linux-gnu release: 1.82.0 LLVM version: 19.1.1 ```
A-diagnostics,A-parser,T-compiler,F-raw_ref_op,D-terse,A-raw-pointers
low
Critical
2,674,457,762
PowerToys
Default save path for New+ templates should not be dependent by language setting
### Microsoft PowerToys version 0.85.1 ### Utility with translation issue New+ ### 🌐 Language affected Japanese ### ❌ Actual phrase(s) `%localappdata%\Microsoft\PowerToys\NewPlus\テンプレート` ![Image](https://github.com/user-attachments/assets/76b0a800-3187-4898-a080-a157c72af61c) ### ✔️ Expected phrase(s) `%localappdata%\Microsoft\PowerToys\NewPlus\Templates` ### ℹ Why is the current translation wrong This may not be exactly a translation issue, but forgive me for reporting it as one because I wasn't sure where it would fall! Main issue: New+ template directory names are affected by the language setting. This may not be much of a problem for most languages, but Japanese uses double-byte characters, which can cause all sorts of bugs when included in a path. Also, the PowerToys configuration folder is located at `%localappdata%\Microsoft\PowerToys`, but the New+ Templates folder is the only localized folder, including its subdirectories. There is also a problem with the Japanese documentation, which lists the non-localized folder as the templates folder. Here is a link to the documentation and a screenshot of the relevant section. * [Windows 用 PowerToysNew+ | Microsoft Learn](https://learn.microsoft.com/ja-jp/windows/powertoys/newplus#templates-location) ![Image](https://github.com/user-attachments/assets/4789fd1f-aa02-4b46-bfd3-fbfc7c9189c2) I find the existence of this directory confusing to both developers and the target users of PowerToys (at least it confused me). Is this behavior intended?
Issue-Bug,Area-Localization,Needs-Triage,Issue-Translation
low
Critical
2,674,503,757
material-ui
[docs] [joyui] Container Component Missing in Documentation
### Related page https://mui.com/joy-ui/getting-started/ ### Kind of issue Missing information ### Issue description The Container component is not documented in the "Layout" section of the JoyUI documentation. Despite its absence from the documentation, [the component is present in the library and functional.](https://github.com/mui/material-ui/blob/9c98e3fb475d989ad676672a9dedd3ddbea5258e/packages/mui-joy/src/Container/index.ts) ### Context Add the Container component to the documentation under the "Layout" section. If all that is needed is to adapt content from the mui/material docs, I am happy to help with this task. You may assign it to me. **Search keywords**: JoyUI, Container component, Component API, Documentation missing
docs,on hold,package: joy-ui,support: docs-feedback
low
Minor
2,674,527,127
pytorch
[torch.compile] Mutating backward input of autograd function is not supported by `torch.compile`
### 🐛 Describe the bug This issue was raised from tracing Megatron/xlformers, where `torch.distributed.all_reduce` was called in the backward of an `autograd.Function` and was then rewritten by Dynamo into [`torch.distributed._functional_collectives.all_reduce_inplace`](https://github.com/pytorch/pytorch/blob/93e3c91679cbb36bd351d4caa8fdec2fe7388947/torch/distributed/_functional_collectives.py#L1100), which contains an in-place `copy_` op. We can use the following simple repro to reproduce this error: ``` import torch torch.set_default_device('cuda') class Foo(torch.autograd.Function): @staticmethod def forward(ctx, x): return x.clone(), x.clone() @staticmethod def backward(ctx, grad1, grad2): return grad1.copy_(grad2) @torch.compile(backend="aot_eager", fullgraph=True) def f(x): return Foo.apply(x) x = torch.randn(3, requires_grad=True) result = f(x) print(result) ``` Error stack: ``` Traceback (most recent call last): File "/data/users/ybliang/debug/debug7.py", line 19, in <module> result = f(x) File "/home/ybliang/local/pytorch/torch/_dynamo/eval_frame.py", line 556, in _fn return fn(*args, **kwargs) File "/home/ybliang/local/pytorch/torch/_dynamo/convert_frame.py", line 1404, in __call__ return self._torchdynamo_orig_callable( File "/home/ybliang/local/pytorch/torch/_dynamo/convert_frame.py", line 549, in __call__ return _compile( File "/home/ybliang/local/pytorch/torch/_dynamo/convert_frame.py", line 985, in _compile guarded_code = compile_inner(code, one_graph, hooks, transform) File "/home/ybliang/local/pytorch/torch/_dynamo/convert_frame.py", line 712, in compile_inner return _compile_inner(code, one_graph, hooks, transform) File "/home/ybliang/local/pytorch/torch/_utils_internal.py", line 95, in wrapper_function return function(*args, **kwargs) File "/home/ybliang/local/pytorch/torch/_dynamo/convert_frame.py", line 747, in _compile_inner out_code = transform_code_object(code, transform) File
"/home/ybliang/local/pytorch/torch/_dynamo/bytecode_transformation.py", line 1348, in transform_code_object transformations(instructions, code_options) File "/home/ybliang/local/pytorch/torch/_dynamo/convert_frame.py", line 233, in _fn return fn(*args, **kwargs) File "/home/ybliang/local/pytorch/torch/_dynamo/convert_frame.py", line 664, in transform tracer.run() File "/home/ybliang/local/pytorch/torch/_dynamo/symbolic_convert.py", line 2841, in run super().run() File "/home/ybliang/local/pytorch/torch/_dynamo/symbolic_convert.py", line 1032, in run while self.step(): File "/home/ybliang/local/pytorch/torch/_dynamo/symbolic_convert.py", line 944, in step self.dispatch_table[inst.opcode](self, inst) File "/home/ybliang/local/pytorch/torch/_dynamo/symbolic_convert.py", line 3021, in RETURN_VALUE self._return(inst) File "/home/ybliang/local/pytorch/torch/_dynamo/symbolic_convert.py", line 3006, in _return self.output.compile_subgraph( File "/home/ybliang/local/pytorch/torch/_dynamo/output_graph.py", line 1110, in compile_subgraph self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root) File "/home/ybliang/local/pytorch/torch/_dynamo/output_graph.py", line 1349, in compile_and_call_fx_graph compiled_fn = self.call_user_compiler(gm) File "/home/ybliang/local/pytorch/torch/_dynamo/output_graph.py", line 1399, in call_user_compiler return self._call_user_compiler(gm) File "/home/ybliang/local/pytorch/torch/_dynamo/output_graph.py", line 1448, in _call_user_compiler raise BackendCompilerFailed(self.compiler_fn, e).with_traceback( File "/home/ybliang/local/pytorch/torch/_dynamo/output_graph.py", line 1429, in _call_user_compiler compiled_fn = compiler_fn(gm, self.example_inputs()) File "/home/ybliang/local/pytorch/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__ compiled_gm = compiler_fn(gm, example_inputs) File "/home/ybliang/local/pytorch/torch/_dynamo/repro/after_dynamo.py", line 130, in __call__ compiled_gm = compiler_fn(gm, example_inputs) File 
"/home/ybliang/local/pytorch/torch/__init__.py", line 2346, in __call__ return self.compiler_fn(model_, inputs_, **self.kwargs) File "/home/ybliang/local/pytorch/torch/_dynamo/backends/debugging.py", line 153, in aot_eager return aot_autograd( File "/home/ybliang/local/pytorch/torch/_dynamo/backends/common.py", line 72, in __call__ cg = aot_module_simplified(gm, example_inputs, **self.kwargs) File "/home/ybliang/local/pytorch/torch/_functorch/aot_autograd.py", line 1103, in aot_module_simplified compiled_fn = dispatch_and_compile() File "/home/ybliang/local/pytorch/torch/_functorch/aot_autograd.py", line 1079, in dispatch_and_compile compiled_fn, _ = create_aot_dispatcher_function( File "/home/ybliang/local/pytorch/torch/_functorch/aot_autograd.py", line 527, in create_aot_dispatcher_function return _create_aot_dispatcher_function( File "/home/ybliang/local/pytorch/torch/_functorch/aot_autograd.py", line 778, in _create_aot_dispatcher_function compiled_fn, fw_metadata = compiler_fn( File "/home/ybliang/local/pytorch/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py", line 373, in aot_dispatch_autograd fx_g, joint_inputs, maybe_subclass_meta = aot_dispatch_autograd_graph( File "/home/ybliang/local/pytorch/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 318, in aot_dispatch_autograd_graph fx_g = _create_graph(joint_fn_to_trace, updated_joint_inputs, aot_config=aot_config) File "/home/ybliang/local/pytorch/torch/_functorch/_aot_autograd/dispatch_and_compile_graph.py", line 55, in _create_graph fx_g = make_fx( File "/home/ybliang/local/pytorch/torch/fx/experimental/proxy_tensor.py", line 2188, in wrapped return make_fx_tracer.trace(f, *args) File "/home/ybliang/local/pytorch/torch/fx/experimental/proxy_tensor.py", line 2126, in trace return self._trace_inner(f, *args) File "/home/ybliang/local/pytorch/torch/fx/experimental/proxy_tensor.py", line 2097, in _trace_inner t = dispatch_trace( File 
"/home/ybliang/local/pytorch/torch/_compile.py", line 32, in inner return disable_fn(*args, **kwargs) File "/home/ybliang/local/pytorch/torch/_dynamo/eval_frame.py", line 721, in _fn return fn(*args, **kwargs) File "/home/ybliang/local/pytorch/torch/fx/experimental/proxy_tensor.py", line 1137, in dispatch_trace graph = tracer.trace(root, concrete_args) # type: ignore[arg-type] File "/home/ybliang/local/pytorch/torch/_dynamo/eval_frame.py", line 721, in _fn return fn(*args, **kwargs) File "/home/ybliang/local/pytorch/torch/fx/_symbolic_trace.py", line 843, in trace (self.create_arg(fn(*args)),), File "/home/ybliang/local/pytorch/torch/fx/_symbolic_trace.py", line 700, in flatten_fn tree_out = root_fn(*tree_args) File "/home/ybliang/local/pytorch/torch/fx/experimental/proxy_tensor.py", line 1192, in wrapped out = f(*tensors) File "/home/ybliang/local/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 693, in inner_fn outs = fn(*args) File "/home/ybliang/local/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 644, in joint_helper return _functionalized_f_helper(primals, tangents) File "/home/ybliang/local/pytorch/torch/_functorch/_aot_autograd/traced_function_transforms.py", line 480, in _functionalized_f_helper assert not has_metadata_mutation( torch._dynamo.exc.BackendCompilerFailed: backend='aot_eager' raised: AssertionError: Found an input to the backward that was mutated during the backward pass. This is not supported ``` ### Versions N/A cc @H-Huang @awgu @kwen2501 @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames @zou3519 @bdhirsh @yf225
oncall: distributed,triaged,oncall: pt2,module: dynamo,module: pt2-dispatcher,dynamo-autograd-function
low
Critical
2,674,528,438
tauri
[bug] In dev, node_modules requests return 500 on Android
### Describe the bug ![image](https://github.com/user-attachments/assets/fed9c618-bace-4f15-ab71-25998c7bc94f) ![77202562b27ddf5c344b3240e89284e0](https://github.com/user-attachments/assets/dbbe788e-951f-4073-a429-072792cd9b26) On Windows, everything works normally. When running on Android in the development environment, requests for node_modules dependencies fail with a 500 error and cannot be loaded, although the dev server can be reached at localhost:1420, and the app works after packaging into an APK. ### Reproduction _No response_ ### Expected behavior _No response_ ### Full `tauri info` output ```text [✔] Environment - OS: Windows 10.0.22631 x86_64 (X64) ✔ WebView2: 127.0.2651.105 ✔ MSVC: Visual Studio Community 2022 ✔ rustc: 1.82.0 (f6e511eec 2024-10-15) ✔ cargo: 1.82.0 (8f40fc59f 2024-08-21) ✔ rustup: 1.27.1 (54dd3d00f 2024-04-24) ✔ Rust toolchain: stable-x86_64-pc-windows-msvc (default) - node: 22.2.0 - npm: 10.7.0 - bun: 1.1.30 [-] Packages - tauri 🦀: 2.1.1 - tauri-build 🦀: 2.0.3 - wry 🦀: 0.47.0 - tao 🦀: 0.30.8 - @tauri-apps/api : 2.1.1 - @tauri-apps/cli : 2.1.0 [-] Plugins - tauri-plugin-http 🦀: 2.0.3 - @tauri-apps/plugin-http : 2.0.1 - tauri-plugin-fs 🦀: 2.0.3 - @tauri-apps/plugin-fs : not installed! - tauri-plugin-os 🦀: 2.0.1 - @tauri-apps/plugin-os : 2.0.0 - tauri-plugin-shell 🦀: 2.0.2 - @tauri-apps/plugin-shell : 2.0.1 [-] App - build-type: bundle - CSP: unset - frontendDist: ../dist - devUrl: http://localhost:1420/ - framework: Vue.js - bundler: Vite ``` ### Stack trace _No response_ ### Additional context _No response_
type: bug,status: needs triage
low
Critical
2,674,569,637
godot
Encoding an empty string to base64 reports the error `Condition "ret.is_empty()" is true. Returning: ret`, but should not
### Tested versions - Reproducible in v4.4.dev2.mono.official [97ef3c837] and v4.3.stable.mono.official [77dcf97d8] ### System information Godot v4.4.dev2.mono - Windows 10.0.22631 - OpenGL 3 (Compatibility) - NVIDIA GeForce RTX 4070 Ti SUPER (NVIDIA; 32.0.15.6109) - AMD Ryzen 7 7800X3D 8-Core Processor (16 Threads) ### Issue description When encoding the empty string as base64, the resulting base64 string is empty, as is appropriate per [RFC4648](https://www.rfc-editor.org/rfc/rfc4648#section-10). However, an error is reported: ```Condition "ret.is_empty()" is true. Returning: ret```. An error should not be reported, because an empty return value is appropriate in this case. Possible location of issue: https://github.com/godotengine/godot/blob/a0cd8f187a43935d756e49bf3778f39f0964f0ac/core/core_bind.cpp#L1234-L1238 ### Steps to reproduce 1. Use the MRP below or add the following line to any method (such as a `_ready()` method) ```gdscript Marshalls.utf8_to_base64("") ``` 2. Run the project. 3. Check the godot debugger > errors tab to see the incorrectly reported error. ### Minimal reproduction project (MRP) [ret_project.zip](https://github.com/user-attachments/files/17825851/ret_project.zip)
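The expected behavior matches other base64 implementations. As a reference point only (Python's standard library, not Godot code), empty input is treated as a valid case that produces an empty result, exactly as RFC 4648 specifies:

```python
import base64

# RFC 4648, section 10, gives the test vector BASE64("") = "", so an
# empty result for empty input is valid output, not an error condition.
encoded = base64.b64encode(b"")
assert encoded == b""

# The empty string also round-trips cleanly through decoding.
assert base64.b64decode(encoded) == b""
```

By analogy, `Marshalls.utf8_to_base64("")` returning an empty string should be treated as a normal result rather than tripping the `ret.is_empty()` error check.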
bug,topic:core,usability
low
Critical